
Strain Engineering of the Electronic Structure of 2D Materials
del Corro, Elena ; Peña-Alvarez, M. ; Morales-García, A. ; Bouša, Milan ; Řáhová, Jaroslava ; Kavan, Ladislav ; Kalbáč, Martin ; Frank, Otakar
Research on graphene has attracted much attention since its first successful preparation in 2004. Graphene possesses many unique properties, such as extreme stiffness and strength, high electron mobility, ballistic transport even at room temperature, and superior thermal conductivity. The enthusiasm for graphene was swiftly followed by a keen interest in other two-dimensional materials such as transition metal dichalcogenides. As has been predicted, and in part proven experimentally, the electronic properties of these materials can be modified by various means. The most common ones include covalent or non-covalent chemistry; electrochemical, gate or atomic doping; or quantum confinement. None of these methods has proven universal enough in terms of device characteristics or scalability. Another known approach is mechanical strain/stress, but experiments in that direction are scarce, in spite of its high promise.
The primary challenge lies in understanding the mechanical properties of 2D materials and in the ability to quantify the lattice deformation. Several techniques can then be used to apply strain to the specimens and thus induce changes in their electronic structure. We review their basic concepts and some of the examples documented so far, experimentally and/or theoretically.
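
As a hedged illustration (ours, not from the abstract) of how Raman spectroscopy is commonly used to quantify lattice deformation: the hydrostatic component of in-plane strain shifts a phonon mode m in proportion to its Grüneisen parameter, with ω_m^0 the unstrained frequency.

    % Illustrative relation in the standard convention (not quoted from the abstract):
    % hydrostatic shift of phonon mode m under in-plane strain (eps_xx, eps_yy),
    % governed by the Grueneisen parameter gamma_m.
    \Delta\omega_m \;=\; -\,\omega_m^{0}\,\gamma_m\,\left(\varepsilon_{xx} + \varepsilon_{yy}\right)

Measuring the shift of, e.g., the graphene G or 2D band thus gives direct access to the strain sum ε_xx + ε_yy.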

Modelling, Parameter Estimation, Optimisation and Control of Transport and Reaction Processes in Bioreactors
Štumbauer, Václav
With the significant potential of microalgae as a major biofuel source of the future, considerable scientific attention is being drawn to the fields of biotechnology and bioprocess engineering. Nevertheless, current photobioreactor (PBR) design methods are still too empirical. With this work I would like to promote the idea of designing a production system, such as a PBR, completely in silico, thus allowing for in silico optimization and optimal control determination. The thesis deals with PBR modeling and simulation and addresses two crucial issues in current state-of-the-art PBR modeling. The first issue is the deficiency of the currently available models: the incorrect or insufficient treatment of the transport process modeling, the reaction modeling, or the coupling between the two. A correct treatment of both the transport and the reaction phenomena is proposed in the thesis, in the form of a unified modeling framework consisting of three interconnected parts: (i) the state system, (ii) the fluid-dynamic model and (iii) optimal control determination. The proposed model structure allows prediction of PBR performance with respect to the modelled PBR size, geometry, operating conditions or a particular microalgae strain. The proposed unified modeling approach is applied to the case of the Couette-Taylor photobioreactor (CTBR), where it is used to solve the optimal control problem. A PBR represents a complex multiscale problem, and especially for production-scale systems the associated computational costs are paramount; this is the second crucial issue addressed in the thesis. With respect to computational complexity, the fluid dynamics simulation is the most costly part of the PBR simulation. Modeling the fluid flow inside a production-scale PBR with classical CFD (Computational Fluid Dynamics) methods leads to an enormous grid size. This usually requires a parallel implementation of the solver, but the parallelization of the classical methods raises another relevant issue: the amount of data the individual nodes must exchange with each other. The thesis addresses these performance issues by proposing and evaluating alternative approaches to fluid flow simulation that are more suitable for parallel implementation than the classical methods because of their local character: namely the Lattice Boltzmann Method (LBM) for fluid flow, which is the primary focus of the thesis in this regard, and alternatively also a discrete random walk based method (DRW). As the outcome of the thesis I have developed and validated a new Lagrangian general modeling approach to the transport and reaction processes in PBRs: a framework based on the Lattice Boltzmann Method (LBM) and the model of the Photosynthetic Factory (PSF) that correctly captures the transport and reaction processes and their coupling. Further, I have implemented a software prototype based on the proposed modeling approach and validated it on the case of the Couette-Taylor PBR. I have also demonstrated that the modeling approach has significant potential from the computational cost point of view by implementing and validating the software prototype on the parallel CUDA (Compute Unified Device Architecture) architecture. The current parallel implementation is approximately 20 times faster than the unparallelized one and thus significantly shortens the iteration cycle of the PBR design process.
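
To make the locality argument concrete, below is a minimal D2Q9 lattice Boltzmann (BGK) time step in NumPy. It is a sketch of the general method, not the thesis' solver; the grid size, relaxation time and initial perturbation are illustrative.

    # Minimal D2Q9 lattice Boltzmann (BGK) sketch -- illustrates the local
    # update character that makes LBM attractive for parallel (e.g. CUDA)
    # implementation. Not the thesis' actual solver.
    import numpy as np

    nx, ny, tau = 64, 64, 0.6                     # grid size, relaxation time
    # D2Q9 lattice velocities and weights
    c = np.array([(0,0),(1,0),(0,1),(-1,0),(0,-1),(1,1),(-1,1),(-1,-1),(1,-1)])
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)

    f = np.ones((9, nx, ny)) * w[:, None, None]   # start from rest (rho = 1)
    f[0, nx//2, ny//2] *= 1.05                    # small density perturbation

    def step(f):
        rho = f.sum(axis=0)                       # local density
        ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
        uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
        usq = ux**2 + uy**2
        feq = np.empty_like(f)                    # equilibrium distribution
        for i in range(9):
            cu = c[i, 0]*ux + c[i, 1]*uy
            feq[i] = w[i] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)
        f += -(f - feq) / tau                     # BGK collision (purely local)
        for i in range(9):                        # streaming to nearest neighbours
            f[i] = np.roll(f[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))
        return f

    for _ in range(100):
        f = step(f)

Both the collision and the streaming step touch only a node and its immediate neighbours, which is why the method maps well onto massively parallel hardware with little inter-node communication.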

The Influence of Snow Cover on Runoff During Rainfall Events
Juras, Roman ; Máca, Petr (supervisor) ; Ladislav, Ladislav (opponent)
In winter, when snow cover lies on a catchment, rainfall events are becoming increasingly frequent. Rain falling on snow (rain-on-snow, ROS) often causes floods and wet snow avalanches. Predicting the impact of ROS depends above all on a better understanding of the mechanisms of runoff generation and of the composition of the runoff from the snowpack. A combination of rainfall simulation on the snowpack and the use of tracers was tested as a suitable tool for this purpose. In total, 18 experiments were carried out on snowpacks with different initial properties under the mountain conditions of Central and Western Europe. Brilliant Blue FCF dye was used to determine the character of the flow; it makes it possible to visualise preferential flow paths as well as to identify the interface between two layers with different hydraulic properties. The proportions of the individual components of the water at the outflow were determined using hydrograph separation, which provides good results with acceptable uncertainty. For technical reasons, it was not possible to use both methods simultaneously within a single experiment, although this would have further extended the knowledge of rainwater flow dynamics in the snowpack. The amount of meltwater was calculated using the energy balance equation. This equation is fairly accurate but demanding in terms of inputs; melt was therefore calculated for only one experiment. The speed of runoff generation grows primarily with rainfall intensity. Initial snowpack properties, such as density and liquid water content, influence the speed of runoff generation only secondarily. On the other hand, at the same rainfall intensity, an immature, low-density snowpack showed a faster hydrological response than a mature, denser snowpack. The runoff volume depends primarily on the initial saturation. A mature snowpack with higher initial saturation generated higher total runoff, to which rainwater contributed at most 50 %. In contrast, rainwater passed through an immature snowpack relatively quickly and roughly 80 % of it propagated into the runoff. The Richards equation within the SNOWPACK model was used to predict runoff during ROS. The model was modified by splitting the snow matrix to better simulate preferential flow. This approach improved the results compared to the classical approach, which considers matrix flow only.
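
For reference, the two-component hydrograph separation mentioned above typically rests on a tracer mass balance; the standard textbook form is sketched below (our notation, not quoted from the thesis), with Q the discharge and C the tracer concentration of the total outflow (t), rainwater (r) and meltwater (m).

    % Two-component tracer mass balance and the resulting rainwater fraction:
    Q_t = Q_r + Q_m, \qquad Q_t C_t = Q_r C_r + Q_m C_m
    \quad\Longrightarrow\quad
    \frac{Q_r}{Q_t} \;=\; \frac{C_t - C_m}{C_r - C_m}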

Design of Experiments for Non-stationary Production Processes
Jadrná, Monika ; Macák, Tomáš (supervisor)
The dissertation focuses on the service sector and on mass production, specifically the optimisation of a travel agency's product portfolio and the optimisation of ammunition production. The literature review explains the terminology of decision making and describes the methods used for decision support; it provides an up-to-date overview of the topic and defines the basic concepts. In the service sector, the theoretical background of the research focuses on the choice of suitable input variables; in manufacturing, on the choice of a particular material and suitable equipment for the given production. The literature review and the theoretical background together form the basis for the practical part of the thesis. In the practical part, a specific company operating in each sector is selected. In the service sector, the product portfolio is optimised using fuzzy logic and fuzzy sets, so that a company operating in the field can assert itself on today's highly competitive market, as sketched below. In manufacturing, the optimal composition of the product is set so that its required properties are achieved. The main aim of the dissertation is to propose a methodological approach for controlling selected business processes with non-stationary behaviour in time. In the practical implementation, the aim is to verify the functionality of the proposed methodological approach both in the service sector and in mass production.
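
As a hedged sketch of how fuzzy sets can rank portfolio items: the criteria, membership functions and all numbers below are our illustrative assumptions, not the dissertation's model.

    # Illustrative fuzzy-set scoring of portfolio items (assumptions ours).
    def triangular(x, a, b, c):
        """Triangular membership function on [a, c] with peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def score(demand, margin):
        # Degrees of membership in the fuzzy sets "high demand", "good margin"
        mu_demand = triangular(demand, 0.2, 0.7, 1.0)
        mu_margin = triangular(margin, 0.1, 0.5, 0.9)
        return min(mu_demand, mu_margin)      # conjunction via the minimum t-norm

    # Rank hypothetical tours by their fuzzy score, best first
    portfolio = {"tour A": (0.8, 0.4), "tour B": (0.3, 0.6), "tour C": (0.9, 0.7)}
    ranked = sorted(portfolio, key=lambda k: score(*portfolio[k]), reverse=True)
    print(ranked)                             # -> ['tour A', 'tour C', 'tour B']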

New Methods for Increasing Efficiency and Speed of Functional Verification
Zachariášová, Marcela ; Dohnal, Jan (opponent) ; Steininger, Andreas (opponent) ; Kotásek, Zdeněk (supervisor)
In the development of current hardware systems, e.g. embedded systems or computer hardware, new ways to increase their reliability are intensively investigated. One way to tackle the issue of reliability is to increase the efficiency and the speed of the verification processes performed in the early phases of the design cycle. This Ph.D. thesis focuses on the verification approach called functional verification. Several challenges and problems connected with the efficiency and the speed of functional verification are identified and reflected in the goals of the thesis. The first goal focuses on reducing the simulation runtime when verifying complex hardware systems, since the simulation of inherently parallel hardware systems is very slow compared to the speed of real hardware. An optimization technique is proposed that moves the verified system onto an FPGA acceleration board while the rest of the verification environment runs in simulation; by this single move, the simulation overhead can be significantly reduced. The second goal deals with manually written verification environments, which represent a huge bottleneck in verification productivity. This manual effort is largely unnecessary, because almost all verification environments have the same structure: they utilize libraries of basic components from the standard verification methodologies and are only adjusted to the system being verified. Therefore, the second optimization technique takes the high-level specification of the system and automatically generates a comprehensive verification environment for it. The third goal elaborates how the completeness of the verification process can be achieved using intelligent automation. Completeness is measured by different coverage metrics, and verification usually ends when a satisfying level of coverage is achieved. Therefore, the third optimization technique drives the generation of input stimuli in order to activate multiple coverage points in the verified system and to enhance the overall coverage rate. The main optimization tool is a genetic algorithm, adapted for functional verification purposes, with its parameters well-tuned for this domain. It runs in the background of the verification process, analyses the coverage and dynamically changes the constraints of the stimuli generator; the constraints are represented by the probabilities with which particular values from the input domain are selected. The fourth goal discusses the re-usability of verification stimuli for regression testing and how these stimuli can be further optimized to speed up the testing. It is quite common in verification that, until a satisfying level of coverage is achieved, many redundant stimuli are evaluated, as they are produced by pseudo-random generators. When creating optimal regression suites, however, this redundancy is no longer needed and can be removed, while retaining the same level of coverage in order to check all the key properties of the system. The fourth optimization technique is also based on the genetic algorithm, but it is not integrated into the verification process; it works offline after the verification has ended. It removes the redundancy from the original suite of stimuli quickly and effectively, so that the resulting verification runtime of the regression suite is significantly improved.
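
A minimal sketch of the third technique's core loop: a genetic algorithm evolving the probability constraints of a stimuli generator against coverage feedback. The fitness function measure_coverage is a hypothetical stand-in for running the verified system, and the population sizes and rates are illustrative, not the thesis' tuned parameters.

    # Sketch: GA evolving stimuli-generator probabilities toward higher coverage.
    import random

    def measure_coverage(probs):              # hypothetical feedback function;
        return 1.0 - sum((p - 0.5) ** 2 for p in probs)   # real one runs the DUT

    def mutate(probs, sigma=0.1):
        return [min(1.0, max(0.0, p + random.gauss(0, sigma))) for p in probs]

    def crossover(a, b):
        cut = random.randrange(1, len(a))     # single-point crossover
        return a[:cut] + b[cut:]

    population = [[random.random() for _ in range(8)] for _ in range(20)]
    for generation in range(50):
        population.sort(key=measure_coverage, reverse=True)
        parents = population[:10]             # elitist selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(10)]
        population = parents + children
    best = max(population, key=measure_coverage)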

Packet Classification Algorithms
Puš, Viktor ; Lhotka, Ladislav (opponent) ; Dvořák, Václav (supervisor)
This thesis deals with packet classification in computer networks. Classification is the key task in many networking devices, most notably packet filters (firewalls); the thesis therefore concerns the area of computer security. It is focused on high-speed networks with bandwidths of 100 Gb/s and beyond. General-purpose processors cannot be used in such cases because their performance is not sufficient; instead, specialized hardware is used, mainly ASICs and FPGAs. Many packet classification algorithms designed for hardware implementation have been presented, yet these approaches are not ready for very high-speed networks. This thesis addresses the design of new high-speed packet classification algorithms targeted at implementation in dedicated hardware. An algorithm is proposed that decomposes the problem into several easier sub-problems. The first sub-problem is the longest prefix match (LPM) operation, which is also used in IP packet routing. As LPM algorithms with sufficient speed have already been published, they can be used in our context. The following sub-problem is mapping the prefixes to rule numbers. This is where the thesis brings innovation, by using a specifically constructed hash function that allows the mapping to be done in constant time and requires only one memory with a narrow data bus. The algorithm throughput can be determined analytically and is independent of the number of rules and of the network traffic characteristics. With the use of available parts, a throughput of 266 million packets per second can be achieved. Three additional algorithms (PFCA, PCCA, MSPCCA) presented in this thesis are designed to lower the memory requirements of the first one without compromising its speed. The second algorithm lowers the memory size by 11 % to 96 %, depending on the rule set. Its disadvantage of low stability is removed by the third algorithm, which reduces the memory requirements by 31 % to 84 % compared to the first one. The fourth algorithm combines the third one with the older approach and, thanks to the use of several techniques, lowers the memory requirements by 73 % to 99 %.
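
A toy sketch of the described decomposition: an LPM step followed by a constant-time prefix-to-rule mapping. A plain dict stands in for the thesis' specially constructed hash function, and the linear-scan LPM below is purely illustrative; the fast published LPM structures the thesis relies on are not reproduced here.

    # Sketch: LPM, then map the matched prefix to a rule number.
    prefixes = {"10.0.0.0/8": 1, "10.1.0.0/16": 2, "0.0.0.0/0": 0}

    def ip_to_int(ip):
        a, b, c, d = (int(x) for x in ip.split("."))
        return (a << 24) | (b << 16) | (c << 8) | d

    def lpm(ip, table):
        """Naive longest prefix match; real designs use dedicated structures."""
        addr, best, best_len = ip_to_int(ip), None, -1
        for pfx, rule in table.items():
            net, plen = pfx.split("/")
            plen = int(plen)
            mask = 0 if plen == 0 else (~0 << (32 - plen)) & 0xFFFFFFFF
            if (addr & mask) == (ip_to_int(net) & mask) and plen > best_len:
                best, best_len = rule, plen   # most specific prefix wins
        return best

    print(lpm("10.1.2.3", prefixes))          # -> 2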

Harnessing Forest Automata for Verification of Heap Manipulating Programs
Šimáček, Jiří ; Abdulla, Parosh (opponent) ; Křetínský, Mojmír (opponent) ; Vojnar, Tomáš (supervisor)
This work addresses verification of infinite-state systems, more specifically, verification of programs manipulating complex dynamic linked data structures. Many different approaches have emerged to date, but none of them provides a sufficiently robust solution that would succeed in all possible scenarios appearing in practice. Therefore, in this work, we propose a new approach which aims at improving the current state of the art in several dimensions. Our approach is based on using tree automata, but it is also partially inspired by some ideas taken from methods based on separation logic. Apart from that, we also present multiple advancements in the implementation of various tree automata operations that are crucial for our verification method to succeed in practice. Namely, we provide an optimised algorithm for computing simulations over labelled transition systems, which then translates into more efficient computation of simulations over tree automata. We also give a new algorithm for checking inclusion over tree automata, and we provide an experimental evaluation demonstrating…
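
For intuition, a naive fixpoint computation of the simulation preorder over a labelled transition system is sketched below. The thesis contributes an optimised version of this computation (and its lifting to tree automata); this is only the textbook baseline.

    # Naive simulation-preorder computation over a labelled transition system.
    def simulation(states, labels, trans):
        """trans maps (state, label) -> set of successor states."""
        # Start from the full relation and refine: (p, q) survives iff every
        # move of p can be matched by some move of q to a related successor.
        sim = {(p, q) for p in states for q in states}
        changed = True
        while changed:
            changed = False
            for (p, q) in list(sim):
                for a in labels:
                    for p2 in trans.get((p, a), set()):
                        if not any((p2, q2) in sim for q2 in trans.get((q, a), set())):
                            sim.discard((p, q))
                            changed = True
                            break
                    else:
                        continue
                    break
        return sim

    # Example: q can simulate p (both can do 'a'), but not vice versa.
    print(simulation({"p", "q"}, {"a", "b"},
                     {("p", "a"): {"p"}, ("q", "a"): {"q"}, ("q", "b"): {"q"}}))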

Acceleration of Object Detection Using Classifiers
Juránek, Roman ; Kälviäinen, Heikki (opponent) ; Sojka, Eduard (opponent) ; Zemčík, Pavel (supervisor)
Detection of objects in computer vision is a complex task. One of the most popular and well explored approaches is the use of statistical classifiers and scanning windows. In this approach, classifiers learned by the AdaBoost algorithm (or some modification of it) are often used, as they achieve low error rates and high detection rates and are suitable for real-time detection. An object detection run-time that uses such classifiers can be implemented by various methods, and the properties of the underlying architecture can be exploited to speed up the detection. For the purpose of acceleration, graphics hardware, multi-core architectures, SIMD instructions or other means can be used; the detection is often implemented on programmable hardware. The contribution of this thesis is an optimization technique which enhances object detection performance with respect to a user-defined cost function. The optimization balances the computations of previously learned classifiers between two or more run-time implementations in order to minimize the cost function. The optimization method is verified on a basic example: the division of a classifier into a pre-processing unit implemented in an FPGA and a post-processing unit on a standard PC.
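
A hedged sketch of the idea: a boosted soft cascade is evaluated with early rejection, and the split point k between a cheap pre-processing unit and a costlier post-processing unit is chosen to minimise an expected per-window cost. The costs and the survival-rate model below are our illustrative assumptions, not values from the thesis.

    # Sketch: soft-cascade evaluation and choosing a split point by cost.
    def evaluate(window, weak_classifiers, thresholds):
        """Soft cascade: reject as soon as the running sum drops below threshold."""
        total = 0.0
        for h, theta in zip(weak_classifiers, thresholds):
            total += h(window)
            if total < theta:
                return False                  # early rejection
        return True

    def split_cost(k, p_pass, cost_pre, cost_post, n_stages):
        # Expected per-window cost: k cheap stages always run; the remaining
        # stages run only for the fraction p_pass(k) of windows that survive.
        return k * cost_pre + p_pass(k) * (n_stages - k) * cost_post

    n = 100
    p_pass = lambda k: 0.5 ** (k / 10)        # assumed window survival rate
    best_k = min(range(n + 1), key=lambda k: split_cost(k, p_pass, 0.1, 1.0, n))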

Analysis and Testing of Concurrent Programs
Letko, Zdeněk ; Lourenco, Joao (opponent) ; Sekanina, Lukáš (opponent) ; Vojnar, Tomáš (supervisor)
The thesis starts by providing a taxonomy of concurrency-related errors and an overview of their dynamic detection. Then, concurrency coverage metrics, which measure how well the synchronisation and concurrency-related behaviour of the tested programs has been examined, are proposed together with a methodology for deriving such metrics. The proposed metrics are especially suitable for saturation-based and search-based testing. Next, novel coverage-based noise injection techniques that maximise the number of interleavings witnessed during testing are proposed. A comparison of various existing noise injection heuristics and the newly proposed heuristics on a set of benchmarks is provided, showing that the proposed techniques outperform the existing ones in some cases. Finally, a novel use of stochastic optimisation algorithms in the area of concurrency testing is proposed: they are applied to find suitable combinations of values of the many parameters of the tests and of the noise injection techniques. The approach has been implemented as a prototype and tested on a set of benchmark programs, showing its potential to significantly improve the testing process.
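
A minimal sketch of the noise injection principle (ours; the thesis' heuristics are coverage-based and considerably more refined): random delays inserted before synchronisation points perturb the scheduler and expose rare interleavings.

    # Sketch: inject random delays before synchronisation points.
    import random
    import threading
    import time

    counter = 0
    lock = threading.Lock()

    def noise(p=0.3, max_delay=0.005):
        """With probability p, sleep briefly to perturb the thread schedule."""
        if random.random() < p:
            time.sleep(random.uniform(0, max_delay))

    def worker():
        global counter
        for _ in range(1000):
            noise()                           # noise injected at a sync point
            with lock:
                counter += 1

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(counter)                            # 4000 if synchronisation is correct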

Relational Verification of Programs with Integer Data
Konečný, Filip ; Bouajjani, Ahmed (opponent) ; Jančar, Petr (opponent) ; Vojnar, Tomáš (supervisor)
This work presents novel methods for verification of reachability and termination properties of programs that manipulate unbounded integer data. Most of these methods are based on acceleration techniques which compute transitive closures of program loops. We first present an algorithm that accelerates several classes of integer relations and show that the new method performs up to four orders of magnitude better than the previous ones. On the theoretical side, our framework provides a common solution to the acceleration problem by proving that the considered classes of relations are periodic. Subsequently, we introduce a semi-algorithmic reachability analysis technique that tracks relations between variables of integer programs and applies the proposed acceleration algorithm to compute summaries of procedures in a modular way. Next, we present an alternative approach to reachability analysis that integrates predicate abstraction with our acceleration techniques to increase the likelihood of convergence of the algorithm. We evaluate these algorithms and show that they can handle a number of complex integer programs where previous approaches failed. Finally, we study the termination problem for several classes of program loops and show that it is decidable. Moreover, for some of these classes, we design a polynomial time algorithm that computes the exact set of program configurations from which nonterminating runs exist. We further integrate this algorithm into a semi-algorithmic method that analyzes termination of integer programs, and show that the resulting technique can verify termination properties of several non-trivial integer programs.
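
A toy example (ours, not from the thesis) of the periodicity that acceleration exploits: for the simple integer relation R below, the k-th power has a uniform closed form, so the transitive closure is definable in Presburger arithmetic.

    % Periodic relation and its accelerated transitive closure:
    R(x, x') \equiv x' = x + 2, \qquad
    R^{k}(x, x') \equiv x' = x + 2k, \qquad
    R^{+}(x, x') \equiv \exists k \geq 1.\; x' = x + 2k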