Národní úložiště šedé literatury — 6,572 records found.

Fiber-Optic Sensors and Their Use for Measurements in Nuclear Power Plants
Mikel, Břetislav
In this article, we present our work on the design of and measurements with fiber-optic sensors based on fiber Bragg gratings. We are currently developing sensors for measuring temperature, pressure, and elongation. One of the systems, developed in cooperation with the company Network Group, was successfully tested for one year at the Temelín nuclear power plant, where it measured the expansion of the containment building.
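
To give a sense of how such Bragg-grating sensors turn an optical readout into a mechanical quantity, here is a minimal sketch. The relation and the coefficient values (photoelastic coefficient, thermo-optic and thermal-expansion coefficients) are generic textbook figures for silica fibre, not the calibration of the sensors described above.

```python
# Hypothetical sketch of converting an FBG wavelength shift into strain.
# The Bragg wavelength satisfies lambda_B = 2 * n_eff * Lambda, and its relative
# shift responds to strain and temperature roughly as
#   d(lambda)/lambda = (1 - p_e) * strain + (alpha + xi) * dT.
# Coefficients below are typical values for silica fibre, not sensor calibration data.

def strain_from_shift(d_lambda_nm, lambda_b_nm=1550.0, d_temp_k=0.0,
                      p_e=0.22, alpha=0.55e-6, xi=8.6e-6):
    """Return strain (dimensionless) for a measured Bragg wavelength shift."""
    rel_shift = d_lambda_nm / lambda_b_nm
    thermal_part = (alpha + xi) * d_temp_k      # remove the temperature response
    return (rel_shift - thermal_part) / (1.0 - p_e)

if __name__ == "__main__":
    # a 1.2 pm shift at 1550 nm with no temperature change -> about 1 microstrain
    print(strain_from_shift(0.0012))
```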

New Methods for Increasing Efficiency and Speed of Functional Verification
Zachariášová, Marcela ; Dohnal, Jan (opponent) ; Steininger, Andreas (opponent) ; Kotásek, Zdeněk (supervisor)
In the development of current hardware systems, e.g. embedded systems or computer hardware, new ways of increasing their reliability are intensively investigated. One way to tackle the issue of reliability is to increase the efficiency and the speed of the verification processes that are performed in the early phases of the design cycle. This Ph.D. thesis focuses on the verification approach called functional verification. Several challenges and problems connected with the efficiency and the speed of functional verification are identified and reflected in the goals of the thesis. The first goal focuses on reducing the simulation runtime when verifying complex hardware systems, because the simulation of inherently parallel hardware systems is very slow in comparison to the speed of real hardware. An optimization technique is proposed that moves the verified system onto an FPGA acceleration board while the rest of the verification environment runs in simulation. By this single move, the simulation overhead can be significantly reduced. The second goal deals with manually written verification environments, which represent a major bottleneck in verification productivity. This manual effort is largely unnecessary, because almost all verification environments have the same structure: they utilize libraries of basic components from the standard verification methodologies and are only adjusted to the system under verification. Therefore, the second optimization technique takes the high-level specification of the system and automatically generates a comprehensive verification environment for it. The third goal elaborates how the completeness of the verification process can be achieved using intelligent automation. Completeness is measured by different coverage metrics, and verification usually ends when a satisfactory level of coverage is achieved. Therefore, the third optimization technique drives the generation of input stimuli in order to activate multiple coverage points in the verified system and to increase the overall coverage rate. The main optimization tool is a genetic algorithm, adapted for functional verification and with its parameters tuned for this domain. It runs in the background of the verification process, analyses the coverage, and dynamically changes the constraints of the stimuli generator. Constraints are represented by the probabilities with which particular values from the input domain are selected. The fourth goal discusses the reusability of verification stimuli for regression testing and how these stimuli can be further optimized in order to speed up the testing. It is quite common in verification that, until a satisfactory level of coverage is achieved, many redundant stimuli produced by pseudo-random generators are evaluated. However, when creating optimal regression suites, this redundancy is no longer needed and can be removed. At the same time, it is important to retain the same level of coverage in order to check all the key properties of the system. The fourth optimization technique is also based on a genetic algorithm, but it is not integrated into the verification process; it works offline after the verification has ended. It removes redundancy from the original suite of stimuli quickly and effectively, so the resulting runtime of the regression suite is significantly improved.
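
As a rough illustration of the coverage-driven stimulus generation described above, the sketch below evolves the probabilities used by a toy stimulus generator so that the generated stimuli hit more coverage points. The input domain, coverage model, and GA parameters are all invented for the example; the thesis's actual verification environment, metrics, and operators are not reproduced here.

```python
import random

# Toy input domain and a toy coverage model: a coverage point is hit when a
# generated stimulus (a pair of values) matches it exactly.
DOMAIN = list(range(16))
COVERAGE_POINTS = {(a, b) for a in (0, 15) for b in DOMAIN}

def generate_stimuli(probs, n=200):
    """Stimulus generator constrained by per-value selection probabilities."""
    return [tuple(random.choices(DOMAIN, weights=probs, k=2)) for _ in range(n)]

def coverage(stimuli):
    return len(set(stimuli) & COVERAGE_POINTS) / len(COVERAGE_POINTS)

def mutate(probs, rate=0.2):
    return [max(1e-3, p + random.uniform(-rate, rate)) for p in probs]

def evolve(generations=30, pop_size=10):
    population = [[random.random() for _ in DOMAIN] for _ in range(pop_size)]
    for _ in range(generations):
        # fitness = coverage reached by stimuli generated under the constraints
        scored = sorted(population, key=lambda g: coverage(generate_stimuli(g)),
                        reverse=True)
        parents = scored[: pop_size // 2]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    best = max(population, key=lambda g: coverage(generate_stimuli(g)))
    return best, coverage(generate_stimuli(best))

if __name__ == "__main__":
    constraints, cov = evolve()
    print(f"coverage reached: {cov:.2f}")
```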

Harnessing Forest Automata for Verification of Heap Manipulating Programs
Šimáček, Jiří ; Abdulla, Parosh (opponent) ; Křetínský, Mojmír (opponent) ; Vojnar, Tomáš (supervisor)
This work addresses the verification of infinite-state systems, more specifically, the verification of programs manipulating complex dynamic linked data structures. Many different approaches have emerged to date, but none of them provides a sufficiently robust solution that would succeed in all possible scenarios appearing in practice. Therefore, in this work, we propose a new approach which aims at improving the current state of the art in several dimensions. Our approach is based on using tree automata, but it is also partially inspired by some ideas taken from methods based on separation logic. Apart from that, we also present multiple advancements in the implementation of various tree automata operations, crucial for our verification method to succeed in practice. Namely, we provide an optimised algorithm for computing simulations over labelled transition systems, which then translates into a more efficient computation of simulations over tree automata. We also give a new algorithm for checking inclusion over tree automata, and we provide an experimental evaluation demonstrating the efficiency of the proposed algorithms.
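
The simulation computation mentioned above can be illustrated by a naive fixpoint algorithm over a labelled transition system; this is not the optimised algorithm of the thesis, just the textbook definition turned into code, with the example system invented for the demonstration.

```python
# Hypothetical sketch: naive fixpoint computation of the simulation preorder
# over a labelled transition system. (p, q) in the result means q simulates p:
# every labelled step of p can be matched by q into a pair still in the relation.

def simulation_preorder(states, transitions):
    """transitions: dict mapping (state, label) -> set of successor states."""
    labels = {label for (_, label) in transitions}
    sim = {(p, q) for p in states for q in states}   # start from the full relation
    changed = True
    while changed:
        changed = False
        for (p, q) in list(sim):
            for a in labels:
                p_succ = transitions.get((p, a), set())
                q_succ = transitions.get((q, a), set())
                # every p --a--> p' must be matched by some q --a--> q' with (p', q') in sim
                if any(all((ps, qs) not in sim for qs in q_succ) for ps in p_succ):
                    sim.discard((p, q))
                    changed = True
                    break
    return sim

if __name__ == "__main__":
    states = {"p", "q"}
    transitions = {("p", "a"): {"p"}, ("q", "a"): {"q"}, ("q", "b"): {"q"}}
    print(("p", "q") in simulation_preorder(states, transitions))  # True: q simulates p
```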

Formal Systems Based on Automata and Grammars
Čermák, Martin ; Rybička, Jiří (opponent) ; Šaloun, Petr (opponent) ; Meduna, Alexandr (supervisor)
The present thesis continues the study of grammar and automata systems. First of all, it deals with regularly controlled CD grammar systems with phrase-structure grammars as components. Three new derivation restrictions are placed on these systems, and their effect on the generative power of the systems is investigated. Thereafter, the thesis defines two automata counterparts of canonical multi-generative nonterminal- and rule-synchronized grammar systems, which generate vectors of strings, and it shows that the investigated systems are equivalent. Furthermore, the thesis generalizes the definitions of these systems and establishes a fundamental hierarchy of n-languages (sets of n-tuples of strings). In relation to these systems, automaton-grammar translating systems based upon a finite automaton and a context-free grammar are introduced and investigated as a mechanism for direct translation. Finally, the automata systems introduced in the thesis are used as the core of a parsing method based upon n-path-restricted tree-controlled grammars.
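
A toy sketch of the rule-synchronized idea: two context-free components rewrite in lockstep, so the pair of generated strings stays coordinated. The grammar and the 2-language below are invented for illustration and are far simpler than the systems studied in the thesis.

```python
# Hypothetical illustration (not the thesis's formal definition): a pair of
# context-free grammars rewriting synchronously, generating the 2-language
# {(a^n b^n, c^n) | n >= 1}. Each "rule vector" rewrites both components at once.

import random

RULE_VECTORS = [
    (("S", "aSb"), ("T", "cT")),   # grow both components together
    (("S", "ab"),  ("T", "c")),    # terminate both components together
]

def generate_pair(p_grow=0.6):
    left, right = "S", "T"
    while "S" in left:
        (l_from, l_to), (r_from, r_to) = (RULE_VECTORS[0]
                                          if random.random() < p_grow
                                          else RULE_VECTORS[1])
        left = left.replace(l_from, l_to, 1)
        right = right.replace(r_from, r_to, 1)
    return left, right

if __name__ == "__main__":
    print(generate_pair())   # e.g. ('aabb', 'cc')
```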

Acceleration Methods for Evolutionary Design of Digital Circuits
Vašíček, Zdeněk ; Miller, Julian (opponent) ; Zelinka, Ivan (opponent) ; Sekanina, Lukáš (supervisor)
Although many examples showing the merits of evolutionary design over conventional design techniques in the field of digital circuit design have been published, evolutionary approaches are usually hard to apply in practice due to various so-called scalability problems. A scalability problem refers to a situation in which the evolutionary algorithm is able to provide a solution to small problem instances only. For example, the scalability of evaluation of a candidate digital circuit represents a serious issue because the time needed to evaluate a candidate solution grows exponentially with the increasing number of primary inputs. In this thesis, the scalability problem of evaluating a candidate digital circuit is addressed, and three different approaches to overcoming it are proposed. Our goal is to demonstrate that the evolutionary design approach can produce interesting and human-competitive solutions when the problem of scalability is reduced and thus a sufficient number of generations can be utilized. In order to increase the performance of the evolutionary design of image filters, a domain-specific FPGA-based accelerator has been designed. The evolutionary design of image filters is a kind of regression problem which requires evaluating a large number of training vectors as well as generations in order to find a satisfactory solution. By means of the proposed FPGA accelerator, very efficient nonlinear image filters have been discovered. One of the discovered implementations of an impulse noise filter, consisting of four evolutionarily designed filters, is protected by a Czech utility model. A different approach has been introduced in the area of logic synthesis: a method combining formal verification techniques with evolutionary design was proposed that allows a significant acceleration of the fitness evaluation procedure. The proposed system can produce complex and simultaneously innovative designs, thus overcoming the major bottleneck of evolutionary synthesis at the gate level. The proposed method has been evaluated using a set of benchmark circuits and compared with conventional academic as well as commercial synthesis tools. In comparison with the conventional synthesis tools, the average improvement in terms of the number of gates provided by our system is approximately 25%. Finally, the problem of multiple constant multiplier design, which belongs to the class of problems where a candidate solution can be perfectly evaluated in a short time, has been investigated. We have demonstrated that there exists a class of circuits that can be evaluated efficiently if domain knowledge is utilized (in this case, the linearity of the components).
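
The evaluation bottleneck described above can be made concrete with a small sketch: a gate-level candidate is scored against the full truth table, whose size is 2^n for n primary inputs, and the rows are packed into integers so that one logical operation evaluates many rows at once. This is a software illustration only, not the thesis's FPGA accelerator or its formal-verification-based fitness evaluation.

```python
# Hypothetical sketch of candidate-circuit evaluation: the full truth table has
# 2^n rows, so one evaluation grows exponentially with the number of inputs n.
# Packing the rows into integer bit-columns lets one bitwise operation evaluate
# all rows of a gate at once.

from itertools import product

def truth_table_inputs(n):
    """Columns of the full truth table for n inputs, packed into Python ints."""
    cols = [0] * n
    for row, bits in enumerate(product((0, 1), repeat=n)):
        for i, b in enumerate(bits):
            cols[i] |= b << row
    return cols, (1 << (1 << n)) - 1      # columns and a mask covering all 2^n rows

def evaluate(candidate, n, reference):
    """candidate: list of (gate, in1, in2) indices building on inputs 0..n-1."""
    cols, mask = truth_table_inputs(n)
    signals = list(cols)
    for gate, a, b in candidate:
        x, y = signals[a], signals[b]
        signals.append({"AND": x & y, "OR": x | y, "XOR": x ^ y,
                        "NAND": ~(x & y) & mask}[gate])
    # fitness = number of truth-table rows where the last gate matches the reference
    return bin(~(signals[-1] ^ reference) & mask).count("1")

if __name__ == "__main__":
    # reference function: 2-input XOR, candidate built as (a OR b) AND (a NAND b)
    cols, _ = truth_table_inputs(2)
    reference = cols[0] ^ cols[1]
    candidate = [("OR", 0, 1), ("NAND", 0, 1), ("AND", 2, 3)]
    print(evaluate(candidate, 2, reference), "of", 1 << 2, "rows correct")
```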

On-line Data Analysis Based on Visual Codebooks
Beran, Vítězslav ; Honec, Jozef (opponent) ; Sojka, Eduard (opponent) ; Zemčík, Pavel (supervisor)
This work introduces a new adaptable method for on-line, real-time video searching based on visual codebooks. The new method addresses high computational efficiency and retrieval performance when used on on-line data. It originates in the procedures utilized by static visual codebook techniques; these standard procedures are modified so that they can adapt to changing data. The procedures that improve the method's adaptability are dynamic inverse document frequency, an adaptable visual codebook, and a flowing inverted index. The developed adaptable method was evaluated, and the presented results show how it outperforms static approaches on video searching tasks. The method is based on the introduced flowing window concept, which defines how data is selected both for system adaptation and for processing. Together with the concept, the mathematical background is defined for finding the best configuration when applying the concept to a new method. The practical application of the adaptable method lies particularly in video processing systems where significant changes of the data domain, unknown in advance, are expected. The method is applicable in embedded systems monitoring and analyzing broadcast TV signals on-line in real time.
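
One possible reading of the flowing inverted index and dynamic inverse document frequency is sketched below: the index covers only a sliding window of recent frames and the IDF weights are recomputed from that window alone, so retrieval adapts to drifting on-line data. The class and parameter names are invented; the thesis's actual data structures are not reproduced.

```python
# Hypothetical sketch: an inverted index over visual-word sets restricted to a
# sliding window of the most recent frames, with IDF computed from that window.

import math
from collections import defaultdict, deque

class FlowingIndex:
    def __init__(self, window_size=1000):
        self.window = deque(maxlen=window_size)     # (frame_id, set of visual words)
        self.postings = defaultdict(set)            # visual word -> frame ids in window

    def add_frame(self, frame_id, visual_words):
        if len(self.window) == self.window.maxlen:  # evict the oldest frame's postings
            old_id, old_words = self.window[0]
            for w in old_words:
                self.postings[w].discard(old_id)
        self.window.append((frame_id, set(visual_words)))
        for w in visual_words:
            self.postings[w].add(frame_id)

    def idf(self, word):                            # dynamic IDF over the window only
        df = len(self.postings[word])
        return math.log(len(self.window) / df) if df else 0.0

    def query(self, visual_words, top_k=5):
        scores = defaultdict(float)
        for w in set(visual_words):
            for frame_id in self.postings[w]:
                scores[frame_id] += self.idf(w)
        return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]
```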

Human Action Recognition in Video
Řezníček, Ivo ; Baláž, Teodor (opponent) ; Sojka, Eduard (opponent) ; Zemčík, Pavel (supervisor)
This thesis focuses on the improvement of human action recognition systems. It reviews the state of the art in the field of action recognition from video. It describes techniques of digital image and video capture, and explains computer representations of image and video. The thesis further describes how local feature vectors and local space-time feature vectors are used, and how captured data is prepared for further analysis, such as classification; this is typically done with video segments of arbitrarily varying length. The key contribution of this work explores the hypothesis that the analysis of different types of actions requires different segment lengths to achieve optimal quality of recognition. An algorithm to find these optimal lengths is proposed, implemented, and tested. Using this algorithm, the hypothesis was experimentally confirmed. It was also shown that by finding the optimal length, the prediction and classification power of current algorithms is improved. Supporting experiments, results, and proposed exploitations of these findings are presented.
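
The search for an optimal segment length can be sketched as a simple sweep: cut the videos into segments of each candidate length, classify the segments, and keep the length with the best cross-validated accuracy. The helper functions below are placeholders, not the thesis's feature extraction or classifier.

```python
# Hypothetical sketch: sweep candidate segment lengths and keep the one that
# maximises cross-validated recognition accuracy. extract_features and the
# classify_cv callback are stand-ins for a real descriptor and classifier.

def best_segment_length(videos, labels, candidate_lengths, classify_cv):
    """classify_cv(features, labels) -> cross-validated accuracy in [0, 1]."""
    best_len, best_acc = None, -1.0
    for length in candidate_lengths:
        # cut every video into fixed-length segments and describe each segment
        features, seg_labels = [], []
        for video, label in zip(videos, labels):
            for start in range(0, max(1, len(video) - length + 1), length):
                features.append(extract_features(video[start:start + length]))
                seg_labels.append(label)
        acc = classify_cv(features, seg_labels)
        if acc > best_acc:
            best_len, best_acc = length, acc
    return best_len, best_acc

def extract_features(segment):
    # placeholder: a real system would compute local space-time descriptors here
    return [sum(frame) / max(1, len(frame)) for frame in segment[:1]]
```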

Intrusion Detection in Network Traffic
Homoliak, Ivan ; Čeleda, Pavel (opponent) ; Ochoa, Martín (opponent) ; Hanáček, Petr (supervisor)
The thesis deals with anomaly-based network intrusion detection that utilizes machine learning approaches. First, state-of-the-art datasets intended for the evaluation of intrusion detection systems are described, as well as related works employing statistical analysis and machine learning techniques for network intrusion detection. In the next part, an original feature set, Advanced Security Network Metrics (ASNM), is presented, which is part of a conceptual automated network intrusion detection system, AIPS. Then, tunneling obfuscation techniques as well as non-payload-based ones are proposed to be applied as modifications of network attack execution. Experiments reveal that the obfuscations are able to evade attack detection by a supervised classifier using ASNM features, and that including them in the training process of the classifier can strengthen its detection performance. The work also presents an alternative view on the non-payload-based obfuscation techniques, and demonstrates how they may be employed as a training-data-driven approximation of a network traffic normalizer.
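
The training-side use of obfuscations can be sketched as straightforward data augmentation: each known attack is also fed to the classifier in its obfuscated variants. The feature extractor and obfuscation functions below are stand-ins for ASNM and the thesis's obfuscation tooling, and the classifier choice is arbitrary.

```python
# Hypothetical sketch: augment the training set with obfuscated variants of
# known attacks so a supervised classifier also recognises attacks whose
# execution was modified. `extract` and each `obfuscate` are user-supplied
# placeholders, not the ASNM implementation.

from sklearn.ensemble import RandomForestClassifier

def train_with_obfuscations(benign_flows, attack_flows, obfuscations, extract):
    X, y = [], []
    for flow in benign_flows:
        X.append(extract(flow))
        y.append(0)
    for flow in attack_flows:
        X.append(extract(flow))
        y.append(1)
        for obfuscate in obfuscations:   # e.g. tunneling, spreading packets in time
            X.append(extract(obfuscate(flow)))
            y.append(1)
    clf = RandomForestClassifier(n_estimators=100)
    clf.fit(X, y)
    return clf
```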

Optimization of Algorithms and Data Structures for Regular Expression Matching Using FPGA Technology
Kaštil, Jan ; Plíva, Zdeněk (opponent) ; Vlček, Karel (opponent) ; Kotásek, Zdeněk (supervisor)
This thesis deals with fast regular expression matching using FPGAs. Regular expression matching in high-speed computer networks is a computationally intensive operation used mostly in the fields of computer network security and network traffic monitoring. Current solutions do not achieve the throughput required by modern networks while satisfying all requirements placed on the matching unit. Innovative hardware architectures implemented in FPGAs or ASICs achieve the highest throughput. This thesis describes two new architectures suitable for FPGA and ASIC implementation. The basic idea of these architectures is to use a perfect hash function to implement the transition function of a deterministic finite automaton. An architecture that allows the user to introduce a small probability of error into the matching process in order to reduce the memory requirements of the matching unit is also introduced. The thesis contains an analysis of the effect of these errors on the overall reliability of the system and compares it to the reliability of the currently used approach. Properties of regular expressions used in the analysis of traffic in modern computer networks were measured; the analysis implies that most of the used regular expressions are suitable for implementation by the proposed architectures. To guarantee high throughput of the matching unit, a new algorithm for alphabet transformation is proposed. The algorithm transforms the automaton so that it accepts several input characters per transition. The main advantage of the proposed algorithm over currently used solutions is that it places no limitation on the number of characters accepted at once. The implemented architectures were compared with the current state-of-the-art algorithm, and a 200 MB memory reduction was achieved.
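
A minimal software sketch of the two ideas above: the DFA transition function is stored as a table keyed by (state, symbol) — a perfect hash function plays this role in the hardware design, while an ordinary dictionary stands in for it here — and an alphabet transformation precomputes k-character transitions so the automaton consumes several characters per step. The toy automaton is invented for the example.

```python
# Hypothetical sketch: DFA transitions in a (state, symbol)-keyed table, plus a
# k-step transformation so each lookup consumes k input characters at once.

from itertools import product

def k_step_transitions(delta, states, alphabet, k):
    """Precompute transitions that consume k characters per step."""
    delta_k = {}
    for s in states:
        for chars in product(alphabet, repeat=k):
            cur = s
            for c in chars:
                cur = delta.get((cur, c))
                if cur is None:
                    break
            if cur is not None:
                delta_k[(s, "".join(chars))] = cur
    return delta_k

def match(delta_k, start, accepting, text, k):
    state = start
    for i in range(0, len(text) - len(text) % k, k):   # trailing partial block ignored
        state = delta_k.get((state, text[i:i + k]))
        if state is None:
            return False
    return state in accepting

if __name__ == "__main__":
    # toy DFA for the regular expression (ab)* over the alphabet {a, b}
    delta = {(0, "a"): 1, (1, "b"): 0}
    delta2 = k_step_transitions(delta, {0, 1}, "ab", 2)
    print(match(delta2, 0, {0}, "abab", 2))   # True
    print(match(delta2, 0, {0}, "abba", 2))   # False
```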

Parallel Computing Architectures Based on Numerical Integration
Kraus, Michal ; Kubátová, Hana (opponent) ; Kollár, Ján (opponent) ; Kunovský, Jiří (supervisor)
This thesis deals with the simulation of continuous systems described by a system of differential equations or by a block diagram. Numerical solution of differential equations and the use of simulation software packages (Matlab, Maple, TKSL) are quite common. The Taylor series method is used for solving the differential equations. It has been shown that the method achieves high accuracy and speed and offers the possibility of parallel execution, and thus further acceleration of the computation. The main part of the thesis describes the design and implementation of a specialized parallel system performing numerical integration in several variants, and their comparison.
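
A minimal sketch of the Taylor series method for a linear system y' = A·y: each higher-order term is obtained recurrently from the previous one and terms are summed until they fall below a tolerance. This scalar/NumPy version only illustrates the recurrence; the parallel hardware variants compared in the thesis are not modelled here.

```python
# Hypothetical sketch of one Taylor-series integration step for y' = A*y:
# DY_0 = y(t), DY_k = (h/k) * A * DY_{k-1}, and y(t+h) = sum of the DY_k terms.

import numpy as np

def taylor_step(A, y, h, tol=1e-12, max_order=40):
    term = y.copy()                    # DY_0 = y(t)
    y_next = y.copy()
    for k in range(1, max_order + 1):
        term = (h / k) * (A @ term)    # DY_k = (h/k) * A * DY_{k-1}
        y_next += term
        if np.linalg.norm(term) < tol: # stop once terms are negligible
            break
    return y_next

if __name__ == "__main__":
    # y' = -y, y(0) = 1: one step of size h should approximate exp(-h)
    A = np.array([[-1.0]])
    y = np.array([1.0])
    print(taylor_step(A, y, 0.1)[0], np.exp(-0.1))
```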