Národní úložiště šedé literatury

Methods of Automated Model Transformation in IS Analysis
Tůma, Jakub ; Merunka, Vojtěch (supervisor) ; Toman, Prokop (opponent)
This doctoral thesis contributes to the holistic development of information systems in the area of analytical models of information systems (IS). It deals with models used in information systems analysis and with methods for their transformation, focusing on two specific models: Business Process Modeling Notation (BPMN) and Business Object Relational Modeling (BORM). BPMN has been developed since 2000; BORM is older, developed since 1993. The general goal of this work was to extend the holistic development of information systems; the specific goal was to connect the BPMN and BORM models. The work was inspired by the theory of finite automata. The state-of-the-art chapter surveys existing approaches to model transformation. The analytical part describes the individual transformations in mathematical notation, which serves as the input for the implementation part. The implementation part contains the transformation algorithm, the procedure by which it was derived, and its subsequent validation on case studies. The discussion compares the developed transformation method with other approaches. The achievement of the goals is documented by an automated transformation calculus. The contribution is the automated connection of the BPMN and BORM models by means of the transformation method. The result is a method for the automated transformation of a BPMN model into a BORM model by an algorithm; the transformation is realized through a Mealy machine.
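
To make the Mealy-machine view of the transformation concrete, here is a minimal sketch in Python: a finite-state transducer consumes a stream of BPMN elements and emits BORM constructs. The element names, the transition table and the one-output-per-input mapping are illustrative assumptions, not the transformation calculus defined in the thesis.

    # Minimal sketch of a Mealy-machine-driven model transformation.
    # The BPMN element names, the BORM outputs and the transition table
    # are illustrative assumptions; the thesis defines its own calculus.

    class MealyTransformer:
        def __init__(self, transitions, start):
            # transitions: (state, input_symbol) -> (next_state, output_symbol)
            self.transitions = transitions
            self.state = start

        def step(self, symbol):
            self.state, output = self.transitions[(self.state, symbol)]
            return output

        def run(self, symbols):
            return [self.step(s) for s in symbols]

    # Hypothetical mapping of a linear BPMN process to BORM concepts.
    transitions = {
        ("idle", "start_event"): ("in_process", "participant_state"),
        ("in_process", "task"): ("in_process", "activity"),
        ("in_process", "end_event"): ("idle", "final_state"),
    }

    borm = MealyTransformer(transitions, "idle").run(
        ["start_event", "task", "task", "end_event"])
    print(borm)  # ['participant_state', 'activity', 'activity', 'final_state']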

Community Care in Psychiatry
BÍNOVÁ, Romana
This diploma thesis deals with community care in psychiatry and attempts to capture the role the nurse plays in providing it. Community psychiatric care is a very broad area of intermediate assistance to the patient, touching essentially every area of the patient's life. Although it has not yet reached adequate development in the Czech Republic, its benefit to patients is already important and its significance keeps growing. The theoretical part, after a short introduction to community care, covers its history and principles as well as its connection with nursing care. Attention is then devoted to the individual areas of community care relevant to the mentally ill, and the role of the nurse in this field is described. A discussion of attitudes towards the mentally ill, the problem of stigmatization, and the organization of psychiatric care follows. Finally, the theoretical part describes psychiatric disorders that may be encountered in community care and how community care can benefit patients with these disorders. The aim of the practical part was to determine psychiatric nurses' awareness of community care and the importance they attach to community care in psychiatry, and also to map the areas of community care in which nurses can work. In a quantitative survey, nurses' responses to the examined hypotheses were processed statistically. The hypotheses asked whether nurses with more than ten years of practice more often believe that community care is more beneficial to patients than hospitalization; whether nurses with more than secondary education have better awareness of the provision of community services; whether nurses consider housing-support services the most common form of community care in the Czech Republic; and whether nurses over thirty are more aware of the importance of nurses' position in community care. None of these hypotheses was confirmed. The hypothesis that nurses obtain information about community care more from the internet and literature than from seminars was confirmed. The survey involved psychiatric nurses working in psychiatric hospitals in the South Moravian Region and the Vysočina Region. The practical part also includes an analysis of community services in these regions; a total of 13 civic associations and community centres were mapped in the two regions.

New Methods for Increasing Efficiency and Speed of Functional Verification
Zachariášová, Marcela ; Dohnal, Jan (opponent) ; Steininger, Andreas (opponent) ; Kotásek, Zdeněk (supervisor)
In the development of current hardware systems, e.g. embedded systems or computer hardware, new ways to increase their reliability are intensively investigated. One way to tackle the issue of reliability is to increase the efficiency and the speed of verification processes that are performed in the early phases of the design cycle. In this Ph.D. thesis, the attention is focused on the verification approach called functional verification. Several challenges and problems connected with the efficiency and the speed of functional verification are identified and reflected in the goals of the thesis. The first goal focuses on the reduction of the simulation runtime when verifying complex hardware systems, because the simulation of inherently parallel hardware systems is very slow in comparison to the speed of real hardware. An optimization technique is proposed that moves the verified system onto an FPGA acceleration board while the rest of the verification environment runs in simulation; this single move can significantly reduce the simulation overhead. The second goal deals with manually written verification environments, which represent a huge bottleneck in verification productivity. This manual work is largely unnecessary, because almost all verification environments have the same structure: they utilize libraries of basic components from the standard verification methodologies and are only adjusted to the system being verified. Therefore, the second optimization technique takes the high-level specification of the system and automatically generates a comprehensive verification environment for it. The third goal elaborates how the completeness of the verification process can be achieved using intelligent automation. Completeness is measured by different coverage metrics, and verification usually ends when a satisfying level of coverage is achieved. Therefore, the third optimization technique drives the generation of input stimuli in order to activate multiple coverage points in the verified system and to enhance the overall coverage rate. The main optimization tool is a genetic algorithm, adapted for functional verification purposes, with its parameters tuned for this domain. It runs in the background of the verification process, analyses the coverage, and dynamically changes the constraints of the stimuli generator; the constraints are represented by the probabilities with which particular values from the input domain are selected. The fourth goal discusses the reusability of verification stimuli for regression testing and how these stimuli can be further optimized in order to speed up the testing. It is quite common in verification that, until a satisfying level of coverage is achieved, many redundant stimuli are evaluated, as they are produced by pseudo-random generators. However, when creating optimal regression suites, this redundancy is no longer needed and can be removed, while the same level of coverage must be retained in order to check all the key properties of the system. The fourth optimization technique is also based on the genetic algorithm, but it is not integrated into the verification process; it works offline after the verification has ended. It removes the redundancy from the original suite of stimuli quickly and effectively, so the resulting verification runtime of the regression suite is significantly improved.
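
As a rough illustration of the coverage-driven technique (the third goal), the sketch below evolves the probability weights of a stimuli generator with a genetic algorithm. The coverage function, population size and mutation scheme are placeholder assumptions; the thesis tunes these for real functional-verification coverage, for which this toy fitness only stands in.

    import random

    # Minimal sketch: evolve probability weights for a stimuli generator
    # so that a (placeholder) coverage function improves. The fitness,
    # population size and mutation scheme are illustrative assumptions.

    def coverage(weights):
        # Placeholder fitness: pretend coverage peaks for a balanced mix.
        return 1.0 - sum((w - 1.0 / len(weights)) ** 2 for w in weights)

    def mutate(weights, sigma=0.05):
        noisy = [max(1e-6, w + random.gauss(0, sigma)) for w in weights]
        total = sum(noisy)
        return [w / total for w in noisy]   # keep a valid distribution

    population = [mutate([random.random() for _ in range(4)], 0)
                  for _ in range(20)]       # sigma=0 just normalizes
    for generation in range(50):
        population.sort(key=coverage, reverse=True)
        parents = population[:5]            # elitist selection
        population = parents + [mutate(random.choice(parents))
                                for _ in range(15)]

    best = max(population, key=coverage)
    print(best, coverage(best))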

Packet Classification Algorithms
Puš, Viktor ; Lhotka, Ladislav (opponent) ; Dvořák, Václav (supervisor)
This thesis deals with packet classification in computer networks. Classification is the key task in many networking devices, most notably packet filters (firewalls), so the thesis concerns the area of computer security. It is focused on high-speed networks with bandwidths of 100 Gb/s and beyond, where general-purpose processors cannot be used because their performance is not sufficient; specialized hardware, mainly ASICs and FPGAs, is used instead. Many packet classification algorithms designed for hardware implementation have been presented, yet these approaches are not ready for very high-speed networks. This thesis therefore addresses the design of new high-speed packet classification algorithms targeted at implementation in dedicated hardware. An algorithm is proposed that decomposes the problem into several easier sub-problems. The first sub-problem is the longest prefix match (LPM) operation, which is also used in IP packet routing; since sufficiently fast LPM algorithms have already been published, they can be used in our context. The following sub-problem is mapping the matched prefixes to rule numbers. This is where the thesis brings innovation, by using a specifically constructed hash function. This hash function allows the mapping to be done in constant time and requires only one memory with a narrow data bus. The algorithm throughput can be determined analytically and is independent of the number of rules and of the network traffic characteristics; with available parts, a throughput of 266 million packets per second can be achieved. Three additional algorithms (PFCA, PCCA, MSPCCA) presented in this thesis are designed to lower the memory requirements of the first one without compromising the speed. The second algorithm lowers the memory size by 11 % to 96 %, depending on the rule set. Its disadvantage of low stability is removed by the third algorithm, which reduces the memory requirements by 31 % to 84 % compared to the first one. The fourth algorithm combines the third one with an older approach and, thanks to several techniques, lowers the memory requirements by 73 % to 99 %.
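
A toy sketch of the decomposition described above: a longest prefix match per field, followed by a hash lookup from the matched prefixes to a rule number. The rules and bit strings are made up, and a plain Python dict stands in for the thesis's specifically constructed hash function.

    # Toy sketch of the decomposition: per-field longest prefix match (LPM),
    # then a hash lookup from the matched prefix pair to a rule number.
    # A plain dict stands in for the specially constructed hash function;
    # the rules and addresses are illustrative assumptions.

    def lpm(prefixes, addr_bits):
        # Return the longest prefix in `prefixes` that `addr_bits` starts with.
        best = ""
        for p in prefixes:
            if addr_bits.startswith(p) and len(p) > len(best):
                best = p
        return best

    src_prefixes = {"10", "1010"}          # bit strings, e.g. parts of IPs
    dst_prefixes = {"11", "1100"}
    rule_table = {("1010", "11"): 1,       # (src prefix, dst prefix) -> rule
                  ("10", "1100"): 2}

    def classify(src_bits, dst_bits):
        key = (lpm(src_prefixes, src_bits), lpm(dst_prefixes, dst_bits))
        return rule_table.get(key)         # None = no matching rule

    print(classify("101011", "110111"))    # -> 1
    print(classify("100111", "110000"))    # -> 2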

Acceleration of Object Detection Using Classifiers
Juránek, Roman ; Kälviäinen, Heikki (opponent) ; Sojka, Eduard (opponent) ; Zemčík, Pavel (supervisor)
Detection of objects in computer vision is a complex task. One of the most popular and well-explored approaches is the use of statistical classifiers and scanning windows. In this approach, classifiers learned by the AdaBoost algorithm (or some modification of it) are often used, as they achieve low error rates and high detection rates and are suitable for real-time detection. An object detection run-time that uses such classifiers can be implemented by various methods, and properties of the underlying architecture can be exploited to speed up the detection. For the purpose of acceleration, graphics hardware, multi-core architectures, SIMD instructions or other means can be used; the detection is also often implemented on programmable hardware. The contribution of this thesis is an optimization technique that enhances object detection performance with respect to a user-defined cost function. The optimization balances the computation of a previously learned classifier between two or more run-time implementations in order to minimize the cost function. The optimization method is verified on a basic example: division of a classifier into a pre-processing unit implemented in an FPGA and a post-processing unit on a standard PC.
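
The balancing idea can be sketched as choosing the split point of a chain of classifier stages between two run-times so that a user-defined cost is minimized. The per-stage costs, the resource budget and the max-of-both-sides pipeline cost below are illustrative assumptions, not the cost model from the thesis.

    # Sketch: split a chain of weak-classifier stages between two run-times
    # (e.g. FPGA pre-processing, CPU post-processing) to minimize a
    # user-defined cost. Per-stage costs and the budget are assumptions.

    def best_split(fpga_cost, cpu_cost, fpga_budget):
        # Stages [0, k) run on the FPGA, stages [k, n) on the CPU.
        n = len(fpga_cost)
        candidates = []
        for k in range(n + 1):
            f = sum(fpga_cost[:k])
            c = sum(cpu_cost[k:])
            if f <= fpga_budget:               # hardware resource constraint
                candidates.append((max(f, c), k))  # pipeline: slower side dominates
        return min(candidates)                 # (cost, split index)

    fpga_cost = [1, 1, 2, 3, 5]    # e.g. time per stage in hardware
    cpu_cost = [4, 4, 6, 8, 12]    # e.g. microseconds per stage on a PC
    print(best_split(fpga_cost, cpu_cost, fpga_budget=6))  # -> (20, 3)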

Relational Verification of Programs with Integer Data
Konečný, Filip ; Bouajjani, Ahmed (opponent) ; Jančar, Petr (opponent) ; Vojnar, Tomáš (supervisor)
This work presents novel methods for verification of reachability and termination properties of programs that manipulate unbounded integer data. Most of these methods are based on acceleration techniques which compute transitive closures of program loops. We first present an algorithm that accelerates several classes of integer relations and show that the new method performs up to four orders of magnitude better than the previous ones. On the theoretical side, our framework provides a common solution to the acceleration problem by proving that the considered classes of relations are periodic. Subsequently, we introduce a semi-algorithmic reachability analysis technique that tracks relations between variables of integer programs and applies the proposed acceleration algorithm to compute summaries of procedures in a modular way. Next, we present an alternative approach to reachability analysis that integrates predicate abstraction with our acceleration techniques to increase the likelihood of convergence of the algorithm. We evaluate these algorithms and show that they can handle a number of complex integer programs where previous approaches failed. Finally, we study the termination problem for several classes of program loops and show that it is decidable. Moreover, for some of these classes, we design a polynomial time algorithm that computes the exact set of program configurations from which nonterminating runs exist. We further integrate this algorithm into a semi-algorithmic method that analyzes termination of integer programs, and show that the resulting technique can verify termination properties of several non-trivial integer programs.
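
As a one-line illustration of acceleration on a periodic relation (our example, not one taken from the thesis): for the integer relation R(x, x') defined by x' = x + 2, the k-fold composition and the transitive closure have closed forms,

    R^k(x, x') \;\equiv\; x' = x + 2k, \qquad
    R^{+}(x, x') \;\equiv\; \exists k \ge 1.\; x' = x + 2k
                 \;\equiv\; x' > x \;\wedge\; x' - x \equiv 0 \pmod{2},

so the effect of arbitrarily many loop iterations is captured by a single arithmetic formula instead of an unbounded unrolling.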

Acceleration Methods for Evolutionary Design of Digital Circuits
Vašíček, Zdeněk ; Miller, Julian (opponent) ; Zelinka, Ivan (opponent) ; Sekanina, Lukáš (supervisor)
Although many examples showing the merits of evolutionary design over conventional design techniques in the field of digital circuit design have been published, evolutionary approaches are usually hardly applicable in practice due to various so-called scalability problems. A scalability problem refers to a situation in which the evolutionary algorithm is able to provide a solution to small problem instances only. For example, the scalability of the evaluation of a candidate digital circuit represents a serious issue, because the time needed to evaluate a candidate solution grows exponentially with the number of primary inputs. This thesis addresses the scalability problem of the evaluation of a candidate digital circuit, and three different approaches to overcoming it are proposed. Our goal is to demonstrate that the evolutionary design approach can produce interesting and human-competitive solutions when the problem of scalability is reduced and a sufficient number of generations can thus be utilized. In order to increase the performance of the evolutionary design of image filters, a domain-specific FPGA-based accelerator has been designed. The evolutionary design of image filters is a kind of regression problem which requires evaluating a large number of training vectors as well as generations in order to find a satisfactory solution. By means of the proposed FPGA accelerator, very efficient nonlinear image filters have been discovered; one of the discovered implementations of an impulse noise filter, consisting of four evolutionarily designed filters, is protected by a Czech utility model. A different approach has been introduced in the area of logic synthesis: a method combining formal verification techniques with evolutionary design that allows a significant acceleration of the fitness evaluation procedure. The proposed system can produce complex and simultaneously innovative designs, thus overcoming the major bottleneck of evolutionary synthesis at the gate level. The method has been evaluated on a set of benchmark circuits and compared with conventional academic as well as commercial synthesis tools; in comparison with the conventional synthesis tools, the average improvement in the number of gates provided by our system is approximately 25%. Finally, the problem of multiple constant multiplier design, which belongs to the class of problems where a candidate solution can be perfectly evaluated in a short time, has been investigated. We have demonstrated that there exists a class of circuits that can be evaluated efficiently if domain knowledge (in this case, the linearity of the components) is utilized.
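
To make the evaluation bottleneck concrete, the sketch below evaluates a CGP-like candidate circuit against a target function over all input vectors. The genotype encoding is a simplified assumption; the point it illustrates is that exhaustive evaluation costs 2^n vector evaluations for n primary inputs.

    from itertools import product

    # Minimal sketch of evaluating a CGP-like candidate circuit exhaustively.
    # The genotype encoding is a simplified assumption; the point is that
    # exhaustive evaluation costs 2**n_inputs vector evaluations.

    GATES = {"AND": lambda a, b: a & b,
             "OR":  lambda a, b: a | b,
             "XOR": lambda a, b: a ^ b}

    def evaluate(genotype, inputs):
        # genotype: list of (gate, src_a, src_b); sources index earlier signals.
        signals = list(inputs)
        for gate, a, b in genotype:
            signals.append(GATES[gate](signals[a], signals[b]))
        return signals[-1]                    # single-output circuit

    def fitness(genotype, n_inputs, target):
        # Hamming agreement with the target over ALL input vectors:
        # the loop runs 2**n_inputs times - the scalability bottleneck.
        return sum(evaluate(genotype, bits) == target(*bits)
                   for bits in product((0, 1), repeat=n_inputs))

    xor_candidate = [("OR", 0, 1), ("AND", 0, 1), ("XOR", 2, 3)]
    print(fitness(xor_candidate, 2, lambda a, b: a ^ b))  # 4 = perfect match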

Evolutionary Approach to Synthesis and Optimization of Ordinary and Polymorphic Circuits
Gajda, Zbyšek ; Schmidt, Jan (opponent) ; Zelinka, Ivan (opponent) ; Sekanina, Lukáš (supervisor)
This thesis deals with the evolutionary design and optimization of ordinary and polymorphic circuits. New extensions of Cartesian Genetic Programming (CGP) that reduce the computational time and yield more compact circuits are proposed and evaluated. The second part of the thesis is focused on new methods for the synthesis of polymorphic circuits. The proposed methods, based on polymorphic binary decision diagrams and polymorphic multiplexing, extend ordinary circuit representations to include polymorphic gates. In order to reduce the number of gates in circuits synthesized by the proposed methods, an evolutionary optimization based on CGP is implemented and evaluated. The implementations of polymorphic circuits optimized by CGP represent the best known solutions when the number of gates is considered as the target criterion.
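
A minimal sketch of what a polymorphic gate is and of the multiplexing view mentioned above. The NAND/NOR pairing is a common example from the literature, used here as an assumption; the environment (e.g. Vdd or temperature) that selects the function is modelled as an explicit mode argument.

    # Sketch: a polymorphic gate is one physical gate realizing two logic
    # functions, selected by the environment. Polymorphic multiplexing views
    # it as a multiplexer over two ordinary gates with the mode as select.
    # The NAND/NOR pairing is an assumed, commonly cited example.

    def poly_gate(a, b, mode):
        # Behaves as NAND in mode 0 and as NOR in mode 1.
        return int(not (a and b)) if mode == 0 else int(not (a or b))

    def mux_equivalent(a, b, mode):
        # Ordinary-circuit equivalent: two gates plus a mode-driven mux.
        nand, nor = int(not (a and b)), int(not (a or b))
        return nor if mode else nand

    assert all(poly_gate(a, b, m) == mux_equivalent(a, b, m)
               for a in (0, 1) for b in (0, 1) for m in (0, 1))
    print("polymorphic gate and mux view agree on all inputs")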

Stability and convergence of numerical computations
Sehnalová, Pavla ; Dalík, Josef (opponent) ; Horová, Ivana (opponent) ; Kunovský, Jiří (supervisor)
The aim of this thesis is to analyze the stability and convergence of fundamental numerical methods for solving ordinary differential equations. These include one-step methods such as the classical Euler method, Runge-Kutta methods and the less well known but fast and accurate Taylor series method. We also consider the generalization to multistep methods, such as Adams methods, and their implementation as predictor-corrector pairs, as well as the generalization to multiderivative methods, such as the Obreshkov method. In predictor-corrector pairs there is always a choice of the so-called mode of the method; in this thesis both PEC and PECE modes are considered. The main goal and new contribution of the thesis is a special fourth-order method consisting of a two-step predictor followed by a one-step corrector, each using second-derivative formulae. The mathematical background of the historical development of the Nordsieck representation, the algorithm for choosing a variable stepsize, and error estimation are discussed. The presented approach adapts well to the multiderivative situation in variable-stepsize formulations. Experiments on linear and non-linear problems and a comparison with classical methods are presented.
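
To illustrate what the PECE mode means in practice, here is a classical pair (not the thesis's second-derivative method): a two-step Adams-Bashforth predictor, an evaluation of f, a trapezoidal Adams-Moulton correction, and a final evaluation of f.

    # PECE mode with a classical pair (NOT the thesis's second-derivative
    # method): Predict with 2-step Adams-Bashforth, Evaluate f, Correct
    # with the trapezoidal (Adams-Moulton) rule, Evaluate f again.

    def pece(f, t0, y0, h, n):
        ts, ys, fs = [t0], [y0], [f(t0, y0)]
        # Bootstrap the second starting value with one Euler step.
        ts.append(t0 + h); ys.append(y0 + h * fs[0]); fs.append(f(ts[1], ys[1]))
        for _ in range(n - 1):
            yp = ys[-1] + h * (1.5 * fs[-1] - 0.5 * fs[-2])   # P: predict
            fp = f(ts[-1] + h, yp)                            # E: evaluate
            yc = ys[-1] + h * 0.5 * (fs[-1] + fp)             # C: correct
            ts.append(ts[-1] + h); ys.append(yc)
            fs.append(f(ts[-1], yc))                          # E: evaluate
        return ts, ys

    # y' = -y, y(0) = 1; exact solution exp(-t).
    ts, ys = pece(lambda t, y: -y, 0.0, 1.0, 0.1, 10)
    print(ys[-1])   # close to exp(-1.0) = 0.3679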

On-line Data Analysis Based on Visual Codebooks
Beran, Vítězslav ; Honec, Jozef (opponent) ; Sojka, Eduard (opponent) ; Zemčík, Pavel (supervisor)
This work introduces a new adaptable method for on-line, real-time video searching based on visual codebooks. The method targets high computational efficiency and retrieval performance on on-line data. It originates in the procedures utilized by static visual codebook techniques; these standard procedures are modified so that they can adapt to changing data. The procedures that give the new method its adaptability are a dynamic inverse document frequency, an adaptable visual codebook and a flowing inverted index. The developed method was evaluated, and the presented results show that it outperforms static approaches on video searching tasks. The adaptable method is based on the introduced flowing window concept, which defines how data are selected, both for system adaptation and for processing. Together with the concept, the mathematical background is defined for finding the best configuration when applying the concept to a new method. The practical application of the adaptable method lies particularly in video processing systems where significant changes of the data domain, unknown in advance, are expected. The method is applicable in embedded systems monitoring and analyzing broadcast TV signals on-line and in real time.
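
A simplified sketch of the flowing-window idea: an inverted index that covers only the last W frames, with the IDF recomputed from that window. Feature extraction and quantization to visual words are assumed to happen elsewhere, and the scoring here is plain IDF accumulation rather than the full method from the thesis.

    import math
    from collections import deque, defaultdict

    # Sketch of the flowing window: an inverted index over visual words
    # covering only the last `window` frames, with the IDF recomputed
    # from that window. Quantization to words is assumed done elsewhere.

    class FlowingIndex:
        def __init__(self, window):
            self.window = window
            self.frames = deque()                  # (frame_id, set_of_words)
            self.postings = defaultdict(set)       # word -> {frame_id}

        def add(self, frame_id, words):
            self.frames.append((frame_id, set(words)))
            for w in set(words):
                self.postings[w].add(frame_id)
            while len(self.frames) > self.window:  # slide: drop oldest frame
                old_id, old_words = self.frames.popleft()
                for w in old_words:
                    self.postings[w].discard(old_id)

        def idf(self, word):
            df = len(self.postings[word])
            return math.log(len(self.frames) / df) if df else 0.0

        def query(self, words):
            scores = defaultdict(float)
            for w in set(words):                   # accumulate IDF-weighted hits
                for fid in self.postings[w]:
                    scores[fid] += self.idf(w)
            return sorted(scores.items(), key=lambda kv: -kv[1])

    idx = FlowingIndex(window=3)
    for fid, ws in enumerate([["a", "b"], ["b", "c"], ["c", "d"], ["a", "d"]]):
        idx.add(fid, ws)
    print(idx.query(["a", "c"]))                   # ranks frames still in window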