Národní úložiště šedé literatury (National Repository of Grey Literature)

Terre a terre in classical dance technique
Lacová, Martina ; Janeček, Václav (supervisor) ; Křenková, Mahulena (opponent)
This thesis analyses terre à terre, a specific manner of executing the movements and jumps of classical dance technique. The topic is examined from several angles. The first part covers the basic characteristics and properties of the terre à terre style of dance, its development, and its presence in particular historical periods. The second part of this bachelor's thesis analyses individual parts of the body, such as the body's centre, the feet, and the upper limbs, and their function in terre à terre dance. Attention is also paid to the role of the fundamental movement of classical dance, the plié, and to the role of the épaulements, an indispensable part of the artistic presentation of dance, in an effort to capture their behaviour in connection with the terre à terre technique. The final part of the thesis presents an overview of the basic elements and groups of elements of classical dance technique that can be executed terre à terre, with a basic general characterization of each of them.

Výběr a implementace open source nástroje pro řízení portfolia projektů
Marek, Jan ; Chlapek, Dušan (supervisor) ; Kučera, Jan (opponent)
Realizing change and innovation in enterprises through projects is by now common practice, and methodologies, procedures, and tools for managing individual projects are well known. Nevertheless, in my practice as a project manager I have often encountered companies managing their overall project portfolio in a very intuitive way, which in itself frequently leads to prematurely terminated projects, to working on the wrong projects, or to working on the right projects at the wrong time. Very often there is also no awareness that open source applications exist which can help with organizing the portfolio. This thesis deals with defining requirements, searching for and selecting such an OSS application, and subsequently designing its implementation. In the first part I define a theoretical framework for portfolio management and, on its basis, identify and validate a set of requirements for selecting the application. The next part covers the search for suitable applications from various sources, their evaluation against the requirements, and the final selection. A draft implementation project is then prepared, intended to serve fellow project managers as one possible implementation path. The outputs are continuously confronted with experts on IT projects so as to reconcile theory and subsequent practice as much as possible. Practically all project management methodologies place considerable emphasis on adopting proven procedures, so I consider the main practical contribution of this thesis to be that it contains not only the draft implementation project but also describes the logical path by which I arrived at it. The thesis can therefore be used in any project implementing a PPM application.
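The selection step the abstract describes, evaluating candidate tools against a weighted set of requirements, can be sketched as a simple scoring matrix. The tool names, requirements, weights, and scores below are invented for illustration; they are not taken from the thesis.

```python
# Weighted-criteria scoring of candidate OSS PPM tools (all values invented).

REQUIREMENTS = {              # requirement -> weight (importance)
    "portfolio dashboard": 3,
    "resource planning": 2,
    "open licence": 3,
    "REST API": 1,
}

CANDIDATES = {                # tool -> requirement -> fulfilment score 0..2
    "ToolA": {"portfolio dashboard": 2, "resource planning": 1,
              "open licence": 2, "REST API": 0},
    "ToolB": {"portfolio dashboard": 1, "resource planning": 2,
              "open licence": 2, "REST API": 2},
}

def weighted_score(tool_scores):
    """Sum of weight * fulfilment over all requirements."""
    return sum(REQUIREMENTS[r] * tool_scores.get(r, 0) for r in REQUIREMENTS)

ranking = sorted(CANDIDATES, key=lambda t: weighted_score(CANDIDATES[t]),
                 reverse=True)
print(ranking)    # best-scoring candidate first
```

The same matrix also documents the decision: a requirement a tool fails is visible as a zero, which makes the trade-offs explicit when the selection is reviewed.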

A Comparison of Preconditioning Methods for Saddle Point Problems with an Application to Porous Media Flow Problems
Axelsson, Owe ; Blaheta, Radim ; Hasal, Martin
The paper overviews and compares block preconditioners for the solution of saddle point systems, especially systems arising from the Brinkman model of porous media flow. The considered preconditioners involve different Schur complements: the inverse-free Schur complement in the HSS (Hermitian/skew-Hermitian splitting) preconditioner, the Schur complement of the velocity matrix, and finally the Schur complement of a regularization block in the augmented matrix preconditioner. The inverses appearing in most of the considered Schur complements are approximated by simple sparse approximation techniques, such as element-by-element and Frobenius norm minimization approaches. Special attention is devoted to problems combining Darcy, Stokes and Brinkman flow regions; the efficiency of the preconditioners in this case is demonstrated by numerical experiments.
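The effect such preconditioners rely on, namely that a block-diagonal preconditioner built from a Schur complement clusters the spectrum, can be shown on a tiny dense example. The matrices below are invented, and the exact inverse is used; the paper itself works with sparse approximations of the inverses instead.

```python
import numpy as np

# Saddle point system  [[A, B^T], [B, 0]] [u; p] = [f; g]  with a
# block-diagonal preconditioner P = diag(A, S), S = B A^{-1} B^T.

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # SPD "velocity" block (invented)
B = np.array([[1.0, 2.0]])               # divergence-like block (invented)

K = np.block([[A, B.T], [B, np.zeros((1, 1))]])

S = B @ np.linalg.inv(A) @ B.T           # Schur complement of A
P = np.block([[A, np.zeros((2, 1))], [np.zeros((1, 2)), S]])

# With the exact Schur complement the eigenvalues of P^{-1} K are known to be
# 1 and (1 +/- sqrt(5))/2, i.e. only three values regardless of problem size,
# which is why Krylov methods converge in very few iterations.
eigs = np.linalg.eigvals(np.linalg.inv(P) @ K)
print(np.sort(eigs.real))
```

With an approximate Schur complement the three values smear into three narrow clusters, and the iteration count grows only mildly, which is the trade-off the sparse approximation techniques in the paper exploit.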

New Methods for Increasing Efficiency and Speed of Functional Verification
Zachariášová, Marcela ; Dohnal, Jan (opponent) ; Steininger, Andreas (opponent) ; Kotásek, Zdeněk (supervisor)
In the development of current hardware systems, e.g. embedded systems or computer hardware, new ways to increase their reliability are being intensively investigated. One way to tackle the issue of reliability is to increase the efficiency and speed of the verification processes performed in the early phases of the design cycle. This Ph.D. thesis focuses on the verification approach called functional verification. Several challenges and problems connected with the efficiency and speed of functional verification are identified and reflected in the goals of the thesis. The first goal focuses on reducing the simulation runtime when verifying complex hardware systems, since the simulation of inherently parallel hardware systems is very slow in comparison to the speed of real hardware. An optimization technique is proposed that moves the verified system onto an FPGA acceleration board while the rest of the verification environment runs in simulation. By this single move, the simulation overhead can be significantly reduced. The second goal deals with manually written verification environments, which represent a huge bottleneck in verification productivity. This manual effort is not reasonable, because almost all verification environments have the same structure: they utilize libraries of basic components from the standard verification methodologies and are only adjusted to the system being verified. Therefore, the second optimization technique takes a high-level specification of the system and automatically generates a comprehensive verification environment for it. The third goal elaborates how the completeness of the verification process can be achieved using intelligent automation. Completeness is measured by different coverage metrics, and verification usually ends when a satisfying level of coverage is achieved.
Therefore, the third optimization technique drives the generation of input stimuli in order to activate multiple coverage points in the verified system and to enhance the overall coverage rate. The main optimization tool is a genetic algorithm, adapted for functional verification purposes with its parameters tuned for this domain. It runs in the background of the verification process, analyses the coverage, and dynamically changes the constraints of the stimuli generator. The constraints are represented by the probabilities with which particular values from the input domain are selected. The fourth goal discusses the reusability of verification stimuli for regression testing and how these stimuli can be further optimized in order to speed up the testing. It is quite common in verification that, until a satisfying level of coverage is achieved, many redundant stimuli are evaluated, as they are produced by pseudo-random generators. When creating optimal regression suites, however, this redundancy is no longer needed and can be removed, while the same level of coverage must be retained in order to check all the key properties of the system. The fourth optimization technique is also based on the genetic algorithm, but it is not integrated into the verification process; it works offline after the verification has ended. It removes the redundancy from the original suite of stimuli quickly and effectively, so the resulting verification runtime of the regression suite is significantly improved.
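The offline regression-suite reduction can be sketched as a tiny genetic algorithm that searches for a subset of stimuli preserving full coverage. The stimuli, coverage points, and GA parameters below are invented, and crossover is omitted for brevity; the sketch only shows the shape of the fitness function and the selection loop.

```python
import random

STIMULI = {                       # stimulus -> set of coverage points it hits
    "s1": {1, 2}, "s2": {2, 3}, "s3": {1}, "s4": {3, 4}, "s5": {2},
}
FULL = set().union(*STIMULI.values())

def coverage(subset):
    return set().union(*(STIMULI[s] for s in subset)) if subset else set()

def fitness(mask):
    chosen = [s for s, keep in zip(STIMULI, mask) if keep]
    if coverage(chosen) != FULL:          # losing coverage is never acceptable
        return -1
    return len(STIMULI) - len(chosen)     # fewer retained stimuli is better

random.seed(0)
population = [[random.random() < 0.8 for _ in STIMULI] for _ in range(20)]
for _ in range(30):
    population.sort(key=fitness, reverse=True)
    elite = population[:10]               # keep the best masks unchanged
    mutants = [[keep ^ (random.random() < 0.1) for keep in m] for m in elite]
    population = elite + mutants

best = max(population, key=fitness)
reduced = [s for s, keep in zip(STIMULI, best) if keep]
print(reduced)
```

The elitism step guarantees that a coverage-preserving subset, once found, is never lost, so the reduced suite always retains the coverage of the original one.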

STATISTICAL LANGUAGE MODELS BASED ON NEURAL NETWORKS
Mikolov, Tomáš ; Zweig, Geoffrey (opponent) ; Hajič, Jan (opponent) ; Černocký, Jan (supervisor)
Statistical language models are a crucial part of many successful applications, such as automatic speech recognition and statistical machine translation (for example the well-known Google Translate). Traditional techniques for estimating these models are based on N-gram counts. Despite the known weaknesses of N-grams and the huge efforts of research communities across many fields (speech recognition, machine translation, neuroscience, artificial intelligence, natural language processing, data compression, psychology etc.), N-grams have remained essentially the state of the art. The goal of this thesis is to present various architectures of language models based on artificial neural networks. Although these models are computationally more expensive than N-gram models, with the presented techniques it is possible to apply them to state-of-the-art systems efficiently. The achieved reductions of the word error rate of speech recognition systems are up to 20% against a state-of-the-art N-gram model. The presented recurrent neural network based model achieves the best published performance on the well-known Penn Treebank setup.
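As a point of reference for the neural models, a count-based N-gram baseline of the kind the thesis compares against can be sketched in a few lines. The toy corpus and the add-one smoothing are illustrative only; real systems use much larger corpora and more refined smoothing.

```python
from collections import Counter

# A minimal add-one smoothed bigram (N = 2) language model.
corpus = "the cat sat on the mat the cat ate".split()

bigrams = Counter(zip(corpus, corpus[1:]))   # counts of adjacent word pairs
unigrams = Counter(corpus)                   # counts of single words
V = len(unigrams)                            # vocabulary size

def p(word, prev):
    """P(word | prev) with add-one smoothing over the vocabulary."""
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + V)

print(round(p("cat", "the"), 3))   # "the" is followed by "cat" 2 times out of 3
```

The weakness the thesis targets is visible even here: the model only ever conditions on the previous N-1 words, whereas a recurrent network can, in principle, carry information from the whole history.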

Analysis and Testing of Concurrent Programs
Letko, Zdeněk ; Lourenco, Joao (opponent) ; Sekanina, Lukáš (opponent) ; Vojnar, Tomáš (supervisor)
The thesis starts by providing a taxonomy of concurrency-related errors and an overview of their dynamic detection. Then, concurrency coverage metrics, which measure how well the synchronisation and concurrency-related behaviour of tested programs has been examined, are proposed together with a methodology for deriving such metrics. The proposed metrics are especially suitable for saturation-based and search-based testing. Next, novel coverage-based noise injection techniques that maximise the number of interleavings witnessed during testing are proposed. A comparison of various existing noise injection heuristics and the newly proposed heuristics on a set of benchmarks is provided, showing that the proposed techniques win over the existing ones in some cases. Finally, a novel use of stochastic optimisation algorithms in the area of concurrency testing is proposed, in the form of their application for finding suitable combinations of values of the many parameters of the tests and the noise injection techniques. The approach has been implemented in a prototype and tested on a set of benchmark programs, showing its potential to significantly improve the testing process.
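The noise injection idea can be illustrated with a toy instrumentation that, before each access to shared state, forces a context switch with some probability, so that more thread interleavings are witnessed during testing. The uniform-probability heuristic and the counter workload below are invented; the thesis compares several more refined heuristics and parameter settings.

```python
import random
import threading
import time

NOISE_PROB = 0.5
noise_points_hit = 0

def noise():
    """Injected by instrumentation before each shared access."""
    global noise_points_hit
    noise_points_hit += 1
    if random.random() < NOISE_PROB:
        time.sleep(0)          # sleep(0) merely yields to another thread

counter = 0                    # shared state under test

def worker():
    global counter
    for _ in range(100):
        noise()                # noise point at the shared read-modify-write
        counter += 1

random.seed(1)
threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter, noise_points_hit)
```

The value of such noise is that it perturbs the scheduler at exactly the program locations where interleavings matter; the coverage-based techniques in the thesis go further by choosing *where* and *with what strength* to inject, guided by the coverage metrics.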

Acceleration Methods for Evolutionary Design of Digital Circuits
Vašíček, Zdeněk ; Miller, Julian (opponent) ; Zelinka, Ivan (opponent) ; Sekanina, Lukáš (supervisor)
Although many examples showing the merits of evolutionary design over the conventional design techniques used in the field of digital circuit design have been published, evolutionary approaches are usually hardly applicable in practice due to various so-called scalability problems. A scalability problem refers to a situation in which the evolutionary algorithm is able to provide a solution to small problem instances only. For example, the scalability of the evaluation of a candidate digital circuit represents a serious issue, because the time needed to evaluate a candidate solution grows exponentially with the number of primary inputs. This thesis addresses the scalability problem of evaluating a candidate digital circuit, and three different approaches to overcoming it are proposed. Our goal is to demonstrate that the evolutionary design approach can produce interesting and human-competitive solutions when the problem of scalability is reduced and a sufficient number of generations can thus be utilized. In order to increase the performance of the evolutionary design of image filters, a domain-specific FPGA-based accelerator has been designed. The evolutionary design of image filters is a kind of regression problem which requires evaluating a large number of training vectors as well as generations in order to find a satisfactory solution. By means of the proposed FPGA accelerator, very efficient nonlinear image filters have been discovered. One of the discovered implementations of an impulse noise filter, consisting of four evolutionarily designed filters, is protected by a Czech utility model. A different approach has been introduced in the area of logic synthesis: a method combining formal verification techniques with evolutionary design that allows a significant acceleration of the fitness evaluation procedure.
The proposed system can produce complex and simultaneously innovative designs, thus overcoming the major bottleneck of evolutionary synthesis at the gate level. The proposed method has been evaluated on a set of benchmark circuits and compared with conventional academic as well as commercial synthesis tools. In comparison with the conventional synthesis tools, the average improvement in the number of gates provided by our system is approximately 25%. Finally, the problem of multiple constant multiplier design, which belongs to the class of problems where a candidate solution can be perfectly evaluated in a short time, has been investigated. We have demonstrated that there exists a class of circuits that can be evaluated efficiently if domain knowledge is utilized (in this case the linearity of the components).
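The evaluation bottleneck discussed above is commonly attacked by parallel simulation: all 2^n truth-table rows are evaluated at once with bitwise operations on packed integers, so one pass through the netlist replaces 2^n separate simulations. The netlist encoding below is an invented, CGP-like illustration, not the thesis's implementation.

```python
N_INPUTS = 3
ROWS = 1 << N_INPUTS                      # 8 truth-table rows
MASK = (1 << ROWS) - 1                    # keeps NOT/NAND within ROWS bits

def input_column(i):
    """Column of the truth table for primary input i, packed into one int."""
    bits = 0
    for row in range(ROWS):
        if (row >> i) & 1:
            bits |= 1 << row
    return bits

GATES = {"AND": lambda a, b: a & b, "OR": lambda a, b: a | b,
         "XOR": lambda a, b: a ^ b, "NAND": lambda a, b: ~(a & b) & MASK}

def evaluate(netlist, out):
    """netlist: list of (gate, src_a, src_b); sources index inputs, then gates."""
    cols = [input_column(i) for i in range(N_INPUTS)]
    for gate, a, b in netlist:
        cols.append(GATES[gate](cols[a], cols[b]))
    return cols[out]

# Candidate implementing 3-input parity a ^ b ^ c, checked against the
# specification in a single pass over the packed columns.
candidate = [("XOR", 0, 1), ("XOR", 3, 2)]
spec = input_column(0) ^ input_column(1) ^ input_column(2)
print(evaluate(candidate, 4) == spec)
```

The exponential cost is still there, just hidden in the width of the packed integers, which is exactly why the thesis replaces exhaustive evaluation by formal equivalence checking for circuits with many inputs.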

Simulace a protiřetězce pro efektivní práci s konečnými automaty
Holík, Lukáš ; Černá, Ivana (opponent) ; Jančar, Petr (opponent) ; Vojnar, Tomáš (supervisor)
This thesis is focused on techniques for finite automata and their use in practice, with the main emphasis on nondeterministic tree automata. This concerns namely techniques for size reduction and language inclusion testing, two problems that are crucial for many applications of tree automata. For the size reduction of tree automata, we adapt the simulation quotient technique that is well established for finite word automata. We give efficient algorithms for computing tree automata simulations and also introduce a new type of relation that arises from a combination of downward and upward tree automata simulation and that is very well suited for quotienting. The combination principle is relevant for word automata as well. We then generalise to tree automata the so-called antichain technique for universality and language inclusion checking, developed originally for finite word automata. Subsequently, we improve the antichain technique for both word and tree automata by combining it with the simulation-based inclusion checking techniques, significantly improving the efficiency of the antichain method. We then show how the developed reduction and inclusion checking methods improve the method of abstract regular tree model checking, which was the original motivation for our work on tree automata. Both the reduction and the language inclusion methods are based on relatively simple and general principles that can be further extended to other types of automata and related formalisms. An example is our adaptation of the reduction methods to alternating Büchi automata, which results in an efficient alternating automata size reduction technique.
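The antichain idea can be sketched for the word-automata case of universality checking: explore the macrostates of the subset construction, but keep only the ⊆-minimal ones, since any counterexample word reachable from a macrostate is also reachable from each of its subsets. The two-letter NFA below is invented for illustration.

```python
from itertools import chain

DELTA = {  # NFA transition function: state -> symbol -> successor set
    0: {"a": {0, 1}, "b": {0}},
    1: {"a": set(), "b": {2}},
    2: {"a": {2}, "b": {2}},
}
INIT = frozenset({0})

def post(macro, sym):
    """Successor macrostate of the subset construction."""
    return frozenset(chain.from_iterable(DELTA[q][sym] for q in macro))

def universal(final):
    antichain, todo = [], [INIT]
    while todo:
        m = todo.pop()
        if not (m & final):              # rejecting macrostate: a word is missed
            return False
        if any(a <= m for a in antichain):
            continue                     # m is subsumed by a smaller explored set
        antichain = [a for a in antichain if not (m < a)] + [m]
        todo += [post(m, s) for s in ("a", "b")]
    return True

# State 0 loops on both symbols, so every reachable macrostate contains 0.
print(universal({0, 2}), universal({2}))
```

The tree-automata generalisation in the thesis follows the same pattern, with macrostates replaced by the states of the tree subset construction and subsumption refined by simulation relations.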

Multimedia Data Processing in Heterogeneous Distributed Environment
Kajan, Rudolf ; Ferko, Andrej (opponent) ; Míkovec, Zdeněk (opponent) ; Herout, Adam (supervisor)
Ubiquitous computing, a paradigm in which the processing of information is linked with each activity or object as encountered, was proposed by Mark Weiser as the next era of interacting with computers. Its goal is to enable people to interact with devices more naturally and casually, in ways that suit whatever location or context they find themselves in. Ubiquitous computing focuses on removing the complexity of computing and increasing efficiency when computing is used for different daily activities. But more than 15 years after Weiser formulated these goals, several aspects of ubiquitous computing are still not part of the user experience with today's technology. Seamless integration with the environment leading to technological invisibility, and user interaction spanning multiple devices, still pose a great challenge. The main goal of our work is to take a step towards making the idea of ubiquitous computing a reality by addressing the question of intuitive information sharing between a personal device and a situated display. We have developed three interaction techniques which support unobtrusive content exchange between a touch-enabled personal device and a large display, whether shared-private or public. These techniques are based on video streams, augmented reality, and the analysis of gaze data. Besides the interaction techniques, we also present a framework for real-time application state acquisition and reconstruction on a target platform. We report on user studies focused on the usability of our prototypes and on system performance evaluations. Our experiments were formed around real-life scenarios which are commonly experienced throughout the day.
For the interactions based on video streams, the results indicate that our techniques outperform existing solutions: localization and task migration are done in real time on a mid-level cellphone, and localization is reliable even for different observation angles and for cluttered screen content. Our technique based on gaze analysis goes even further by allowing implicit user preferences to be modeled through gaze data, while being accurate and unobtrusive.

Intrusion Detection in Network Traffic
Homoliak, Ivan ; Čeleda, Pavel (opponent) ; Ochoa, Martín (opponent) ; Hanáček, Petr (supervisor)
The thesis deals with anomaly-based network intrusion detection that utilizes machine learning approaches. First, state-of-the-art datasets intended for the evaluation of intrusion detection systems are described, as well as related work employing statistical analysis and machine learning techniques for network intrusion detection. Next, an original feature set, the Advanced Security Network Metrics (ASNM), is presented; it forms part of a conceptual automated network intrusion detection system, AIPS. Then, tunneling obfuscation techniques as well as non-payload-based ones are proposed as modifications of network attack execution. Experiments reveal that the obfuscations are able to evade attack detection by a supervised classifier using ASNM features, and that including them in the training process can strengthen the detection performance of the classifier. The work also presents an alternative view of the non-payload-based obfuscation techniques, demonstrating how they may be employed as a training-data-driven approximation of a network traffic normalizer.
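The shape of the experiment, a classifier trained on connection-level features that is evaded by an obfuscated attack and recovers once obfuscated samples are added to the training set, can be sketched with invented two-dimensional features and a 1-nearest-neighbour classifier standing in for the supervised models actually used.

```python
from math import dist

# Invented per-connection features: (packet count, mean inter-arrival time).
def classify(x, training):
    """1-nearest-neighbour: label of the closest training point."""
    return min(training, key=lambda p: dist(x, p[0]))[1]

training = [((10, 5.0), "legit"),  ((12, 4.5), "legit"),  ((9, 5.5), "legit"),
            ((80, 0.4), "attack"), ((95, 0.3), "attack"), ((70, 0.5), "attack")]

# An attack spread over time to mimic legitimate traffic shifts its features
# towards the "legit" region, so the classifier misses it.
obfuscated = (20, 3.0)
print(classify(obfuscated, training))   # detection evaded

# Adding obfuscated attack samples to the training set restores detection,
# mirroring the thesis's finding that training on obfuscations strengthens it.
training += [((22, 2.8), "attack"), ((18, 3.2), "attack")]
print(classify(obfuscated, training))
```

The same mechanism read the other way round gives the thesis's alternative view: applying the obfuscations to training traffic approximates what a traffic normalizer would do to live traffic.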