National Repository of Grey Literature: 6,269 records found.

Optimization techniques in inventory management
Němečková, Zita
This bachelor's thesis surveys inventory management with a focus on optimization techniques and their use in this area. Based on data obtained from a real company, the thesis describes the practical application of an optimization algorithm. It includes an evaluation of the results produced by an application implemented as part of the thesis.
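The abstract does not name the specific algorithm the thesis applies. As a purely illustrative sketch of the kind of optimization used in inventory management, the classic economic order quantity (EOQ) formula balances ordering cost against holding cost; all parameter values below are hypothetical.

```python
import math

def economic_order_quantity(annual_demand, order_cost, holding_cost):
    """Classic EOQ: the order size minimizing total ordering plus holding cost."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

# Hypothetical inputs: 12,000 units/year, 50 per order, 2.5 per unit-year held.
q = economic_order_quantity(annual_demand=12000, order_cost=50.0, holding_cost=2.5)
print(round(q))  # -> 693 units per order
```

This is only one textbook model; the thesis's algorithm may be entirely different.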

The influence of snow cover on runoff during rainfall
Juras, Roman ; Máca, Petr (supervisor) ; Ladislav, Ladislav (opponent)
In winter, when snow cover lies on a catchment, rainfall events are becoming increasingly frequent. Rain-on-snow (ROS) events often cause floods and wet avalanches. Predicting the impact of ROS depends above all on a better understanding of the mechanisms of runoff generation and of the composition of outflow from the snowpack. A combination of rainfall simulation on the snowpack and the use of tracers was tested as a suitable tool for this purpose. In total, 18 experiments were carried out on snowpacks with different initial properties under mountain conditions in central and western Europe. The dye brilliant blue (FCF) was used to determine the character of the flow; it makes it possible to visualize preferential flow paths and to identify the interface between two layers with different hydraulic properties. The proportions of the individual components of the outflowing water were determined by hydrograph separation, which gives good results with acceptable uncertainty. For technical reasons it was not possible to use both methods simultaneously within a single experiment, although doing so would have further extended our knowledge of rainwater flow dynamics in the snowpack. The amount of meltwater was computed using the energy balance equation. This equation is fairly accurate but demanding in terms of inputs, so melt was computed for only one experiment. The speed of runoff generation is driven primarily by rainfall intensity; initial snowpack properties such as density and liquid water content affect it only secondarily. On the other hand, at the same rainfall intensity, an immature low-density snowpack showed a faster hydrological response than a mature, denser snowpack. The runoff volume depends mainly on initial saturation. Mature snowpack with higher initial saturation generated higher total runoff, to which rainwater contributed at most 50 %. In contrast, rainwater passed through immature snowpack relatively quickly, and about 80 % of it propagated into the runoff.
Richards' equation, within the SNOWPACK model, was used to predict runoff during ROS. The model was modified by splitting the snow matrix to better simulate preferential flow. This approach improved the results compared with the classical approach, which considers matrix flow only.
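The two-component hydrograph separation mentioned above splits the measured outflow into rainwater and meltwater from a conservative tracer balance. A minimal sketch, with hypothetical discharge and concentration values (the thesis's actual procedure and uncertainty treatment are more involved):

```python
def separate_hydrograph(q_total, c_total, c_rain, c_melt):
    """Two-component mixing model: split total outflow q_total into rain
    and melt contributions from tracer concentrations (c_rain != c_melt)."""
    f_rain = (c_total - c_melt) / (c_rain - c_melt)
    f_rain = max(0.0, min(1.0, f_rain))  # clamp to the physically meaningful range
    return f_rain * q_total, (1.0 - f_rain) * q_total

# Hypothetical values: outflow 10 l/min, tracer at 40 in outflow, 50 in rain, 20 in melt.
rain, melt = separate_hydrograph(q_total=10.0, c_total=40.0, c_rain=50.0, c_melt=20.0)
# f_rain = (40 - 20) / (50 - 20) = 2/3, so rain ~ 6.67 and melt ~ 3.33
```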

Optical fibre sensors and optical fibre splicing
Jelínek, Michal ; Mikel, Břetislav
We developed new methods and techniques for splicing and shaping single-mode (SM) and multimode (MM) optical fibres, including fibres with different diameters. Alongside this, we prepared a technique for splicing microstructured fibres to SM fibres. These fibre-splicing techniques were developed with regard to research and development in the field of sensor technology.

New Methods for Increasing Efficiency and Speed of Functional Verification
Zachariášová, Marcela ; Dohnal, Jan (opponent) ; Steininger, Andreas (opponent) ; Kotásek, Zdeněk (supervisor)
In the development of current hardware systems, e.g. embedded systems or computer hardware, new ways to increase their reliability are being intensively investigated. One way to tackle the reliability issue is to increase the efficiency and speed of the verification processes performed in the early phases of the design cycle. This Ph.D. thesis focuses on the verification approach called functional verification. Several challenges and problems connected with the efficiency and speed of functional verification are identified and reflected in the goals of the thesis. The first goal focuses on reducing the simulation runtime when verifying complex hardware systems, because the simulation of inherently parallel hardware systems is very slow compared to the speed of real hardware. An optimization technique is proposed that moves the verified system onto an FPGA acceleration board while the rest of the verification environment runs in simulation; this single move can significantly reduce the simulation overhead. The second goal deals with manually written verification environments, which represent a huge bottleneck in verification productivity. This manual effort is unnecessary, because almost all verification environments have the same structure: they build on libraries of basic components from the standard verification methodologies and are only adjusted to the system being verified. The second optimization technique therefore takes a high-level specification of the system and automatically generates a comprehensive verification environment for it. The third goal elaborates how the completeness of the verification process can be achieved using intelligent automation. Completeness is measured by different coverage metrics, and verification usually ends when a satisfactory level of coverage is achieved.
The third optimization technique therefore drives the generation of input stimuli in order to activate multiple coverage points in the verified system and to increase the overall coverage rate. The main optimization tool is a genetic algorithm, adapted for functional verification and with its parameters tuned for this domain. It runs in the background of the verification process, analyses the coverage, and dynamically changes the constraints of the stimuli generator; the constraints are represented by the probabilities with which particular values from the input domain are selected. The fourth goal discusses the reusability of verification stimuli for regression testing and how these stimuli can be further optimized to speed up the testing. It is quite common in verification that, until a satisfactory level of coverage is achieved, many redundant stimuli are evaluated, as they are produced by pseudo-random generators. When creating optimal regression suites, however, this redundancy is no longer needed and can be removed, while retaining the same level of coverage in order to check all the key properties of the system. The fourth optimization technique is also based on the genetic algorithm, but it is not integrated into the verification process; it works offline after verification has ended. It removes the redundancy from the original suite of stimuli quickly and effectively, so the resulting verification runtime of the regression suite is significantly improved.
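The thesis uses a genetic algorithm for the offline suite reduction. As a simpler illustration of the underlying idea (removing redundant stimuli while preserving the coverage of the full suite), a greedy set-cover sketch with a hypothetical coverage map:

```python
def reduce_regression_suite(stimuli_coverage):
    """Greedy coverage-preserving reduction.
    stimuli_coverage: dict stimulus_id -> set of coverage points it hits.
    Returns a subset of stimulus ids whose union covers the same points."""
    remaining = dict(stimuli_coverage)
    target = set().union(*remaining.values())
    covered, kept = set(), []
    while covered != target:
        # pick the stimulus contributing the most not-yet-covered points
        best = max(remaining, key=lambda s: len(remaining[s] - covered))
        kept.append(best)
        covered |= remaining.pop(best)
    return kept

# Hypothetical suite: four stimuli, four coverage points.
suite = {"s1": {1, 2, 3}, "s2": {2, 3}, "s3": {3, 4}, "s4": {4}}
print(reduce_regression_suite(suite))  # -> ['s1', 's3']: same coverage, half the stimuli
```

A genetic algorithm, as in the thesis, can trade the greedy choice for a global search over candidate subsets.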

Hybrid 3D Face Recognition
Mráček, Štěpán ; Bours, Patrick (opponent) ; Bronstein, Michael (opponent) ; Drahanský, Martin (supervisor)
This Ph.D. thesis deals with biometric recognition of 3D faces. Contemporary recognition methods and techniques are presented first. A new recognition algorithm based on multialgorithmic fusion is then proposed: the input 3D face scan is processed by individual recognition units, and the final decision about the subject's identity results from combining the outputs of the involved units. The proposed approach has been tested on the publicly available FRGC v2.0 database as well as on our own databases acquired with the Microsoft Kinect and SoftKinetic DS325 sensors.
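The abstract does not specify the combination rule used by the fusion. One common choice for combining per-unit similarity scores is a weighted sum followed by thresholding; the weights and scores below are hypothetical:

```python
def fuse_scores(unit_scores, weights, threshold=0.5):
    """Weighted-sum fusion of per-unit similarity scores (assumed in [0, 1]);
    the fused score is compared against a decision threshold."""
    assert len(unit_scores) == len(weights)
    fused = sum(s * w for s, w in zip(unit_scores, weights)) / sum(weights)
    return fused, fused >= threshold

# Three hypothetical recognition units, the first weighted twice as heavily.
score, accepted = fuse_scores([0.9, 0.6, 0.7], weights=[2.0, 1.0, 1.0])
# fused = (1.8 + 0.6 + 0.7) / 4 = 0.775 -> accepted
```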

Packet Classification Algorithms
Puš, Viktor ; Lhotka, Ladislav (opponent) ; Dvořák, Václav (supervisor)
This thesis deals with packet classification in computer networks. Classification is a key task in many networking devices, most notably packet filters (firewalls); the thesis therefore concerns the area of computer security. It is focused on high-speed networks with a bandwidth of 100 Gb/s and beyond. General-purpose processors cannot be used in such cases because their performance is not sufficient, so specialized hardware is used, mainly ASICs and FPGAs. Many packet classification algorithms designed for hardware implementation have been presented, yet these approaches are not ready for very high-speed networks. This thesis addresses the design of new high-speed packet classification algorithms targeted at implementation in dedicated hardware. An algorithm is proposed that decomposes the problem into several easier subproblems. The first subproblem is the longest prefix match (LPM) operation, which is also used in IP packet routing; since LPM algorithms with sufficient speed have already been published, they can be used in our context. The following subproblem is mapping the prefixes to rule numbers. This is where the thesis brings innovation, by using a specifically constructed hash function that allows the mapping to be done in constant time and requires only one memory with a narrow data bus. The algorithm's throughput can be determined analytically and is independent of the number of rules and of the network traffic characteristics. With available parts, a throughput of 266 million packets per second can be achieved. Three additional algorithms (PFCA, PCCA, MSPCCA) presented in this thesis are designed to lower the memory requirements of the first one without compromising speed. The second algorithm lowers the memory size by 11 % to 96 %, depending on the rule set.
Its disadvantage of low stability is removed by the third algorithm, which reduces the memory requirements by 31 % to 84 % compared to the first one. The fourth algorithm combines the third one with the older approach and, thanks to the use of several techniques, lowers the memory requirements by 73 % to 99 %.
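The decomposition described above (an LPM per header field, then one lookup mapping the resulting prefix pair to a rule number) can be sketched in software as follows. The toy 8-bit rule table is hypothetical, and the thesis's hash construction and hardware pipeline are of course far more involved:

```python
def longest_prefix_match(prefixes, addr_bits):
    """Return the longest prefix (a bit-string) matching the address."""
    best = ""
    for p in prefixes:
        if addr_bits.startswith(p) and len(p) > len(best):
            best = p
    return best

# Toy rule set: every (src LPM result, dst LPM result) pair maps to a rule
# number. A real table would be derived from the classifier configuration.
src_prefixes = {"10", "0"}
dst_prefixes = {"11", "0"}
rules = {("10", "11"): 1, ("10", "0"): 2, ("0", "11"): 3, ("0", "0"): 4}

def classify(src_bits, dst_bits):
    key = (longest_prefix_match(src_prefixes, src_bits),
           longest_prefix_match(dst_prefixes, dst_bits))
    # In hardware this final mapping is a single constant-time hash access;
    # None stands for the default action.
    return rules.get(key)

print(classify("10110101", "11000000"))  # -> 1
```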

Subspace Modeling of Prosodic Features for Speaker Verification
Kockmann, Marcel ; Kenny, Patrick (opponent) ; Nöth, Elmar (opponent) ; Černocký, Jan (supervisor)
This thesis investigates speaker verification by means of prosodic features. This includes an appropriate representation of speech by measurements of pitch, energy, and duration of speech sounds. Two diverse parameterization methods are investigated: the first leads to a low-dimensional, well-defined feature set, the second to a large-scale set of heterogeneous prosodic features. The first part of this work concentrates on the development of so-called prosodic contour features. Different modeling techniques are developed and investigated, with a special focus on subspace modeling. The second part focuses on a novel subspace modeling technique for the heterogeneous large-scale prosodic features. The model is theoretically derived and experimentally evaluated on official NIST Speaker Recognition Evaluation tasks, yielding large improvements over the previous state of the art in prosodic speaker verification. Finally, a novel fusion method is presented to elegantly combine the two diverse prosodic systems. This technique can also be used to fuse these higher-level systems with a high-performing cepstral system, leading to further significant improvements.
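Prosodic contour features summarize a variable-length trajectory (e.g. pitch over a segment) by a small fixed-size parameterization. A minimal sketch using mean, least-squares slope, and range; the thesis's actual parameterization (e.g. basis-function coefficients over the contour) may differ:

```python
def contour_features(values):
    """Represent a prosodic contour (e.g. pitch per frame) by a small,
    fixed-size feature set: mean, least-squares slope, and range."""
    n = len(values)
    mean = sum(values) / n
    x_mean = (n - 1) / 2.0
    # least-squares slope of the contour over the frame index
    num = sum((x - x_mean) * (v - mean) for x, v in enumerate(values))
    den = sum((x - x_mean) ** 2 for x in range(n))
    slope = num / den if den else 0.0
    return mean, slope, max(values) - min(values)

pitch = [120.0, 125.0, 131.0, 136.0, 140.0]  # Hz, hypothetical rising contour
mean, slope, rng = contour_features(pitch)
# mean = 130.4 Hz, slope = 5.1 Hz/frame, range = 20.0 Hz
```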

Statistical Language Models Based on Neural Networks
Mikolov, Tomáš ; Zweig, Geoffrey (opponent) ; Hajič, Jan (opponent) ; Černocký, Jan (supervisor)
Statistical language models are a crucial part of many successful applications, such as automatic speech recognition and statistical machine translation (for example, the well-known Google Translate). Traditional techniques for estimating these models are based on N-gram counts. Despite the known weaknesses of N-grams and the huge efforts of research communities across many fields (speech recognition, machine translation, neuroscience, artificial intelligence, natural language processing, data compression, psychology, etc.), N-grams have remained essentially the state of the art. The goal of this thesis is to present various architectures of language models based on artificial neural networks. Although these models are computationally more expensive than N-gram models, the presented techniques make it possible to apply them to state-of-the-art systems efficiently. Reductions of the word error rate of speech recognition systems of up to 20 % are achieved against a state-of-the-art N-gram model. The presented recurrent neural network based model achieves the best published performance on the well-known Penn Treebank setup.
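The key difference from an N-gram model is that a recurrent network conditions its prediction on a hidden state carrying unbounded history rather than on a fixed-length context. A deliberately tiny, untrained forward-pass sketch (dimensions and initialization are arbitrary; real models add training by backpropagation through time):

```python
import math
import random

def softmax(v):
    """Numerically stable softmax over a list of scores."""
    m = max(v)
    exps = [math.exp(x - m) for x in v]
    s = sum(exps)
    return [e / s for e in exps]

class TinyRNNLM:
    """Minimal Elman-style recurrent language model, illustrative only."""
    def __init__(self, vocab, hidden, seed=0):
        rnd = random.Random(seed)
        r = lambda: rnd.uniform(-0.1, 0.1)
        self.U = [[r() for _ in range(vocab)] for _ in range(hidden)]   # input -> hidden
        self.W = [[r() for _ in range(hidden)] for _ in range(hidden)]  # hidden -> hidden
        self.V = [[r() for _ in range(hidden)] for _ in range(vocab)]   # hidden -> output
        self.h = [0.0] * hidden  # the recurrent state summarizing all history

    def step(self, word_id):
        """Consume one word id, return P(next word) over the vocabulary."""
        self.h = [math.tanh(self.U[i][word_id] +
                            sum(self.W[i][j] * hj for j, hj in enumerate(self.h)))
                  for i in range(len(self.h))]
        return softmax([sum(row[i] * hi for i, hi in enumerate(self.h))
                        for row in self.V])

lm = TinyRNNLM(vocab=5, hidden=8)
probs = lm.step(word_id=2)  # a valid probability distribution over 5 words
```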

Optimization of Gaussian Mixture Subspace Models and Related Scoring Algorithms in Speaker Verification
Glembek, Ondřej ; Brummer, Niko (opponent) ; Campbell, William (opponent) ; Burget, Lukáš (supervisor)
This thesis deals with Gaussian mixture subspace modeling in automatic speaker recognition and consists of three parts. In the first part, Joint Factor Analysis (JFA) scoring methods are studied. The methods differ mainly in how they treat the channel of the tested utterance. The general JFA likelihood function is investigated and the methods are compared in terms of both accuracy and speed. It was found that a linear approximation of the log-likelihood function gives results comparable to the full log-likelihood evaluation while simplifying the formula and dramatically reducing the computation time. In the second part, i-vector extraction is studied and two simplification methods are proposed. The motivation for this part was to allow the state-of-the-art technique to be used on small-scale devices and to set up a simple discriminative-training system. It is shown that for long utterances, at some cost in accuracy, very fast and compact i-vector systems can be obtained; on a short-utterance (5-second) task, the results of the simplified systems are comparable to the full i-vector extraction. The third part deals with discriminative training in automatic speaker recognition. Previous work in the field is summarized and, based on the knowledge from the earlier chapters of this work, discriminative training of the i-vector extractor parameters is proposed. It is shown that discriminative re-training of the i-vector extractor can improve the system if the initial estimate is computed using the generative approach.
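In the standard i-vector formulation, the extracted vector is the posterior mean w = (I + T' S^-1 N T)^-1 T' S^-1 f, where T is the total-variability matrix, S the UBM covariances, and N and f the zero- and centered first-order statistics of the utterance. A scalar sketch for a hypothetical one-dimensional subspace with per-Gaussian variances (the real computation is matrix-valued, which is exactly what the thesis's simplifications target):

```python
def ivector_1d(T, sigma, N, f):
    """Posterior mean of a 1-D i-vector:
    w = (1 + sum_c T[c]^2 * N[c] / sigma[c])^-1 * sum_c T[c] * f[c] / sigma[c],
    with per-Gaussian subspace entries T[c], variances sigma[c], zero-order
    stats N[c], and centered first-order stats f[c]."""
    precision = 1.0 + sum(T[c] * N[c] * T[c] / sigma[c] for c in range(len(T)))
    proj = sum(T[c] * f[c] / sigma[c] for c in range(len(T)))
    return proj / precision

# Hypothetical two-Gaussian example.
w = ivector_1d(T=[0.5, 0.2], sigma=[1.0, 2.0], N=[10.0, 4.0], f=[3.0, 1.0])
# precision = 1 + 2.5 + 0.08 = 3.58; proj = 1.5 + 0.1 = 1.6; w = 1.6 / 3.58
```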

Network-wide Security Analysis
de Silva, Hidda Marakkala Gayan Ruchika ; Šafařík, Jiří (opponent) ; Šlapal, Josef (opponent) ; Švéda, Miroslav (supervisor)
The objective of the research is to model and analyze the effects of dynamic routing protocols. The thesis addresses the analysis of service reachability, configurations, routing, and security filters on dynamic networks in the event of device or link failures. The research contains two main sections: modeling and analysis. The first section covers modeling of network topology, protocol behaviors, device configurations, and filters, using graph algorithms, routing redistribution theory, relational algebra, and temporal logics. For the reachability analysis, a modified topology table was introduced: a unique centralized table for a given network that is invariant across network states. For the analysis of configurations, a constraint-based analysis was developed using XSD Prolog. Routing and redistribution were analyzed using routing information bases, and a SAT-based decision procedure was incorporated for analyzing the filtering rules. Part of the analysis was integrated into a simulation tool in the OMNeT++ environment. Several innovations are introduced in this thesis: the filtering network graph, the modified topology table, a general state that reduces the state space, modeling devices as filtering nodes, and the constraint-based analysis. The abstract network graph, the forwarding device model, and redistribution with routing information are extensions of existing research. In conclusion, this thesis presents novel approaches, modeling methods, and analysis techniques for dynamic networks; integrating these methods into a simulation tool would be of great value to network designers and administrators.
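The core reachability question (can a service still be reached when a link fails and devices filter traffic?) reduces, in its simplest form, to graph search over a network graph with filtering nodes. A minimal BFS sketch with a hypothetical topology; the thesis's modified topology table and SAT-based filter analysis go far beyond this:

```python
from collections import deque

def reachable(links, blocked, src, dst, failed=frozenset()):
    """BFS over an undirected network graph. `links` is an iterable of
    (a, b) edges, `blocked` a set of nodes whose filters drop the traffic,
    `failed` a set of edges that are down. Returns True if dst is reachable."""
    adj = {}
    for a, b in links:
        if (a, b) in failed or (b, a) in failed:
            continue  # skip links that are down in this network state
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in adj.get(node, []):
            if nxt not in seen and nxt not in blocked:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Hypothetical topology: host h1, routers r1-r3, server srv, two disjoint paths.
links = [("h1", "r1"), ("r1", "r2"), ("r2", "srv"), ("r1", "r3"), ("r3", "srv")]
print(reachable(links, blocked=set(), src="h1", dst="srv"))                          # True
print(reachable(links, blocked=set(), src="h1", dst="srv", failed={("r2", "srv")}))  # True, via r3
print(reachable(links, blocked={"r3"}, src="h1", dst="srv", failed={("r2", "srv")})) # False
```

Analyzing all single-failure states then amounts to re-running the query once per link, which is what a state-invariant structure like the thesis's modified topology table avoids.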