Národní úložiště šedé literatury

A Low-Dispersion Optical Resonator for a Length Sensor Based on an Optical Frequency Comb
Pravdová, Lenka ; Hucl, Václav ; Lešundák, Adam ; Lazar, Josef ; Číp, Ondřej
Ultra-precise length measurements are the domain of laser interferometers. At our institute we have proposed and experimentally verified a measurement method based on an optical resonator that exploits the broadband radiation of an optical frequency comb. The measured length, i.e. the resonator length, is converted into the repetition rate of the mode-locked pulsed laser of the optical frequency comb. In this contribution we present a comparison of the absolute scale of the optical resonator with the scale of an incremental interferometer. The incremental interferometer was added to the setup to verify the resonator's absolute scale. The two-beam incremental interferometer operates at a wavelength of 633 nm, and the measuring mirror of the resonator, equipped with a piezo actuator, conveniently serves at the same time as the retroreflector for this interferometer. The dominant error signal is the periodic nonlinearity of the incremental interferometer's scale. The relative resolution of our method reaches 10⁻⁹ while the absolute measurement scale is preserved.
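At the heart of the method is a simple relation: for a linear Fabry-Perot cavity whose free spectral range is matched to a harmonic of the comb's repetition rate, the cavity length follows directly from that repetition rate. The sketch below illustrates this relation; the mode number m and the 100 MHz repetition rate are illustrative assumptions, not values from the experiment.

```python
# Minimal sketch: converting the comb repetition rate into an absolute
# cavity length, assuming a linear Fabry-Perot resonator whose free
# spectral range equals the m-th harmonic of the repetition rate.
# The value of m and the 100 MHz rate are illustrative only.

C = 299_792_458.0  # speed of light in vacuum [m/s]

def cavity_length(f_rep_hz: float, m: int) -> float:
    """Cavity length L = m * c / (2 * f_rep) for a linear resonator."""
    return m * C / (2.0 * f_rep_hz)

# Example: a 100 MHz comb locked so the cavity FSR equals f_rep (m = 1)
L = cavity_length(100e6, m=1)            # ~1.499 m
dL = cavity_length(100e6 - 1.0, 1) - L   # length change per 1 Hz of f_rep
print(f"L = {L:.9f} m, sensitivity ~ {dL * 1e9:.2f} nm/Hz")
```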

Valuation of the Real Estate of a Production Facility at the Usual Price and Assessment of Its Development Potential
Hanzlovská, Nikola ; Gardášová, Alena (reviewer) ; Hlavinková, Vítězslava (supervisor)
The main goal of this master's thesis is to determine the usual price of a production facility using valuation methods. A secondary goal was to use the valuation results to determine the best possible use of the property and its development potential. The thesis is divided into a theoretical and a practical part. The theoretical part describes the valuation methods in detail, divided into two main groups: valuation according to the price regulation and market valuation. In the practical part I describe the locality and the real estate and then apply the valuation methods to the production facility. The thesis concludes with a recapitulation and analysis of the results, where the differences are assessed and the usual price is determined. The last chapter is devoted to the best use of the facility and the assessment of its development potential.
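As a purely illustrative example of the market-valuation side mentioned above, the following sketch shows the arithmetic of one common method, direct capitalization of net operating income; all figures and the 8 % capitalization rate are hypothetical.

```python
# Illustrative sketch of one market-valuation method: direct
# capitalization of net operating income. All figures are hypothetical
# and serve only to show the arithmetic of the approach.

def direct_capitalization(annual_rent: float, operating_costs: float,
                          cap_rate: float) -> float:
    """Value = net operating income / capitalization rate."""
    noi = annual_rent - operating_costs
    return noi / cap_rate

# e.g. a production hall with 2.4 M CZK gross rent, 0.6 M CZK costs,
# capitalized at 8 %:
value = direct_capitalization(2_400_000, 600_000, 0.08)
print(f"indicated value: {value:,.0f} CZK")  # 22,500,000 CZK
```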

Assessment of the Benefits of Reconversion in Terms of the Value of a Church Building in a Selected Locality
Strnková, Markéta ; Klika, Pavel (reviewer) ; Hlavinková, Vítězslava (supervisor)
This master's thesis deals with the reconversion of a church building in a selected locality. It first describes church buildings and their influence on their surroundings, the structure of the Roman Catholic Church, and the historical development of church property. This is complemented by basic terminology and possible approaches to the valuation of cultural monuments. The practical part deals with the reconversion of a specific church building, the Dominican monastery in Znojmo. Based on an analysis of Znojmo and of the monastery, possible new uses of the monastery were proposed. Two variants were selected from these and compared, and one of them was chosen as the most suitable option for future use.

New Methods for Increasing Efficiency and Speed of Functional Verification
Zachariášová, Marcela ; Dohnal, Jan (reviewer) ; Steininger, Andreas (reviewer) ; Kotásek, Zdeněk (supervisor)
In the development of current hardware systems, e.g. embedded systems or computer hardware, new ways to increase their reliability are being intensively investigated. One way to tackle reliability is to increase the efficiency and speed of the verification processes performed in the early phases of the design cycle. This Ph.D. thesis focuses on the verification approach called functional verification. Several challenges and problems connected with the efficiency and speed of functional verification are identified and reflected in the goals of the thesis. The first goal is to reduce the simulation runtime when verifying complex hardware systems, because simulating inherently parallel hardware is very slow compared to the speed of the real hardware. An optimization technique is proposed that moves the verified system onto an FPGA acceleration board while the rest of the verification environment runs in simulation; this single move significantly reduces the simulation overhead. The second goal addresses manually written verification environments, which are a major bottleneck in verification productivity. This manual effort is largely unnecessary, because almost all verification environments share the same structure: they use libraries of basic components from the standard verification methodologies and are only adjusted to the system under verification. The second optimization technique therefore takes a high-level specification of the system and automatically generates a comprehensive verification environment for it. The third goal elaborates how the completeness of the verification process can be achieved using intelligent automation. Completeness is measured by coverage metrics, and verification usually ends when a satisfactory coverage level is reached. The third optimization technique therefore drives the generation of input stimuli so as to activate multiple coverage points in the verified system and raise the overall coverage rate. The main optimization tool is a genetic algorithm adapted for functional verification, with its parameters tuned for this domain. It runs in the background of the verification process, analyses the coverage, and dynamically changes the constraints of the stimuli generator; the constraints are the probabilities with which particular values are selected from the input domain. The fourth goal discusses the reusability of verification stimuli for regression testing and how these stimuli can be further optimized to speed up the testing. It is quite common in verification that, until a satisfactory coverage level is reached, many redundant stimuli are evaluated, because they are produced by pseudo-random generators. When creating optimal regression suites, however, this redundancy is no longer needed and can be removed, while the same coverage level must be retained in order to check all the key properties of the system. The fourth optimization technique is also based on the genetic algorithm, but it is not integrated into the verification process; it works offline after the verification has ended. It removes the redundancy from the original suite of stimuli quickly and effectively, so the resulting runtime of the regression suite is significantly improved.
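The coverage-driven loop of the third technique can be pictured with a toy sketch: a genetic algorithm evolves the probability constraints of a stimulus generator so that more coverage points fire. Everything below, including the pairwise coverage model, population size, and mutation scheme, is an illustrative stand-in, not the thesis's tooling.

```python
# Toy sketch of coverage-driven stimulus generation: a genetic
# algorithm tunes the weights (probability constraints) of a random
# stimulus generator to maximize the fraction of coverage points hit.
import random

N_OPS = 4                      # input domain: 4 instruction classes
COVERAGE_POINTS = {(a, b) for a in range(N_OPS) for b in range(N_OPS)}

def run_simulation(weights, n_stimuli=200):
    """Draw stimuli from the weighted generator; return covered points."""
    ops = random.choices(range(N_OPS), weights=weights, k=n_stimuli)
    return {(a, b) for a, b in zip(ops, ops[1:])}  # pairwise coverage

def fitness(weights):
    return len(run_simulation(weights)) / len(COVERAGE_POINTS)

def mutate(weights, sigma=0.1):
    return [max(1e-3, w + random.gauss(0, sigma)) for w in weights]

population = [[random.random() + 0.1 for _ in range(N_OPS)]
              for _ in range(8)]
for generation in range(10):
    population.sort(key=fitness, reverse=True)
    survivors = population[:4]                               # selection
    population = survivors + [mutate(w) for w in survivors]  # variation
print(f"best coverage: {fitness(population[0]):.0%}")
```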

Exploitation of GPU in graphics and image processing algorithms
Jošth, Radovan ; Svoboda, David (reviewer) ; Trajtel, Ľudovít (reviewer) ; Herout, Adam (supervisor)
This thesis introduces several selected algorithms that were originally developed for CPUs but, given the high demand for performance improvements, have been adapted for GPGPU; this adaptation was itself the goal of our research. The research was performed on CUDA-enabled devices. The thesis is divided according to the three groups of algorithms that were investigated: real-time object detection, spectral image analysis, and real-time line detection. The research on real-time object detection used LRD and LRP features, the research on spectral image analysis used the PCA and NTF algorithms, and for real-time line detection we modified the accumulation scheme of the Hough transform in two different ways. Before the particular algorithms and the performed research are explained, an overview of the GPU architecture and GPGPU is provided in the second chapter, right after the introduction. The chapter dedicated to the research achievements focuses on the methodology used for the different algorithm modifications and on the author's approach to the research, as well as on several products developed during the research. The final part of the thesis concludes our research and discusses its impact.
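For orientation, the following sketch shows the baseline Hough-transform accumulation that such GPU work re-maps onto parallel hardware: every edge pixel votes for all (theta, rho) lines passing through it. The accumulator sizes and the toy input are assumptions for illustration only.

```python
# Baseline sketch of Hough-transform line detection: each edge pixel
# casts a vote for every (theta, rho) parameterization of a line
# through it; peaks in the accumulator correspond to dominant lines.
import numpy as np

def hough_lines(edges: np.ndarray, n_theta=180, n_rho=400):
    h, w = edges.shape
    rho_max = np.hypot(h, w)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edges)                    # edge pixel coordinates
    for theta_idx, t in enumerate(thetas):        # vote per angle
        rho = xs * np.cos(t) + ys * np.sin(t)     # signed distance
        rho_idx = np.round((rho + rho_max) * (n_rho - 1)
                           / (2 * rho_max)).astype(int)
        np.add.at(acc, (rho_idx, theta_idx), 1)   # accumulate votes
    return acc, thetas

img = np.zeros((64, 64), dtype=bool)
img[np.arange(64), np.arange(64)] = True          # a diagonal line
acc, thetas = hough_lines(img)
print(np.unravel_index(acc.argmax(), acc.shape))  # peak = the line
```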

Subspace Modeling of Prosodic Features for Speaker Verification
Kockmann, Marcel ; Kenny, Patrick (reviewer) ; Nöth, Elmar (reviewer) ; Černocký, Jan (supervisor)
The thesis investigates speaker verification by means of prosodic features. This includes an appropriate representation of speech by measurements of pitch, energy, and duration of speech sounds. Two diverse parameterization methods are investigated: the first leads to a low-dimensional, well-defined feature set, the second to a large-scale set of heterogeneous prosodic features. The first part of this work concentrates on the development of so-called prosodic contour features. Different modeling techniques are developed and investigated, with a special focus on subspace modeling. The second part focuses on a novel subspace modeling technique for the heterogeneous large-scale prosodic features. The model is theoretically derived and experimentally evaluated on official NIST Speaker Recognition Evaluation tasks. Large improvements over the previous state of the art in prosodic speaker verification were obtained. Finally, a novel fusion method is presented that elegantly combines the two diverse prosodic systems. This technique can also be used to fuse the higher-level systems with a high-performing cepstral system, leading to further significant improvements.
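A rough sketch of the contour-feature idea: each segment's pitch track is compressed to a few coefficients and a low-dimensional subspace is fitted over all segments. The DCT parameterization, its order, and the PCA dimension below are common choices assumed for illustration, not the thesis's exact configuration.

```python
# Illustrative sketch of contour-style prosodic features: each speech
# segment's pitch track is compressed to a few DCT coefficients, and a
# low-dimensional subspace is then fitted over all segments with PCA.
import numpy as np
from scipy.fft import dct
from sklearn.decomposition import PCA

def contour_features(pitch_track: np.ndarray, order: int = 6):
    """First `order` DCT coefficients of one segment's pitch contour."""
    return dct(pitch_track, norm="ortho")[:order]

rng = np.random.default_rng(0)
segments = [100 + 20 * rng.standard_normal(rng.integers(30, 80))
            for _ in range(500)]                 # fake pitch tracks [Hz]
X = np.stack([contour_features(s) for s in segments])

subspace = PCA(n_components=3).fit(X)            # prosodic subspace
print(subspace.explained_variance_ratio_)
```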

The Influence of Reconversion on the Value of a Former Church Building in South Moravia
Sýkorová, Michaela ; Gardášová, Alena (reviewer) ; Hlavinková, Vítězslava (supervisor)
This master's thesis deals with the valuation of a former rectory in the village of Přítluky in South Moravia. The basic idea is to evaluate the effect that a change of use combined with a renovation has on the value of the building, both from the market perspective and under the applicable valuation regulations. The thesis first covers the theoretical background of the topic: terminology, legislation, and valuation procedures. It then characterizes the specific situation of the locality and the building, and the last part describes and justifies the methodology of the valuation itself. In conclusion, the results are evaluated and discussed.

Security of Contactless Smart Card Protocols
Henzl, Martin ; Rosa, Tomáš (reviewer) ; Staudek, Jan (reviewer) ; Hanáček, Petr (supervisor)
This thesis analyses threats to contactless smart card protocols and presents a method of semi-automated vulnerability discovery in such protocols using model checking. Designing and implementing secure applications is difficult even when secure hardware is used, since a high-level application specification may lead to different implementations, and an inappropriate protocol implementation may introduce a vulnerability even if the protocol itself is secure. The goal of this thesis is to provide a method that protocol developers can use to create a model of an arbitrary smart card, with a focus on contactless smart cards, to create a model of the protocol, and to use model checking to find attacks in this model. A found attack can then be executed and, if it is not successful, the model is refined for another model checker run. The AVANTSSAR platform was used for the formal verification; the models are written in the ASLan++ language. Examples are provided to demonstrate the usability of the proposed method, which was also used to find a weakness in the Mifare DESFire contactless smart card. The thesis also deals with threats that cannot be covered by the proposed method, such as relay attacks.
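The model-checking idea can be sketched as a breadth-first search over an abstract protocol state space for a reachable state that violates a security property. The toy transition relation below is a made-up stand-in for a real ASLan++ model; a returned trace corresponds to the counterexample an actual model checker would report.

```python
# Toy sketch of protocol model checking: breadth-first exploration of
# an abstract state space, searching for a reachable state violating a
# security property. The transition relation is a made-up stand-in.
from collections import deque

def transitions(state):
    """Attacker-augmented moves: the intruder may replay or skip steps."""
    step, authenticated = state
    yield (step + 1, authenticated or step == 2)   # honest protocol step
    yield (step, authenticated)                    # intruder: replay
    if step >= 1:
        yield (3, authenticated)                   # intruder: skip ahead

def find_attack(initial, violates, max_step=3):
    seen, queue = {initial}, deque([(initial, [])])
    while queue:
        state, trace = queue.popleft()
        if violates(state):
            return trace + [state]                 # counterexample trace
        for nxt in transitions(state):
            if nxt not in seen and nxt[0] <= max_step:
                seen.add(nxt)
                queue.append((nxt, trace + [state]))
    return None

# property: the card must never reach the final step unauthenticated
attack = find_attack((0, False), lambda s: s[0] == 3 and not s[1])
print(attack)  # a trace here means the modeled protocol is attackable
```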

Optimization of Algorithms and Data Structures for Regular Expression Matching Using FPGA Technology
Kaštil, Jan ; Plíva, Zdeněk (reviewer) ; Vlček, Karel (reviewer) ; Kotásek, Zdeněk (supervisor)
This thesis deals with fast regular expression matching using FPGAs. Regular expression matching in high-speed computer networks is a computationally intensive operation used mostly in network security and network traffic monitoring. Current solutions do not achieve the throughput required by modern networks while meeting all the requirements placed on the matching unit; innovative hardware architectures implemented in FPGAs or ASICs have the highest throughput. This thesis describes two new architectures suitable for FPGA and ASIC implementation. Their basic idea is to use a perfect hash function to implement the transition function of a deterministic finite automaton. An architecture is also introduced that allows the user to introduce a small probability of error into the matching process in order to reduce the memory requirements of the matching unit. The thesis analyses the effect of these errors on the overall reliability of the system and compares it with the reliability of the currently used approach. Measurements of the properties of regular expressions used for traffic analysis in modern computer networks were performed, and the analysis implies that most of the regular expressions in use are suitable for implementation by the proposed architectures. To guarantee a high throughput of the matching unit, a new algorithm for alphabet transformation is proposed; it transforms the automaton so that it accepts several input characters per transition. The main advantage of the proposed algorithm over current solutions is that it places no limit on the number of characters accepted at once. The implemented architectures were compared with the current state-of-the-art algorithm, and a memory reduction of 200 MB was achieved.
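The core idea of the transition-function representation can be sketched in a few lines: the DFA's transitions are stored in a hash table keyed by (state, symbol), so a missing entry means rejection. The thesis builds a perfect hash for this table in hardware; an ordinary Python dict and a toy automaton for the pattern "ab+c" stand in here.

```python
# Minimal sketch: a DFA whose transition function is a hash table
# keyed by (state, symbol). The thesis realizes this table with a
# perfect hash function in hardware; a dict stands in here.

TRANSITIONS = {            # (state, symbol) -> next state; others reject
    (0, "a"): 1,
    (1, "b"): 2,
    (2, "b"): 2,
    (2, "c"): 3,
}
ACCEPTING = {3}

def matches(text: str) -> bool:
    state = 0
    for ch in text:
        nxt = TRANSITIONS.get((state, ch))
        if nxt is None:       # missing entry = no transition = reject
            return False
        state = nxt
    return state in ACCEPTING

print(matches("abbc"), matches("ac"))  # True False
```

The alphabet transformation mentioned above would, in this picture, re-key the table by (state, symbol group) so that each lookup consumes several input characters at once.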

Methods for class prediction with high-dimensional gene expression data
Šilhavá, Jana ; Matula, Petr (reviewer) ; Železný, Filip (reviewer) ; Smrž, Pavel (supervisor)
This thesis deals with class prediction from high-dimensional gene expression data. During the last decade, an increasing amount of genomic data has become available, and combining gene expression data with other data can be useful in clinical management, where it can improve the prediction of disease prognosis. The main part of this thesis is aimed at combining gene expression data with clinical data. We use logistic regression models that can be built with various regularization techniques; generalized linear models enable us to combine models over data with different structures. It is shown that such a combination may yield more accurate predictions than those based on gene expression or clinical data alone. The suggested approaches are not computationally intensive. Evaluations are performed on simulated data sets in different settings and then on real benchmark data sets. The work also characterizes the additional predictive value of microarrays. The thesis includes a comparison of selected features of gene expression classifiers built on five different breast cancer data sets. Finally, a feature selection method that combines gene expression data with gene ontology information is proposed.
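A minimal sketch of the combination idea: a regularized logistic regression fitted on gene-expression features concatenated with a handful of clinical covariates. The synthetic data, dimensions, and the choice of an L2 penalty below are illustrative assumptions, not the thesis's exact models.

```python
# Hedged sketch: regularized logistic regression on gene-expression
# features concatenated with clinical covariates. All data below are
# synthetic; the L2 penalty and its strength are arbitrary choices.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 120
expression = rng.standard_normal((n, 2000))   # high-dimensional block
clinical = rng.standard_normal((n, 5))        # low-dimensional block
y = (0.8 * clinical[:, 0] + expression[:, 0]
     + 0.5 * rng.standard_normal(n) > 0).astype(int)

X = np.hstack([expression, clinical])         # combined predictor matrix
model = LogisticRegression(penalty="l2", C=0.1, max_iter=5000)
print(cross_val_score(model, X, y, cv=5).mean())  # mean CV accuracy
```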