Národní úložiště šedé literatury

Diamond coated AlGaN/GaN high electron mobility transistors - effect of deposition process on gate electrode
Vanko, G. ; Ižák, Tibor ; Babchenko, O. ; Kromka, Alexander
We studied the influence of diamond deposition on the degradation of Schottky gate electrodes (Ir or IrO2) and on the electrical characteristics of AlGaN/GaN high electron mobility transistors (HEMTs). In the present study, diamond films were selectively deposited on AlGaN/GaN circular HEMTs by focused (ellipsoidal cavity reactor) and linear-antenna (surface wave) microwave plasma at temperatures from 400 °C to 1100 °C. Preliminary electrical measurements on the diamond-coated c-HEMTs showed degraded electrical properties compared to the c-HEMTs before the deposition process, which was attributed to degradation of the Ir gate electrodes even at temperatures as low as 400 °C. On the other hand, the metal oxide gate electrode layer (IrO2) can withstand the diamond CVD process even at high temperatures (~900 °C), which makes it suitable for the fabrication of all-in-diamond c-HEMT devices for high-power applications.

A frequency-stabilized semiconductor laser source for high-resolution interferometry
Řeřucha, Šimon ; Hucl, Václav ; Holá, Miroslava ; Čížek, Martin ; Pham, Minh Tuan ; Pravdová, Lenka ; Lazar, Josef ; Číp, Ondřej
We assembled an experimental laser system based on a DBR (Distributed Bragg Reflector) laser diode, frequency stabilized to absorption lines in molecular iodine vapour. The laser system operates at a wavelength close to that of stabilized helium-neon (HeNe) lasers (i.e. 633 nm), which represent the de facto standard laser source in length metrology. The aim was to verify that the parameters of such a system allow it to serve as a replacement for HeNe lasers, while additionally offering a wider optical frequency tuning range, a larger tuning bandwidth and higher output power. We experimentally verified the basic characteristics of the laser source and compared them with those of a typical frequency-stabilized HeNe laser, using an experimental setup close to typical applications of laser interferometry in length metrology. The results demonstrate that the DBR-diode-based laser system is a suitable source for applications in length (nano)metrology: it preserves the fundamental requirements on a laser source, such as frequency stability and coherence length, while enabling optical frequency tuning over more than 0.5 nm with a modulation bandwidth of up to several MHz, higher output power on the order of several mW and, thanks to the stabilization, fundamental metrological traceability.
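For scale, the quoted tuning range of more than 0.5 nm near 633 nm can be converted into optical frequency units with the standard first-order relation Δν = c·Δλ/λ². A minimal sketch of the conversion; only the 0.5 nm and 633 nm values come from the abstract:

```python
# Converts a wavelength tuning span into an optical frequency span using
# the first-order relation delta_nu = c * delta_lambda / lambda^2.
C = 299_792_458.0  # speed of light in vacuum, m/s

def tuning_range_hz(delta_lambda_m: float, wavelength_m: float) -> float:
    """Optical frequency span corresponding to a small wavelength span."""
    return C * delta_lambda_m / wavelength_m ** 2

span = tuning_range_hz(0.5e-9, 633e-9)
print(f"{span / 1e9:.0f} GHz")  # on the order of a few hundred GHz
```

This is several orders of magnitude more than the typical thermal tuning range of a stabilized HeNe tube, which is what makes the DBR source attractive as a replacement.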

A low-dispersion optical cavity for a length sensor based on an optical frequency comb
Pravdová, Lenka ; Hucl, Václav ; Lešundák, Adam ; Lazar, Josef ; Číp, Ondřej
Ultra-precise length measurements are the domain of laser interferometers. At our institute we have designed and experimentally verified a measurement method employing an optical cavity illuminated by the broadband radiation of an optical frequency comb. The measured length, i.e. the cavity length, is converted to the value of the repetition rate of the mode-locked pulsed laser of the optical frequency comb. In this contribution we present a comparison of the absolute scale of the optical cavity with the scale of an incremental interferometer. The incremental interferometer is added to the setup to provide the required verification of the optical cavity scale. The two-beam incremental interferometer operates at a wavelength of 633 nm, and the cavity measuring mirror, equipped with a piezo actuator, conveniently serves at the same time as the retroreflector for this interferometer. The dominant error signal is the periodic nonlinearity of the incremental interferometer scale. The relative resolution of our method thus reaches 10^-9 while the absolute scale of the measurement is preserved.
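The length-to-frequency conversion at the heart of such a method can be sketched numerically. The resonance condition L = q·c/(2·f_rep) and the chosen mode number q are textbook assumptions here, not details taken from the abstract:

```python
# Sketch of the cavity-length-to-repetition-rate conversion: when the
# comb is kept resonant with the cavity, the length L and repetition
# rate f_rep satisfy L = q * c / (2 * f_rep) for some integer q.
# The numbers (250 MHz comb, q = 1) are illustrative assumptions.
C = 299_792_458.0  # speed of light in vacuum, m/s

def cavity_length(f_rep_hz: float, q: int) -> float:
    """Cavity length resonant with comb mode q at repetition rate f_rep."""
    return q * C / (2.0 * f_rep_hz)

def length_change(f_rep_hz: float, df_hz: float, q: int) -> float:
    """Length change corresponding to a small repetition-rate change."""
    return cavity_length(f_rep_hz + df_hz, q) - cavity_length(f_rep_hz, q)

L = cavity_length(250e6, 1)          # roughly 0.6 m for a 250 MHz comb
dL = length_change(250e6, 0.25, 1)   # a 1e-9 relative f_rep shift
print(f"L = {L:.4f} m, dL = {dL:.3e} m")
```

A relative repetition-rate change of 10^-9 maps to a sub-nanometre length change, which is consistent with the relative resolution quoted in the abstract.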

Frequency noise detection of a semiconductor laser operating at a wavelength of 729 nm
Pham, Minh Tuan ; Čížek, Martin ; Hucl, Václav ; Lazar, Josef ; Hrabina, Jan ; Řeřucha, Šimon ; Lešundák, Adam ; Číp, Ondřej
This work deals with the frequency noise analysis of an external cavity diode laser (ECDL) operating at a wavelength of 729 nm. The ECDL will serve as the excitation laser for the forbidden transition of a trapped and cooled 40Ca+ ion. For this reason, the spectral linewidth of the laser must be on the order of Hz or below. Part of the work is the experimental design of a setup that narrows the spectral line by phase-locking the laser frequency to a selected component of an optical frequency comb, where the noise is suppressed by a fast electronic servo-loop controller that drives the laser injection current.
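The effect of such a servo loop can be illustrated with a toy discrete-time model; the PI controller gains and the noise level below are arbitrary illustrative assumptions, not parameters of the actual setup:

```python
import random

# Toy model of the fast servo loop: the laser frequency error (beat
# against a comb tooth) random-walks when free-running, and a PI
# controller acting on the laser current pulls it back each step.
def simulate(steps: int, kp: float, ki: float, seed: int = 1) -> float:
    """Worst absolute frequency error (arbitrary units) over the run."""
    rng = random.Random(seed)
    err, integ, worst = 0.0, 0.0, 0.0
    for _ in range(steps):
        err += rng.gauss(0.0, 1.0)      # free-running frequency walk-off
        integ += err
        err -= kp * err + ki * integ    # correction applied to the current
        worst = max(worst, abs(err))
    return worst

open_loop = simulate(2000, 0.0, 0.0)    # no control: unbounded walk
closed_loop = simulate(2000, 1.0, 0.5)  # PI lock: error stays bounded
print(round(open_loop, 1), round(closed_loop, 1))
```

The open-loop error grows without bound (a random walk), while the locked error stays within a few noise units, mimicking the linewidth narrowing achieved by the real phase lock.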

Noise, Transport and Structural Properties of High Energy Radiation Detectors Based on CdTe
Šik, Ondřej ; Lazar, Josef (oponent) ; Navrátil, Vladislav (oponent) ; Grmela, Lubomír (vedoucí práce)
Driven by demands from space research, healthcare and the nuclear safety industry, gamma- and X-ray detection and imaging is a rapidly growing topic of research. CdTe and its alloy CdZnTe are materials suitable for detecting high-energy photons in the range from 10 keV to 500 keV. Their 1.46-1.6 eV band gap allows the production of high-resistivity (10^10-10^11 Ω·cm) crystals, high enough for room-temperature X-ray detection and imaging. My work investigated CdTe/CdZnTe detectors in various states of defectiveness: detector-grade crystals, crystals with lower resistivity and enhanced polarization, detectors with asymmetric electrical characteristics and thermally degraded crystals were analysed in terms of their current stability, excess noise, electric field distribution and structural properties. The noise analysis showed that an enhanced concentration of defects changes the monotonic 1/f noise spectrum into a spectrum with significant contributions of generation-recombination mechanisms. Another signature of deteriorated sample quality was a growth of the noise power spectral density with applied voltage faster than quadratic. Structural and chemical analyses showed diffusion of the contact metal and trace elements deep into the crystal bulk. Part of this work also focuses on surface modification by an argon ion beam and its effect on the chemical and morphological properties of the surface.
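The two spectral shapes discussed above can be captured by a simple model: a 1/f^α flicker term plus a generation-recombination Lorentzian. The parameter values below are arbitrary; only the functional form follows the text:

```python
# Illustrative noise PSD model: flicker (1/f^alpha) noise plus one
# generation-recombination Lorentzian with corner frequency f_c.
def psd(f: float, a_flicker: float, alpha: float,
        a_gr: float, f_c: float) -> float:
    """Noise power spectral density at frequency f (arbitrary units)."""
    flicker = a_flicker / f ** alpha
    gr = a_gr / (1.0 + (f / f_c) ** 2)   # Lorentzian g-r component
    return flicker + gr

# With a strong g-r term the spectrum flattens below the corner
# frequency and rolls off as 1/f^2 above it, unlike the pure-1/f case.
for f in (1.0, 10.0, 100.0, 1000.0):
    print(f, psd(f, 1e-3, 1.0, 1e-2, 50.0))
```

A defect-rich sample would correspond to a large `a_gr`, giving the characteristic plateau-plus-roll-off shape on top of the flicker background.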

New Methods for Increasing Efficiency and Speed of Functional Verification
Zachariášová, Marcela ; Dohnal, Jan (oponent) ; Steininger, Andreas (oponent) ; Kotásek, Zdeněk (vedoucí práce)
In the development of current hardware systems, e.g. embedded systems or computer hardware, new ways to increase their reliability are intensively investigated. One way to tackle the issue of reliability is to increase the efficiency and speed of the verification processes performed in the early phases of the design cycle. This Ph.D. thesis focuses on the verification approach called functional verification. Several challenges and problems connected with the efficiency and speed of functional verification are identified and reflected in the goals of the thesis. The first goal is the reduction of simulation runtime when verifying complex hardware systems, as the simulation of inherently parallel hardware systems is very slow in comparison to the speed of real hardware. An optimization technique is proposed that moves the verified system onto an FPGA acceleration board while the rest of the verification environment runs in simulation; this single move significantly reduces the simulation overhead. The second goal deals with manually written verification environments, which represent a huge bottleneck in verification productivity. This manual effort is unreasonable, because almost all verification environments have the same structure: they utilize libraries of basic components from the standard verification methodologies and are only adjusted to the system being verified. Therefore, the second optimization technique takes a high-level specification of the system and automatically generates a comprehensive verification environment for it. The third goal elaborates how the completeness of the verification process can be achieved using intelligent automation. Completeness is measured by different coverage metrics, and verification usually ends when a satisfying level of coverage is achieved. Therefore, the third optimization technique drives the generation of input stimuli in order to activate multiple coverage points in the verified system and to increase the overall coverage rate. The main optimization tool is a genetic algorithm, adapted for functional verification purposes with its parameters tuned for this domain. It runs in the background of the verification process, analyses the coverage and dynamically changes the constraints of the stimuli generator; the constraints are represented by the probabilities with which particular values from the input domain are selected. The fourth goal discusses the reusability of verification stimuli for regression testing and how these stimuli can be further optimized to speed up the testing. It is quite common in verification that, until a satisfying level of coverage is achieved, many redundant stimuli are evaluated, as they are produced by pseudo-random generators. However, when creating optimal regression suites, this redundancy is no longer needed and can be removed, while retaining the same level of coverage in order to check all the key properties of the system. The fourth optimization technique is also based on the genetic algorithm, but it is not integrated into the verification process; it works offline after the verification has ended. It removes the redundancy from the original suite of stimuli quickly and effectively, so the resulting runtime of the regression suite is significantly improved.
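The offline redundancy-removal goal can be pictured as a coverage-preserving suite reduction. The sketch below uses a plain greedy set-cover heuristic rather than the genetic algorithm from the thesis, and the stimuli and coverage points are invented:

```python
# Sketch of offline regression-suite reduction: each stimulus covers a
# set of coverage points; keep a small subset that preserves the total
# coverage. (The thesis uses a genetic algorithm; this greedy set-cover
# heuristic only illustrates the goal: same coverage, fewer stimuli.)
def reduce_suite(stimuli: dict[str, set[str]]) -> list[str]:
    target = set().union(*stimuli.values())
    chosen: list[str] = []
    covered: set[str] = set()
    while covered != target:
        # pick the stimulus adding the most not-yet-covered points
        best = max(stimuli, key=lambda s: len(stimuli[s] - covered))
        chosen.append(best)
        covered |= stimuli[best]
    return chosen

suite = {
    "s1": {"c1", "c2"},
    "s2": {"c2"},            # redundant: already covered by s1
    "s3": {"c3", "c4"},
    "s4": {"c1", "c3"},      # redundant once s1 and s3 are kept
}
print(reduce_suite(suite))
```

The reduced suite hits every coverage point of the original while dropping the redundant pseudo-random stimuli, which is exactly what shortens the regression runtime.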

Packet Classification Algorithms
Puš, Viktor ; Lhotka, Ladislav (oponent) ; Dvořák, Václav (vedoucí práce)
This thesis deals with packet classification in computer networks. Classification is the key task in many networking devices, most notably packet filters (firewalls); the thesis therefore concerns the area of computer security. It is focused on high-speed networks with bandwidths of 100 Gb/s and beyond. General-purpose processors cannot be used in such cases because their performance is not sufficient; instead, specialized hardware is used, mainly ASICs and FPGAs. Many packet classification algorithms designed for hardware implementation have been presented, yet these approaches are not ready for very high-speed networks. This thesis addresses the design of new high-speed packet classification algorithms targeted at implementation in dedicated hardware. An algorithm is proposed that decomposes the problem into several easier sub-problems. The first sub-problem is the longest prefix match (LPM) operation, which is also used in IP packet routing; as LPM algorithms with sufficient speed have already been published, they can be used in our context. The following sub-problem is mapping the prefixes to rule numbers. This is where the thesis brings innovation, by using a specifically constructed hash function. This hash function allows the mapping to be done in constant time and requires only one memory with a narrow data bus. The algorithm throughput can be determined analytically and is independent of the number of rules and of the network traffic characteristics. With the use of available parts, a throughput of 266 million packets per second can be achieved. Three additional algorithms (PFCA, PCCA, MSPCCA) presented in this thesis are designed to lower the memory requirements of the first one without compromising its speed. The second algorithm lowers the memory size by 11 % to 96 %, depending on the rule set. Its disadvantage of low stability is removed by the third algorithm, which reduces the memory requirements by 31 % to 84 % compared to the first one. The fourth algorithm combines the third one with an older approach and, thanks to the use of several techniques, lowers the memory requirements by 73 % to 99 %.
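The decomposition described above can be illustrated in a few lines: an LPM per field yields prefix identifiers, and the identifier tuple is mapped to a rule number by a single hash lookup. A Python dict stands in for the specially constructed hash function from the thesis, and the two-field rule set is invented:

```python
# Illustration of the decompose-then-map idea: longest-prefix match on
# each header field, then one hash lookup from the prefix tuple to the
# rule number. A dict stands in for the thesis's constant-time hash.
def lpm(prefixes: list[str], bits: str) -> str:
    """Longest matching prefix for a bit string (naive reference LPM)."""
    best = ""
    for p in prefixes:
        if bits.startswith(p) and len(p) > len(best):
            best = p
    return best

src_prefixes = ["1", "10", ""]   # "" is the catch-all prefix
dst_prefixes = ["0", "01", ""]
rule_map = {("10", "01"): 1, ("1", "0"): 2, ("", ""): 3}  # 3 = default

def classify(src: str, dst: str) -> int:
    """Rule number for a packet with the given src/dst address bits."""
    key = (lpm(src_prefixes, src), lpm(dst_prefixes, dst))
    return rule_map.get(key, 3)

print(classify("1011", "0110"))  # src matches "10", dst matches "01"
```

In hardware, the naive loops are replaced by fast published LPM engines and the dict by the constructed hash function, so one narrow memory access resolves the rule.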

Subspace Modeling of Prosodic Features for Speaker Verification
Kockmann, Marcel ; Kenny, Patrick (oponent) ; Nöth, Elmar (oponent) ; Černocký, Jan (vedoucí práce)
The thesis investigates speaker verification by means of prosodic features. This includes an appropriate representation of speech by measurements of pitch, energy and duration of speech sounds. Two diverse parameterization methods are investigated: the first leads to a low-dimensional, well-defined feature set, the second to a large-scale set of heterogeneous prosodic features. The first part of this work concentrates on the development of so-called prosodic contour features. Different modeling techniques are developed and investigated, with a special focus on subspace modeling. The second part focuses on a novel subspace modeling technique for the heterogeneous large-scale prosodic features. The model is theoretically derived and experimentally evaluated on official NIST Speaker Recognition Evaluation tasks. Huge improvements over the current state-of-the-art in prosodic speaker verification were obtained. Finally, a novel fusion method is presented that elegantly combines the two diverse prosodic systems. This technique can also be used to fuse these higher-level systems with a high-performing cepstral system, leading to further significant improvements.
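Low-dimensional prosodic contour features can be pictured as projections of a pitch contour onto a few smooth basis functions. The DCT-style basis and the dimensionality in this sketch are assumptions, not the thesis's actual parameterization:

```python
import math

# Sketch of contour parameterization: a pitch contour is projected onto
# a few low-order cosine basis functions, yielding a compact,
# well-defined feature vector (coefficient 0 is the mean pitch).
def contour_features(contour: list[float], order: int) -> list[float]:
    """DCT-style coefficients summarizing a contour in `order` numbers."""
    n = len(contour)
    feats = []
    for k in range(order):
        coef = sum(x * math.cos(math.pi * k * (i + 0.5) / n)
                   for i, x in enumerate(contour)) / n
        feats.append(coef)
    return feats

flat = [100.0] * 8                        # flat pitch contour (Hz)
rising = [100.0 + 5 * i for i in range(8)]
print(contour_features(flat, 3))          # only the mean term is nonzero
print(contour_features(rising, 3))        # first-order term captures slope
```

Subspace modeling then operates on such fixed-length vectors rather than on the raw variable-length contours.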

Acceleration of Object Detection Using Classifiers
Juránek, Roman ; Kälviäinen, Heikki (oponent) ; Sojka, Eduard (oponent) ; Zemčík, Pavel (vedoucí práce)
Detection of objects in computer vision is a complex task. One of the most popular and well-explored approaches is the use of statistical classifiers and scanning windows. In this approach, classifiers learned by the AdaBoost algorithm (or its modifications) are often used, as they achieve low error rates and high detection rates and are suitable for real-time detection applications. The object detection run-time that uses such classifiers can be implemented by various methods, and the properties of the underlying architecture can be exploited to speed up the detection. For acceleration, graphics hardware, multi-core architectures, SIMD instructions or other means can be used; the detection is also often implemented on programmable hardware. The contribution of this thesis is an optimization technique that enhances object detection performance with respect to a user-defined cost function. The optimization balances the computations of previously learned classifiers between two or more run-time implementations in order to minimize the cost function. The method is verified on a basic example: the division of a classifier into a pre-processing unit implemented in an FPGA and a post-processing unit on a standard PC.
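The balancing idea can be sketched as choosing the stage index at which a boosted cascade is split between the FPGA and the PC so that an expected per-window cost is minimized. All per-stage costs and pass rates below are made-up illustrative numbers; the real cost function is user-defined:

```python
# Sketch of splitting a cascade classifier between an FPGA front end
# and a CPU back end. Stages [0, split) run on the FPGA (per-stage cost
# grows with resource use), the rest on the CPU; early stages reject
# most scanning windows, so later stages are reached rarely.
def expected_cost(split: int, fpga_cost: list[float],
                  cpu_cost: list[float], pass_rate: list[float]) -> float:
    """Expected per-window cost for a given split point."""
    cost, reach = 0.0, 1.0  # reach = fraction of windows reaching stage i
    for i in range(len(pass_rate)):
        cost += reach * (fpga_cost[i] if i < split else cpu_cost[i])
        reach *= pass_rate[i]
    return cost

fpga = [0.1, 0.2, 0.8, 1.6]      # FPGA cost rises with resource usage
cpu = [1.0, 1.0, 1.0, 1.0]       # CPU cost per stage
pass_rate = [0.3, 0.3, 0.3, 0.3] # most windows rejected early

best = min(range(5), key=lambda s: expected_cost(s, fpga, cpu, pass_rate))
print(best, expected_cost(best, fpga, cpu, pass_rate))
```

With these numbers the optimum puts the cheap, frequently executed early stages on the FPGA and leaves the rarely reached final stage on the PC, which is the pattern the thesis's example follows.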

On-line Data Analysis Based on Visual Codebooks
Beran, Vítězslav ; Honec, Jozef (oponent) ; Sojka, Eduard (oponent) ; Zemčík, Pavel (vedoucí práce)
This work introduces a new adaptable method for on-line, real-time video searching based on a visual codebook. The new method addresses high computational efficiency and retrieval performance on on-line data. It originates in the procedures utilized by static visual codebook techniques, which are modified so that they can adapt to changing data. The procedures that give the new method its adaptability are a dynamic inverse document frequency, an adaptable visual codebook and a flowing inverted index. The developed method was evaluated, and the presented results show how it outperforms the static approaches on video searching tasks. The method is based on the introduced flowing window concept, which defines how data are selected, both for system adaptation and for processing. Together with the concept, the mathematical background is defined for finding the best configuration when applying the concept to a new method. The practical application of the adaptable method lies particularly in video processing systems where significant, a priori unknown changes of the data domain are expected. The method is applicable in embedded systems that monitor and analyse broadcast TV signals on-line in real time.
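The flowing inverted index and dynamic inverse document frequency can be sketched as statistics maintained over a sliding window of recent frames; the window size, data and API below are illustrative assumptions, not the thesis's actual design:

```python
import math
from collections import Counter, deque

# Sketch of the "flowing window" idea: document-frequency statistics
# (and hence idf weights) are maintained only over the most recent W
# frames, so the retrieval statistics adapt as old data flows out.
class FlowingIndex:
    def __init__(self, window: int):
        self.window = window
        self.frames: deque[set[str]] = deque()
        self.df: Counter = Counter()  # document frequency per visual word

    def add(self, words: set[str]) -> None:
        """Index a new frame; drop the oldest one once the window is full."""
        self.frames.append(words)
        self.df.update(words)
        if len(self.frames) > self.window:   # oldest frame flows out
            for w in self.frames.popleft():
                self.df[w] -= 1

    def idf(self, word: str) -> float:
        """Smoothed inverse document frequency over the current window."""
        n = len(self.frames)
        return math.log((n + 1) / (self.df[word] + 1))

idx = FlowingIndex(window=3)
for frame in [{"a", "b"}, {"a"}, {"a", "c"}, {"a", "c"}]:
    idx.add(frame)
# "a" occurs in every windowed frame -> low idf; "b" has flowed out
print(idx.idf("a"), idx.idf("b"), idx.idf("c"))
```

A static codebook would keep "b" in its statistics forever; here its influence disappears with the frame that carried it, which is the adaptability the method targets.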