Národní úložiště šedé literatury: 59 records found (showing 21–30). Search took 0.00 seconds.
Transformer Neural Networks for Handwritten Text Recognition
Vešelíny, Peter ; Beneš, Karel (reviewer) ; Kohút, Jan (supervisor)
This Master's thesis aims to design a system based on the transformer neural network and to perform experiments with the proposed model on the task of handwritten text recognition. A multilingual dataset with predominantly Czech texts is used. The experiments examine the influence of basic hyperparameters, such as network size, convolutional encoder type, and the choice of text tokenizer. I also use a Czech text corpus to train the network's decoder. Furthermore, I experiment with using additional textual information during decoding; this information comes from the previous line of the transcribed image. The transformer achieves a character error rate of 3.41 % on the test set, which is 0.16 % worse than a recurrent neural network achieves. To compare this model with other transformer-based models from the literature, the network was also trained on the IAM dataset, where it achieved an error rate of 2.48 % and thus outperformed other models on the handwritten text recognition task.
Recurrent Neural Networks with Elastic Time Context in Language Modeling
Beneš, Karel ; Veselý, Karel (reviewer) ; Hannemann, Mirko (supervisor)
This thesis describes experimental work in the field of statistical language modeling with recurrent neural networks (RNNs). A thorough literature survey on the topic is given, followed by a description of the algorithms used for training the respective models. Most of the techniques were implemented using the Theano toolkit. Extensive experiments were carried out with the Simple Recurrent Network (SRN), revealing some previously unpublished findings. The best published result was not replicated in the case of static evaluation; in the case of dynamic evaluation, it was outperformed by 1 %. Experiments were then conducted with the Structurally Constrained Recurrent Network, but its performance could not be improved over the SRN baseline. Finally, a novel enhancement of the SRN was proposed, leading to the Randomly Sparse RNN (RS-RNN) architecture. This enhancement applies a fixed binary mask to the recurrent connections, thus forcing some recurrent weights to zero. It is empirically confirmed that RS-RNN models learn the training corpus better, and a combination of RS-RNN models achieved a 30 % larger gain on test data than a combination of dense SRN models of the same size.
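The fixed-mask sparsification can be illustrated with a minimal sketch (my own illustration, not the thesis code; the function names and the `density` parameter are assumptions):

```python
import random

def make_recurrent_mask(size, density, seed=0):
    """Draw a fixed binary mask over the size x size recurrent weight
    matrix; each connection is kept with probability `density` and
    forced to zero otherwise."""
    rng = random.Random(seed)
    return [[1 if rng.random() < density else 0 for _ in range(size)]
            for _ in range(size)]

def mask_weights(weights, mask):
    """Element-wise product with the mask; because the mask is fixed,
    the zeroed connections stay at zero for the whole training run."""
    return [[w * m for w, m in zip(w_row, m_row)]
            for w_row, m_row in zip(weights, mask)]
```

Re-applying the same mask after every gradient update is enough to keep the chosen recurrent weights at zero throughout training.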
Detekce stresu v řeči
Šoltés, Samuel ; Beneš, Karel (reviewer) ; Grézl, František (supervisor)
Stress affects a person in several ways and can lead to a decline in performance or to critical errors. Stress detection in speech studies how the effects of stress manifest themselves in speech. The goal of this bachelor's thesis is to describe the effects of stress, to choose suitable speech-signal parameters in which these effects would manifest, to implement the computation of these parameters, and to compare how well they detect stress. The thesis describes stress and the influence of stressors on a person; the glottal pulse, spectrum, fundamental frequency, and formants as speech-signal parameters suitable for the analysis and detection of stress; the design and implementation of the computation of these parameters; and the results obtained on two different databases.
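As an illustration of one such parameter, the fundamental frequency of a short speech frame can be estimated with a simple autocorrelation peak search (a minimal sketch under my own assumptions, not the thesis implementation):

```python
def estimate_f0(frame, sample_rate, f_min=50.0, f_max=500.0):
    """Crude autocorrelation-based pitch estimate: find the lag in the
    plausible pitch range whose autocorrelation is largest and convert
    that lag back to a frequency in Hz."""
    lag_min = int(sample_rate / f_max)
    lag_max = min(int(sample_rate / f_min), len(frame) - 1)
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        corr = sum(frame[i] * frame[i + lag]
                   for i in range(len(frame) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag
```

Real systems add windowing, voicing decisions, and smoothing across frames; this shows only the core idea.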
Image-Based Clustering of Microbial Colonies
Láncoš, Jan ; Kišš, Martin (reviewer) ; Beneš, Karel (supervisor)
In-lab analysis of microbial colonies grown on Petri dishes is at the frontier of efforts toward total laboratory automation. The core of the problem lies in the precise localization of colonies during image analysis. State-of-the-art solutions often employ machine learning models. However, these models tend to rely heavily on the existence of quality labels, which leads to a data-scarcity problem. This thesis addresses the issue by creating a sample generator. The robustness of the proposed solution was corroborated by successfully applying the generator in both our segmentation and colony-clustering efforts, significantly raising the segmentation F1 score from 0.518 to 0.729 and achieving a subsequent clustering V-measure of 0.830. This approach to generating synthetic data brings us one step closer to total laboratory automation.
Artificial Intelligence for the Santorini Board Game
Rybanský, Adam ; Kocour, Martin (reviewer) ; Beneš, Karel (supervisor)
The aim of this thesis was to create an intelligent agent that plays Santorini, a two-player zero-sum board game, using reinforcement learning. The implemented algorithm is a modified version of deep Q-learning, with two convolutional neural networks (one being trained, the other estimating future Q-values) and a memory of previously executed moves from which the agent samples randomly during training. Numerous experiments resulted in two final models. One was trained by playing against basic bots of gradually increasing difficulty; the other was trained by playing against itself from the start. The results show that the self-play model performs better, although both models still perform worse than a bot using a heuristic function.
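The move memory described above is commonly realised as a fixed-capacity replay buffer sampled uniformly at random; a minimal sketch of that standard component (my own illustration, not the thesis code):

```python
import random
from collections import deque

class ReplayMemory:
    """Fixed-capacity store of past transitions; the oldest entries
    are evicted automatically once capacity is reached."""

    def __init__(self, capacity, seed=0):
        self.buffer = deque(maxlen=capacity)
        self.rng = random.Random(seed)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        """Uniform random mini-batch for a Q-learning update."""
        return self.rng.sample(list(self.buffer), batch_size)

    def __len__(self):
        return len(self.buffer)
```

Sampling uniformly from a memory of old transitions decorrelates consecutive updates, which is the usual motivation for replay in deep Q-learning.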
Visualizing Neural Network Used as a Language Model
Ryšánek, Jakub ; Černocký, Jan (reviewer) ; Beneš, Karel (supervisor)
The long short-term memory (LSTM) network is a type of neural network designed to analyze sequential data. The advantage of LSTM over the simple recurrent neural network is its ability to store long-term dependencies, which allows it to reach higher accuracy on tasks such as speech recognition or language modeling. However, due to their complexity, the internal processes that lead to these results are still not fully understood. To explore their inner workings, I created three visualization methods. These methods focus either on the behavior patterns of a single unit in the model or on the behavior of the whole model when processing words with similar syntactic or semantic meanings.
Fast Discriminative Neural Networks for Text Correction
Chupáč, Sebastián ; Beneš, Karel (reviewer) ; Kohút, Jan (supervisor)
The goal of this work is to propose and implement a fast discriminative neural network that detects and corrects mistakes in text data in a single forward pass. Multiple architectures were implemented, for detection and for correction separately. The models make use of convolutional layers, LSTM layers, and the CTC loss function. They were trained and evaluated on datasets built from three different text corpora. The experiments and evaluation demonstrate the ability of these models to detect and correct character-level mistakes in a single fast forward pass.
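At inference time, the output of a CTC-trained model is typically decoded greedily by collapsing repeated labels and dropping blanks; a minimal sketch of that standard step (not the thesis code):

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Standard CTC collapse: merge consecutive repeated labels,
    then remove the blank symbol."""
    decoded, prev = [], None
    for label in frame_labels:
        if label != prev and label != blank:
            decoded.append(label)
        prev = label
    return decoded
```

Note that a blank between two identical labels keeps them separate, which is how CTC represents doubled characters.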
Bilingual Dictionary Based Neural Machine Translation
Tikhonov, Maksim ; Beneš, Karel (reviewer) ; Kesiraju, Santosh (supervisor)
Developments in recent years in the field of machine translation have shown that modern neural machine translation systems can deliver results of outstanding quality. However, obtaining such a system requires an abundant amount of parallel training data, which is not available for most languages. One way to improve the quality of machine translation for low-resource languages is data augmentation. This work investigates bilingual dictionary-based neural machine translation (BDBNMT), which rests on an augmentation technique that generates noised data from bilingual dictionaries. My aim was to explore the capabilities of BDBNMT systems on different language pairs and under different initial conditions, and then to compare the results obtained with those of traditional neural machine translation systems.
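One way to read the augmentation described above — my own sketch, not necessarily the exact scheme used in the thesis — is to replace a random fraction of source tokens with their dictionary translations, producing code-switched "noised" training sentences:

```python
import random

def noise_with_dictionary(tokens, bilingual_dict, rate=0.3, seed=0):
    """Replace each token that has a dictionary entry with its
    translation, with probability `rate`; all other tokens pass
    through unchanged."""
    rng = random.Random(seed)
    return [bilingual_dict[tok]
            if tok in bilingual_dict and rng.random() < rate else tok
            for tok in tokens]
```

Training on such mixed sentences alongside the clean parallel data is a common way to inject dictionary knowledge into a low-resource system.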
Advanced Visualization of Neural Network Training
Kuchta, Samuel ; Kesiraju, Santosh (reviewer) ; Beneš, Karel (supervisor)
This work proposes visualization methods and uses them to analyze phenomena arising during the training of neural networks, from which new insights about deep learning could be gained. A program was created that tests the impact of training with different techniques and visualizes the results with different methods. The work presents two visualizations of the training process. The first displays the area around the path of the trained model by averaging the path's points, weighted by their distance from the displayed point. The second displays the step sizes taken during learning. The outcome of the work consists of graphs and a discussion of the phenomena captured by the visualizations.
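Both visualizations reduce to simple computations over a sequence of parameter snapshots; a minimal sketch of each (my own illustration of the described methods — in particular, inverse-distance weighting is my assumption for the first):

```python
import math

def step_sizes(snapshots):
    """Second method: Euclidean distance between consecutive
    parameter snapshots, i.e. the size of each optimisation step."""
    return [math.dist(a, b) for a, b in zip(snapshots, snapshots[1:])]

def weighted_value(point, path_points, path_values, eps=1e-8):
    """First method (assumed form): value shown at a displayed point,
    an average of the path points' values weighted by inverse
    distance from that point."""
    weights = [1.0 / (math.dist(point, p) + eps) for p in path_points]
    return sum(w * v for w, v in zip(weights, path_values)) / sum(weights)
```

Evaluating `weighted_value` on a grid around the path yields the "area around the path" picture; plotting `step_sizes` against the iteration index yields the step-size view.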
See also: similar author names
9 BENEŠ, Karel
1 Beneš, K.
1 Beneš, Kamil
9 Beneš, Karel