National Repository of Grey Literature: 55 records found (records 21-30)
Automatic Text Recognition for Robots
Hartman, Zdeněk ; Materna, Zdeněk (referee) ; Španěl, Michal (advisor)
This bachelor thesis describes the design of a text detection and recognition module for use in robotic systems. Characters are detected with the Stroke Width Transform, which is applied to an edge image of the input. Connected components are then found in the Stroke Width Transform output. To group letters into words, a Hough transform is applied to a binary image whose points correspond to the positions of the found connected components. Characters in the detected regions are recognized with the Tesseract library; before recognition, the detected regions are extracted and rotated into a horizontal position, so the proposed detector can handle rotated text as well. Text detection accuracy reaches 75% on the "informační tabule" (information boards) test set.
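A minimal sketch of this kind of pipeline (edge image, connected components, Hough grouping, rotation, Tesseract), not the thesis code: the Stroke Width Transform itself is left as a placeholder, since plain OpenCV does not ship one, and all thresholds are illustrative.

```python
import cv2
import numpy as np
import pytesseract

def detect_and_read(image_path, swt=None):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 100, 200)                  # edge image fed to the SWT

    # Placeholder: a real implementation would compute stroke widths from the
    # edges and image gradients; here we fall back to the edge map itself.
    stroke_map = swt(edges) if swt else edges

    # Connected components serve as letter candidates.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(stroke_map)

    # Binary image of component centroids; a Hough transform groups them into word lines.
    points = np.zeros_like(gray)
    for cx, cy in centroids[1:].astype(int):           # skip background component 0
        points[cy, cx] = 255
    lines = cv2.HoughLines(points, 1, np.pi / 180, threshold=3)

    # Rotate each detected (possibly slanted) line to horizontal and pass it to Tesseract.
    texts = []
    for rho, theta in (lines[:, 0] if lines is not None else []):
        angle_deg = np.degrees(theta) - 90.0
        center = (gray.shape[1] / 2, gray.shape[0] / 2)
        M = cv2.getRotationMatrix2D(center, angle_deg, 1.0)
        rotated = cv2.warpAffine(gray, M, (gray.shape[1], gray.shape[0]))
        texts.append(pytesseract.image_to_string(rotated, lang="ces"))
    return texts
```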
Transformer Neural Networks for Handwritten Text Recognition
Vešelíny, Peter ; Beneš, Karel (referee) ; Kohút, Jan (advisor)
This Master's thesis aims to design a system based on a transformer neural network and to experiment with the proposed model on the task of handwritten text recognition. A multilingual dataset with predominantly Czech texts is used. The experiments examine the influence of basic hyperparameters such as network size, convolutional encoder type, and the choice of text tokenizer. I also use Czech-language text corpora, which are used to train the network decoder. Furthermore, I experiment with supplying additional textual information during decoding, taken from the previous line of the transcribed image. The transformer achieves a character error rate of 3.41 % on the test set, which is 0.16 % worse than the recurrent neural network achieves. To compare this model with other transformer-based models from the literature, the network was also trained on the IAM dataset, where it achieved an error rate of 2.48 % and thus outperformed the other models on the handwritten text recognition task.
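A minimal PyTorch sketch of the architecture family described above (a convolutional encoder feeding a transformer decoder). Layer sizes and names are illustrative, not the thesis configuration, and positional encodings are omitted for brevity.

```python
import torch
import torch.nn as nn

class TransformerHTR(nn.Module):
    def __init__(self, vocab_size, d_model=256, nhead=4, num_layers=4):
        super().__init__()
        # Convolutional encoder turns a line image into a horizontal feature sequence.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, d_model, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),           # collapse height, keep width
        )
        self.embed = nn.Embedding(vocab_size, d_model)
        decoder_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, line_image, target_tokens):
        # line_image: (B, 1, H, W); target_tokens: (B, T) previously decoded tokens.
        feats = self.cnn(line_image)                   # (B, d_model, 1, W')
        memory = feats.squeeze(2).permute(0, 2, 1)     # (B, W', d_model)
        tgt = self.embed(target_tokens)                # (B, T, d_model)
        T = tgt.size(1)
        causal_mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        hidden = self.decoder(tgt, memory, tgt_mask=causal_mask)
        return self.out(hidden)                        # (B, T, vocab_size) logits
```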
Convolutional Networks for Historic Text Recognition
Kišš, Martin ; Zemčík, Pavel (referee) ; Hradiš, Michal (advisor)
The aim of this work is to create a tool for automatic transcription of historical documents. The work focuses mainly on recognizing texts from the modern era written in the Fraktur typeface. The problem is solved with a newly designed recurrent convolutional neural network and a Spatial Transformer Network. Part of the solution is an implemented generator of artificial historical texts. Using this generator, an artificial dataset is created on which the convolutional neural network for line recognition is trained. The network is then tested on real historical text lines, on which it achieves up to 89.0 % character accuracy. The main contributions of this work are the newly designed neural network for text line recognition and the implemented artificial text generator, with which it is possible to train the neural network to recognize real historical text lines.
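A minimal sketch of the idea behind an artificial line generator: render transcribed text with a Fraktur-style font and light distortions to obtain (image, text) training pairs. The font file and the distortion choices are illustrative assumptions, not the thesis setup.

```python
import random
from PIL import Image, ImageDraw, ImageFont, ImageFilter

def render_line(text, font_path="UnifrakturMaguntia.ttf", height=48):
    # Render the text in a blackletter font on a white line-image canvas.
    font = ImageFont.truetype(font_path, size=height - 12)
    width = int(font.getlength(text)) + 20
    img = Image.new("L", (width, height), color=255)
    ImageDraw.Draw(img).text((10, 6), text, font=font, fill=0)
    # Simple augmentations standing in for real degradation models.
    img = img.rotate(random.uniform(-1.5, 1.5), fillcolor=255)
    img = img.filter(ImageFilter.GaussianBlur(random.uniform(0.0, 1.0)))
    return img, text
```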
Convolutional Networks for Historic Text Recognition
Vešelíny, Peter ; Kolář, Martin (referee) ; Kišš, Martin (advisor)
This thesis deals with text line recognition in historical documents. The historical texts, dating from the 17th to the 19th century, are written in the Fraktur typeface. The character recognition problem is solved with a sequence-to-sequence neural network architecture, which is based on the encoder-decoder model and contains an attention mechanism. A dataset was created from texts originating in the German archive Deutsches Textarchiv, which contains 3,897 German books with available transcripts and corresponding page images. The created dataset was used to train and experiment with the proposed neural network. The experiments investigate several convolutional models, hyperparameters, and the effect of positional embeddings. The final tool recognizes characters with an accuracy of 99.63 %. The contributions of this work are the mentioned dataset and the neural network, which can be used to recognize historical documents.
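A minimal sketch of the additive (Bahdanau-style) attention step that this kind of encoder-decoder model relies on; dimensions and the module layout are illustrative, not taken from the thesis.

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    def __init__(self, enc_dim, dec_dim, attn_dim=128):
        super().__init__()
        self.enc_proj = nn.Linear(enc_dim, attn_dim)
        self.dec_proj = nn.Linear(dec_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, decoder_state, encoder_outputs):
        # decoder_state: (B, dec_dim); encoder_outputs: (B, T, enc_dim)
        energy = torch.tanh(self.enc_proj(encoder_outputs)
                            + self.dec_proj(decoder_state).unsqueeze(1))
        weights = torch.softmax(self.score(energy).squeeze(-1), dim=1)   # (B, T)
        context = torch.bmm(weights.unsqueeze(1), encoder_outputs)       # (B, 1, enc_dim)
        return context.squeeze(1), weights
```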
Active Learning for OCR
Kohút, Jan ; Kolář, Martin (referee) ; Hradiš, Michal (advisor)
The aim of this Master's thesis is to design active learning methods and to experiment with datasets of historical documents. A large and diverse dataset, IMPACT, with more than one million lines is used for the experiments. I use neural networks to check the readability of lines and the correctness of their annotations. First, I compare convolutional architectures with recurrent neural networks using a bidirectional LSTM layer. Next, I study different ways of training the neural networks using active learning methods. I mainly use active learning to adapt the neural networks to documents that are not present in the original training dataset; active learning is thus used to pick appropriate adaptation data. The convolutional neural networks achieve 98.6% accuracy and the recurrent neural networks 99.5% accuracy. Active learning decreases the error by 26% compared to a random pick of adaptation data.
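A minimal sketch of the uncertainty-based selection step behind this kind of active learning: score unlabeled lines by the model's confidence and pick the least confident ones for annotation or adaptation. The per-character confidence interface on `model` is an assumption.

```python
def select_for_adaptation(model, unlabeled_lines, budget=100):
    scored = []
    for line in unlabeled_lines:
        # Assumed interface: the model returns per-character probabilities
        # for its best transcription of the line image.
        char_probs = model.predict_char_probs(line)
        confidence = sum(char_probs) / max(len(char_probs), 1)   # mean char confidence
        scored.append((confidence, line))
    scored.sort(key=lambda pair: pair[0])                        # least confident first
    return [line for _, line in scored[:budget]]
```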
Digitization of Handwritten Chess Game Sheets
Šiška, Krištof ; Vaško, Marek (referee) ; Španěl, Michal (advisor)
Chess is one of the most popular board games in the world. An enormous number of chess games are played daily, and the game's popularity is still on the rise. During live games, the moves are recorded by hand on chess records, also known as chess score sheets. Transcribing these score sheets into digital format is a tedious and time-consuming task, and the time spent grows quickly if the handwriting is illegible or the game contains a large number of moves. This work focuses on the problem of transcribing chess score sheets into digital format and on reducing the amount of time people spend on this necessary but tedious task.
Multi-Modal Text Recognition
Kabáč, Michal ; Herout, Adam (referee) ; Kišš, Martin (advisor)
The aim of this thesis is to describe and create a method for correcting text recognizer outputs using speech recognition. The thesis presents an overview of current neural network methods for text and speech recognition, as well as several existing methods for combining the outputs of the two modalities. Several approaches to correcting the recognizers, based either on algorithms or on neural networks, are designed and implemented. An algorithm based on aligning the outputs of the recognizers with Levenshtein alignment proved to be the best approach: it scans the aligned outputs and checks whether the uncertainty of each text recognizer character is below a pre-selected limit. As part of the work, an annotation server for the text transcripts was created and used to collect recordings for evaluating the experiments.
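A minimal sketch of the alignment-based correction idea: align the OCR and ASR hypotheses and, wherever the OCR confidence falls below a threshold, prefer the aligned speech-recognition character. Python's difflib is used here as a stand-in for a proper Levenshtein alignment, and the threshold and per-character confidence list are assumptions.

```python
import difflib

def correct_ocr(ocr_text, ocr_confidences, asr_text, threshold=0.8):
    corrected = list(ocr_text)
    matcher = difflib.SequenceMatcher(None, ocr_text, asr_text)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        # Only handle one-to-one substitutions; insertions/deletions are left as-is.
        if op == "replace" and (i2 - i1) == (j2 - j1):
            for offset in range(i2 - i1):
                if ocr_confidences[i1 + offset] < threshold:
                    corrected[i1 + offset] = asr_text[j1 + offset]
    return "".join(corrected)
```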
Improving Consistency in Text Recognition Datasets
Tvarožný, Matúš ; Hradiš, Michal (referee) ; Kišš, Martin (advisor)
This work is concerned with increasing the consistency of datasets for text recognition. It describes the problems that cause inconsistency and presents solutions for eliminating them. The work investigates the effect of the properties of the polygons that define text line boundaries, i.e. how a modified version of the dataset composed of ideal text line variants affects the accuracy of the model. It further focuses on detecting and then removing or correcting text lines whose ground truth transcription does not match the text they actually contain. The experiments showed that removing visual inconsistency from the training set did not have a significant effect on the trained model, but modifying the test set improved the OCR accuracy of the model by 1.1% CER. Modifying the dataset so that it contained no mutually inconsistent pairs of recognized text and ground truth improved the model by at most 0.2% CER after re-training. The main finding of this work is the demonstrated benefit of removing inconsistencies from test sets, which makes it possible to determine a more realistic error rate of the OCR model.
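A minimal sketch of one way to flag inconsistent lines of the kind the abstract describes: compare each ground-truth transcription with a recognizer's output and flag lines whose character error rate is suspiciously high. The CER threshold and the `recognize` callable are assumptions.

```python
def levenshtein(a, b):
    # Standard dynamic-programming edit distance between two strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def flag_inconsistent(lines, recognize, cer_threshold=0.5):
    flagged = []
    for image, ground_truth in lines:
        hypothesis = recognize(image)
        cer = levenshtein(hypothesis, ground_truth) / max(len(ground_truth), 1)
        if cer > cer_threshold:
            flagged.append((image, ground_truth, cer))
    return flagged
```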
Data extraction from document scans
Macháč, Bohuslav ; Kolomazník, Jan (advisor) ; Krajíček, Václav (referee)
In this work I developed an application capable of extracting data from scanned documents. For optical character recognition it uses the external OCR engine Tesseract, which can easily be replaced. The application works with document templates that carry information about data areas and their data types. I tried to automate most of the steps required to extract data or to create a new template, and the user can review and correct the results of these steps. For export from the application I implemented components that write the data to XML, HTML, or plain text; further components can easily be added to adapt the application to various uses.
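A minimal sketch of template-driven extraction in this spirit: a template lists named regions and their expected data types, each region is cropped from the scan and read with Tesseract. Field names, coordinates, and the crude type coercion are illustrative assumptions.

```python
from PIL import Image
import pytesseract

# Hypothetical template: pixel boxes (left, upper, right, lower) and target types.
TEMPLATE = {
    "invoice_number": {"box": (50, 40, 400, 90), "type": str},
    "total_amount":   {"box": (50, 600, 400, 650), "type": float},
}

def extract(scan_path, template=TEMPLATE):
    page = Image.open(scan_path)
    record = {}
    for field, spec in template.items():
        raw = pytesseract.image_to_string(page.crop(spec["box"])).strip()
        try:
            record[field] = spec["type"](raw.replace(",", "."))  # crude type coercion
        except ValueError:
            record[field] = raw    # fall back to the raw OCR string for user review
    return record
```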
