National Repository of Grey Literature
Recurrent Neural Networks for Speech Recognition
Nováčik, Tomáš ; Karafiát, Martin (referee) ; Veselý, Karel (advisor)
This master's thesis deals with the implementation of various types of recurrent neural networks in the Lua programming language using the Torch library. It focuses on finding an optimal strategy for training recurrent neural networks and also tries to minimize training time. Furthermore, various regularization techniques are investigated and implemented in the recurrent neural network architecture. The implemented recurrent neural networks are compared on a speech recognition task using the AMI dataset, where they model the acoustic information, and their performance is also compared to a standard feedforward neural network. The best results are achieved with the BLSTM architecture. The recurrent neural networks are also trained with the CTC objective function on the TIMIT dataset, where the best result is again achieved with the BLSTM architecture.
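The CTC objective mentioned in the abstract lets a network emit one label (or a special blank) per frame; decoding then collapses repeated labels and removes blanks. A minimal sketch of that collapsing step, in Python rather than the thesis's Lua/Torch code (the function name and blank index are illustrative assumptions, not taken from the thesis):

```python
def ctc_collapse(path, blank=0):
    """Collapse a per-frame CTC path into a label sequence:
    merge consecutive repeats, then drop blank symbols."""
    out = []
    prev = None
    for label in path:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return out

# e.g. frames [1, 1, 0, 1, 2, 2, 0] decode to [1, 1, 2]:
# repeats merge to [1, 0, 1, 2, 0], blanks (0) are removed.
```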
Techniques For Avoiding Model Overfitting On Small Dataset
Kratochvila, Lukas
Building a deep learning model on a small dataset is difficult, sometimes even impossible. To avoid overfitting, we must constrain the model we train. Techniques such as data augmentation, regularization, and data normalization can be crucial. We have created a benchmark with a simple CNN image classifier in order to find the best techniques. As a result, we compare different types of data augmentation, weight regularization, and data normalization on a small dataset.
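The techniques named above can be illustrated with a short NumPy sketch: a random flip-and-shift augmentation for images and an L2 weight-penalty term added to the loss. This is a generic illustration under assumed defaults (function names, shift range, and the `lam` coefficient are not from the paper):

```python
import numpy as np

def augment(image, rng):
    """Randomly flip an (H, W) image horizontally and shift it
    by up to 2 pixels -- a simple data-augmentation step."""
    if rng.random() < 0.5:
        image = image[:, ::-1]           # horizontal flip
    shift = int(rng.integers(-2, 3))     # shift in [-2, 2]
    return np.roll(image, shift, axis=1)

def l2_penalty(weights, lam=1e-4):
    """L2 weight regularization: lam * sum of squared weights,
    added to the training loss to constrain the model."""
    return lam * sum(np.sum(w ** 2) for w in weights)
```

Augmentation is applied per sample during training, while the L2 term is computed once per batch over all trainable weights and added to the classification loss.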
