National Repository of Grey Literature: 4 records found. Search took 0.00 seconds.
Activity of Neural Network in Hidden Layers - Visualisation and Analysis
Fábry, Marko ; Grézl, František (referee) ; Karafiát, Martin (advisor)
The goal of this work was to create a system capable of visualising the activation values produced by neurons in the hidden layers of neural networks used for speech recognition. The work also describes experiments comparing visualisation methods, visualisations of neural networks with different architectures, and visualisations of networks trained on different types of input data. The visualisation system implemented here builds on previous work by Khe Chai Sim and extends it with new data-normalisation methods. The Kaldi toolkit was used to prepare the neural-network training data, and the CNTK framework was used for training. The core of this work, the visualisation system, was implemented in the Python scripting language.
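As a rough illustration of the kind of processing such a system performs (this is a hedged sketch, not the thesis code, and the network weights and input frames below are random placeholders): collect the hidden-layer activations of a feed-forward network and min-max normalise them per neuron, a common way to prepare activation values for heat-map visualisation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hidden_activations(frames, weights, biases):
    """Return the activations of every hidden layer for a batch of input frames."""
    acts, h = [], frames
    for w, b in zip(weights, biases):
        h = sigmoid(h @ w + b)
        acts.append(h)
    return acts

def minmax_normalise(a, eps=1e-12):
    """Scale each neuron's activations (one column per neuron) into [0, 1]."""
    lo, hi = a.min(axis=0), a.max(axis=0)
    return (a - lo) / (hi - lo + eps)

# Toy setup: 100 input frames of dimension 40, two hidden layers of 64 neurons.
frames = rng.standard_normal((100, 40))
weights = [rng.standard_normal((40, 64)), rng.standard_normal((64, 64))]
biases = [np.zeros(64), np.zeros(64)]

acts = hidden_activations(frames, weights, biases)
heatmap = minmax_normalise(acts[0])  # frames x neurons, ready to plot
```

The resulting `heatmap` matrix can be handed to any plotting library; per-neuron normalisation keeps a few strongly firing neurons from washing out the rest of the picture.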
Neural Network Letter Recognition
Kluknavský, František ; Hradiš, Michal (referee) ; Šilhavá, Jana (advisor)
This work uses handwritten character recognition as a model problem for studying the multilayer perceptron and the error-backpropagation learning algorithm, and for finding their optimal parameters: hidden-layer size, learning rate, training length, and the ability to handle damaged data. Results were obtained by repeatedly simulating and testing the neural network on 52,152 English lowercase letters. The best results, the smallest network, and the shortest learning time were achieved with 60 neurons in the hidden layer and a learning rate of 0.01. Larger networks achieved the same ability to recognise unknown patterns, along with higher robustness when processing heavily damaged data.
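The architecture the abstract describes can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis implementation: one hidden layer of 60 sigmoid neurons trained by error backpropagation with learning rate 0.01, but with random toy data standing in for the 52,152-letter dataset, which is not available here.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MLP:
    """Multilayer perceptron with one sigmoid hidden layer, trained by backprop."""

    def __init__(self, n_in, n_hidden, n_out):
        self.w1 = rng.standard_normal((n_in, n_hidden)) * 0.1
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.standard_normal((n_hidden, n_out)) * 0.1
        self.b2 = np.zeros(n_out)

    def forward(self, x):
        self.h = sigmoid(x @ self.w1 + self.b1)      # hidden activations
        self.y = sigmoid(self.h @ self.w2 + self.b2)  # network outputs
        return self.y

    def backward(self, x, target, lr=0.01):
        # Standard backprop deltas for squared-error loss with sigmoid units.
        n = x.shape[0]
        d_out = (self.y - target) * self.y * (1 - self.y)
        d_hid = (d_out @ self.w2.T) * self.h * (1 - self.h)
        self.w2 -= lr * (self.h.T @ d_out) / n
        self.b2 -= lr * d_out.mean(axis=0)
        self.w1 -= lr * (x.T @ d_hid) / n
        self.b1 -= lr * d_hid.mean(axis=0)

# Toy stand-in for the letter data: 8x8 "bitmaps" (64 inputs), 26 classes.
x = rng.random((200, 64))
labels = rng.integers(0, 26, size=200)
t = np.eye(26)[labels]                 # one-hot targets

net = MLP(n_in=64, n_hidden=60, n_out=26)
losses = []
for epoch in range(200):
    y = net.forward(x)
    losses.append(float(((y - t) ** 2).mean()))
    net.backward(x, t, lr=0.01)
```

On random data the loss only drifts toward the class base rate; the interesting behaviour the thesis measures (recognition of unknown patterns, robustness to damaged inputs) requires the real letter dataset.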
