National Repository of Grey Literature 3 records found  Search took 0.01 seconds. 
Modelling of Neural Network Hardware Accelerators
Klhůfek, Jan ; Sekanina, Lukáš (referee) ; Mrázek, Vojtěch (advisor)
The aim of this master's thesis is to model neural network accelerators with hardware support for quantization. The thesis first focuses on the concept of computation in convolutional neural networks (CNNs) and introduces the different categories of hardware architectures used for their processing. It then summarizes optimization techniques for CNN models aimed at achieving efficient processing on specialized hardware architectures. The subsequent part of the thesis compares existing analytical tools that estimate hardware performance parameters during inference and that can be extended to incorporate quantization support. Based on an experimental comparison, the Timeloop tool was selected for the purposes of this thesis. A thorough explanation of this tool's functionality is presented, along with the design and implementation of its extension to support quantization. In conclusion, the thesis experimentally tests the impact of various quantization configurations on the evaluated inference parameters across different hardware architectures.
Artificial Neural Networks and Their Usage For Knowledge Extraction
Petříčková, Zuzana ; Mrázová, Iveta (advisor) ; Procházka, Aleš (referee) ; Andrejková, Gabriela (referee)
Title: Artificial Neural Networks and Their Usage For Knowledge Extraction Author: RNDr. Zuzana Petříčková Department: Department of Theoretical Computer Science and Mathematical Logic Supervisor: doc. RNDr. Iveta Mrázová, CSc., Department of Theoretical Computer Science and Mathematical Logic Abstract: The model of multi-layered feed-forward neural networks is well known for its ability to generalize well and to find complex non-linear dependencies in the data. On the other hand, it tends to create complex internal structures, especially for large data sets. Efficient solutions to the demanding tasks currently dealt with require fast training, adequate generalization and a transparent and simple network structure. In this thesis, we propose a general framework for the training of BP-networks. It is based on the fast and robust scaled conjugate gradient technique. This classical training algorithm is enhanced with analytical or approximative sensitivity inhibition during training and enforcement of a transparent internal knowledge representation. Redundant hidden and input neurons are pruned based on internal representation and sensitivity analysis. The performance of the developed framework has been tested on various types of data with promising results. The framework provides a fast training algorithm,...
