National Repository of Grey Literature: 8 records found
Artificial Neural Networks and Their Usage For Knowledge Extraction
Petříčková, Zuzana ; Mrázová, Iveta (advisor) ; Procházka, Aleš (referee) ; Andrejková, Gabriela (referee)
Title: Artificial Neural Networks and Their Usage For Knowledge Extraction Author: RNDr. Zuzana Petříčková Department: Department of Theoretical Computer Science and Mathematical Logic Supervisor: doc. RNDr. Iveta Mrázová, CSc., Department of Theoretical Computer Science and Mathematical Logic Abstract: The model of multi-layered feed-forward neural networks is well known for its ability to generalize well and to find complex non-linear dependencies in the data. On the other hand, it tends to create complex internal structures, especially for large data sets. Efficient solutions to demanding tasks currently dealt with require fast training, adequate generalization and a transparent and simple network structure. In this thesis, we propose a general framework for the training of BP-networks. It is based on the fast and robust scaled conjugate gradient technique. This classical training algorithm is enhanced with analytical or approximative sensitivity inhibition during training and enforcement of a transparent internal knowledge representation. Redundant hidden and input neurons are pruned based on internal representation and sensitivity analysis. The performance of the developed framework has been tested on various types of data with promising results. The framework provides a fast training algorithm,...
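To illustrate the sensitivity-based input pruning the abstract mentions, here is a minimal sketch, not the thesis's actual method: it estimates each input's influence on a small feed-forward network's output by finite differences and drops inputs whose sensitivity is negligible. The network, threshold, and data are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny feed-forward network: 4 inputs -> 3 hidden -> 1 output.
W1 = rng.normal(size=(3, 4)); b1 = np.zeros(3)
W2 = rng.normal(size=(1, 3)); b2 = np.zeros(1)
W1[:, 3] = 0.0  # make input 3 irrelevant on purpose

def forward(x):
    h = np.tanh(W1 @ x + b1)
    return np.tanh(W2 @ h + b2)

# Mean absolute output sensitivity to each input over a sample set,
# estimated with central finite differences.
X = rng.normal(size=(200, 4))
eps = 1e-4
sens = np.zeros(4)
for x in X:
    for i in range(4):
        e = np.zeros(4); e[i] = eps
        sens[i] += abs(forward(x + e) - forward(x - e))[0] / (2 * eps)
sens /= len(X)

# Prune inputs whose sensitivity is below 5% of the maximum (an
# arbitrary threshold chosen for this illustration).
keep = sens > 0.05 * sens.max()
print("sensitivities:", np.round(sens, 3))
print("inputs kept:", np.flatnonzero(keep))
```

In this toy setting the deliberately disconnected input 3 has zero sensitivity and is the one pruned; a trained network would instead reveal inputs the data made redundant.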
Multi-layered neural networks and visualization of their structure
Drobný, Michal ; Mrázová, Iveta (advisor) ; Kukačka, Marek (referee)
The model of multi-layered neural networks of the back-propagation type is well known for its universal approximation capability, and even the standard back-propagation training algorithm used for its adjustment often provides results applicable to real-world problems. The present study deals with multi-layered neural networks. It describes selected variants of training algorithms, mainly the standard back-propagation algorithm and the scaled conjugate gradient algorithm, which ranks among the extremely fast second-order algorithms. Part of the present study is also an application for visualising the structure of multi-layered neural networks, designed with respect to its potential use in teaching artificial intelligence. The first part of the study introduces the subject matter and formally describes both algorithms, followed by a short description of other variants of the algorithms and their analysis. The next part discusses the selection of an appropriate programming language for the implementation of the application, specifies the goals and describes the implementation work. The conclusion summarizes the results of the speed tests and of the implementation comparison with the selected non-commercial software Encog.
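The standard back-propagation training the abstract refers to can be sketched in a few lines. This is a generic textbook version on the XOR task, not the thesis's implementation; the architecture, learning rate, and iteration count are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR: the classic task a network without a hidden layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, mses = 0.5, []
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    mses.append(float(np.mean((out - y) ** 2)))
    # backward pass: error deltas propagated layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print(f"MSE: {mses[0]:.3f} -> {mses[-1]:.3f}")
```

Second-order methods such as scaled conjugate gradients replace the fixed learning rate with curvature-informed step sizes, which is why they converge much faster on problems like this.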
Deep neural networks and their implementation
Vojt, Ján ; Mrázová, Iveta (advisor) ; Božovský, Petr (referee)
Deep neural networks represent an effective and universal model capable of solving a wide variety of tasks. This thesis is focused on three different types of deep neural networks - the multilayer perceptron, the convolutional neural network, and the deep belief network. All of the discussed network models are implemented on parallel hardware and thoroughly tested for various choices of the network architecture and its parameters. The implemented system is accompanied by a detailed documentation of the architectural decisions and proposed optimizations. The efficiency of the implemented framework is confirmed by the results of the performed tests. A significant part of this thesis is also devoted to testing other existing frameworks that support deep neural networks. This comparison indicates that the implementation of multilayer perceptrons and convolutional neural networks outperforms the tested rival frameworks. The deep belief network implementation performs slightly better for RBM layers with up to 1000 hidden neurons, but noticeably worse for larger RBM layers when compared to the tested rival framework.
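The RBM layers mentioned above are typically trained with contrastive divergence. The following is a minimal CD-1 sketch for a binary RBM on invented toy data, offered only to make the term concrete; it has no connection to the thesis's parallel implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

n_visible, n_hidden = 6, 3
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
a = np.zeros(n_visible)   # visible biases
b = np.zeros(n_hidden)    # hidden biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(v0, lr=0.1):
    """One contrastive-divergence (CD-1) update for a binary RBM."""
    global W, a, b
    ph0 = sigmoid(v0 @ W + b)                   # P(h=1 | v0)
    h0 = (rng.random(ph0.shape) < ph0) * 1.0    # sample hidden states
    pv1 = sigmoid(h0 @ W.T + a)                 # reconstruction P(v=1 | h0)
    ph1 = sigmoid(pv1 @ W + b)
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    a += lr * (v0 - pv1).mean(0)
    b += lr * (ph0 - ph1).mean(0)
    return float(np.mean((v0 - pv1) ** 2))      # reconstruction error

# Toy data: two repeated binary patterns.
data = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]] * 10, dtype=float)
errs = [cd1_step(data) for _ in range(500)]
print(f"reconstruction error: {errs[0]:.3f} -> {errs[-1]:.3f}")
```

A deep belief network stacks such RBMs, training each layer greedily on the hidden activations of the one below it.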
Knowledge Extraction with BP-networks
Reitermanová, Zuzana
Title: Knowledge Extraction with BP-networks Author: Zuzana Reitermanová Department: Department of Software Engineering Supervisor: Doc. RNDr. Iveta Mrázová, CSc. Supervisor's e-mail address: mrazova@ksi.ms.mff.cuni.cz Abstract: Multi-layered neural networks of the back-propagation type are well known for their universal approximation capability. Already the standard back-propagation training algorithm used for their adjustment often provides applicable results. However, efficient solutions to the complex tasks currently dealt with require quick convergence and a transparent network structure. This supports both an improved generalization capability of the formed networks and an easier interpretation of their function later on. Various techniques used to optimize the structure of the networks, such as learning with hints, pruning and sensitivity analysis, are expected to improve generalization, too. One of the fast learning algorithms is the conjugate gradient method. In this thesis, we discuss, test and analyze the above-mentioned methods. Then, we derive a new technique combining their advantages. The proposed algorithm is based on the rapid scaled conjugate gradient technique. This classical method is enhanced with the enforcement of a transparent internal knowledge...
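For readers unfamiliar with the conjugate gradient method named in the abstract, here is a generic Fletcher-Reeves sketch on a small quadratic objective. It is a plain nonlinear CG example, not the scaled variant the thesis builds on, and the matrix and vector are made up for illustration.

```python
import numpy as np

# Minimize f(x) = 0.5 x^T A x - b^T x, whose gradient is A x - b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, 1.0])

def grad(x):
    return A @ x - b

x = np.zeros(2)
g = grad(x)
d = -g                                    # first search direction: steepest descent
for _ in range(10):
    if np.linalg.norm(g) < 1e-12:
        break                             # converged
    # Exact line search along d (possible because f is quadratic).
    alpha = -(g @ d) / (d @ A @ d)
    x = x + alpha * d
    g_new = grad(x)
    beta = (g_new @ g_new) / (g @ g)      # Fletcher-Reeves coefficient
    d = -g_new + beta * d                 # next direction, conjugate to the last
    g = g_new

print(np.round(x, 6))                     # approaches the solution of A x = b
```

On an n-dimensional quadratic, CG with exact line search converges in at most n steps; in network training the line search and the curvature term are approximated, which is what the "scaled" variant addresses.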
