National Repository of Grey Literature: 4 records found
Artificial Neural Networks and Their Usage For Knowledge Extraction
Petříčková, Zuzana ; Mrázová, Iveta (advisor) ; Procházka, Aleš (referee) ; Andrejková, Gabriela (referee)
Title: Artificial Neural Networks and Their Usage For Knowledge Extraction
Author: RNDr. Zuzana Petříčková
Department: Department of Theoretical Computer Science and Mathematical Logic
Supervisor: doc. RNDr. Iveta Mrázová, CSc., Department of Theoretical Computer Science and Mathematical Logic
Abstract: The model of multi-layered feed-forward neural networks is well known for its ability to generalize well and to find complex non-linear dependencies in the data. On the other hand, it tends to create complex internal structures, especially for large data sets. Efficient solutions to demanding tasks currently dealt with require fast training, adequate generalization and a transparent and simple network structure. In this thesis, we propose a general framework for training of BP-networks. It is based on the fast and robust scaled conjugate gradient technique. This classical training algorithm is enhanced with analytical or approximative sensitivity inhibition during training and enforcement of a transparent internal knowledge representation. Redundant hidden and input neurons are pruned based on internal representation and sensitivity analysis. The performance of the developed framework has been tested on various types of data with promising results. The framework provides a fast training algorithm,...
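The abstract above mentions pruning redundant hidden neurons based on sensitivity analysis. The following Python sketch illustrates one simple form of such pruning for a one-hidden-layer feed-forward network; the particular sensitivity measure and pruning threshold are illustrative assumptions, not the exact procedure developed in the thesis.

    import numpy as np

    # Minimal sketch: sensitivity-based pruning of hidden neurons in a
    # one-hidden-layer feed-forward network. The sensitivity measure used
    # here (|output weight| scaled by the spread of the hidden activation)
    # is an illustrative choice, not the thesis's exact definition.

    rng = np.random.default_rng(0)

    # Toy data and a small network (weights are random stand-ins for a
    # trained network).
    X = rng.normal(size=(200, 4))          # 200 samples, 4 inputs
    W1 = rng.normal(size=(4, 8))           # input -> hidden (8 hidden neurons)
    b1 = np.zeros(8)
    W2 = rng.normal(size=(8, 1))           # hidden -> output
    b2 = np.zeros(1)

    def forward(X, W1, b1, W2, b2):
        h = np.tanh(X @ W1 + b1)           # hidden activations
        y = h @ W2 + b2                    # linear output
        return h, y

    h, y = forward(X, W1, b1, W2, b2)

    # Sensitivity of the output to each hidden neuron: for a linear output
    # layer this is |W2| per hidden unit, weighted here by how much the
    # hidden activation actually varies over the data.
    sensitivity = np.mean(np.abs(W2), axis=1) * h.std(axis=0)

    # Prune hidden neurons whose sensitivity falls below a threshold.
    threshold = 0.1 * sensitivity.max()
    keep = sensitivity >= threshold
    W1, b1, W2 = W1[:, keep], b1[keep], W2[keep, :]

    print(f"kept {keep.sum()} of {keep.size} hidden neurons")

In a full framework the network would be retrained after each pruning step; the sketch only shows how a per-neuron sensitivity score can drive the removal of weights.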
Knowledge Extraction with BP-networks
Reitermanová, Zuzana
Title: Knowledge Extraction with BP-networks
Author: Zuzana Reitermanová
Department: Department of Software Engineering
Supervisor: Doc. RNDr. Iveta Mrázová, CSc.
Supervisor's e-mail address: mrazova@ksi.ms.mff.cuni.cz
Abstract: Multi-layered neural networks of the back-propagation type are well known for their universal approximation capability. Already the standard back-propagation training algorithm used for their adjustment often provides applicable results. However, efficient solutions to complex tasks currently dealt with require quick convergence and a transparent network structure. This supports both an improved generalization capability of the formed networks and an easier interpretation of their function later on. Various techniques used to optimize the structure of the networks, like learning with hints, pruning and sensitivity analysis, are expected to improve generalization, too. One of the fast learning algorithms is the conjugate gradient method. In this thesis, we discuss, test and analyze the above-mentioned methods. Then, we derive a new technique combining their advantages. The proposed algorithm is based on the rapid scaled conjugate gradient technique. This classical method is enhanced with the enforcement of a transparent internal knowledge...
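The abstract above combines conjugate gradient training with the enforcement of a transparent internal knowledge representation. The Python sketch below illustrates the general idea: a small BP-network trained with SciPy's generic conjugate gradient solver (not the scaled variant discussed in the thesis) plus a penalty term that pushes hidden activations toward -1, 0 or +1. Both the penalty form and the solver choice are assumptions made for illustration.

    import numpy as np
    from scipy.optimize import minimize

    # Minimal sketch: conjugate-gradient training of a small BP-network with
    # an extra penalty encouraging a "transparent" internal representation
    # (hidden activations pushed toward -1, 0 or +1). The penalty form and
    # the use of SciPy's generic CG solver are illustrative assumptions.

    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 3))
    t = (X[:, :1] * X[:, 1:2] > 0).astype(float)   # toy XOR-like target

    n_in, n_hid, n_out = 3, 6, 1
    shapes = [(n_in, n_hid), (n_hid,), (n_hid, n_out), (n_out,)]
    sizes = [int(np.prod(s)) for s in shapes]

    def unpack(w):
        # Split the flat parameter vector into W1, b1, W2, b2.
        parts, i = [], 0
        for s, n in zip(shapes, sizes):
            parts.append(w[i:i + n].reshape(s))
            i += n
        return parts

    def loss(w, lam=0.01):
        W1, b1, W2, b2 = unpack(w)
        h = np.tanh(X @ W1 + b1)
        y = h @ W2 + b2
        mse = np.mean((y - t) ** 2)
        # Representation penalty with minima at h = -1, 0 and +1.
        rep = np.mean((h * (1 - h) * (1 + h)) ** 2)
        return mse + lam * rep

    w0 = rng.normal(scale=0.5, size=sum(sizes))
    res = minimize(loss, w0, method="CG", options={"maxiter": 200})
    print("final loss:", res.fun)

A production implementation would supply the analytic gradient (or use a scaled conjugate gradient routine directly) instead of relying on SciPy's finite-difference approximation; the sketch only shows how the transparency penalty enters the objective.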
