National Repository of Grey Literature 2 records found  Search took 0.02 seconds. 
Knowledge Extraction with BP-networks
Reitermanová, Zuzana
Title: Knowledge Extraction with BP-networks
Author: Zuzana Reitermanová
Department: Department of Software Engineering (Katedra softwarového inženýrství)
Supervisor: Doc. RNDr. Iveta Mrázová, CSc.
Supervisor's e-mail address: mrazova@ksi.ms.mff.cuni.cz
Abstract: Multi-layered neural networks of the back-propagation type are well known for their universal approximation capability. Even the standard back-propagation training algorithm used for their adjustment often yields applicable results. However, efficient solutions to the complex tasks currently dealt with require both quick convergence and a transparent network structure. This supports both an improved generalization capability of the formed networks and an easier interpretation of their function later on. Various techniques used to optimize the structure of the networks, such as learning with hints, pruning, and sensitivity analysis, are expected to promote better generalization as well. One of the fast learning algorithms is the conjugate gradient method. In this thesis, we discuss, test, and analyze the above-mentioned methods. We then derive a new technique that combines their advantages. The proposed algorithm is based on the rapid scaled conjugate gradient technique. This classical method is enhanced with the enforcement of a transparent internal knowledge...
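For readers unfamiliar with back-propagation networks, the following is a minimal illustrative sketch of training one by plain gradient descent on a toy XOR dataset. It is only a baseline example of the network type the abstract refers to; the thesis's actual contributions (the scaled conjugate gradient variant and the transparency-enforcing extensions) are not reproduced here, and all names and hyperparameters below are arbitrary choices for the sketch.

```python
# Minimal back-propagation sketch: 2 -> 4 -> 1 sigmoid network on XOR.
# Illustrative only; not the thesis's scaled-conjugate-gradient algorithm.
import numpy as np

rng = np.random.default_rng(0)

# Toy XOR dataset
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights: 2 inputs -> 4 hidden units -> 1 output
W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Initial mean squared error, before any training
h = sigmoid(X @ W1 + b1)
out = sigmoid(h @ W2 + b2)
mse_before = float(np.mean((out - y) ** 2))

lr = 1.0
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the mean squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Plain gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

mse_after = float(np.mean((out - y) ** 2))
```

Conjugate-gradient methods replace the fixed-step update above with search directions that are conjugate with respect to the error surface, which is what gives the faster convergence the abstract builds on.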
