National Repository of Grey Literature: 2 records found
Feed-forward neural networks and their application in data mining
Civín, Lukáš ; Mrázová, Iveta (advisor) ; Štanclová, Jana (referee)
The goal of data mining is to solve various problems dealing with knowledge extraction from huge amounts of real-world data, the quality of which might be disputable. Neural networks can help with the solution due to their generalization capabilities. When applying feed-forward neural networks to real-world data mining projects, we have essentially two objectives. To obtain applicable results, it is crucial to provide the networks with well-prepared data. However, it is equally important to choose the right training strategy for the networks themselves - including the network architecture, parameter settings and the training algorithm. A key aim of these steps is to prevent "over-training": the final network should recall unknown examples as well as possible. There are plenty of techniques with different approaches to the solution. One option is to modify the data, e.g. by rescaling it, reducing its dimension, or adding noise. Another is to modify the neural network itself, by structural learning with forgetting, weight decay or early stopping. These techniques are analyzed both theoretically and experimentally in this thesis. With regard to the results achieved in a number of experimental tests we have...
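Two of the network-side techniques named in the abstract can be sketched in a few lines. The following is a minimal illustrative sketch, not code from the thesis: `weight_decay_step` shows one SGD update with an L2 penalty, and `EarlyStopping` shows the patience-based stopping rule; all names, learning rates and the synthetic error sequence are assumptions chosen for illustration.

```python
def weight_decay_step(weights, grads, lr=0.1, decay=0.01):
    """One SGD update with L2 weight decay: w <- w - lr * (g + decay * w).

    Even with a zero gradient, the decay term shrinks weights toward zero,
    which discourages over-training on noisy data.
    """
    return [w - lr * (g + decay * w) for w, g in zip(weights, grads)]


class EarlyStopping:
    """Stop training once validation error has not improved for `patience` epochs."""

    def __init__(self, patience=3):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def should_stop(self, val_error):
        if val_error < self.best:
            self.best = val_error
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience


# Weight decay alone pulls a weight of 1.0 down to 0.999 in one step.
w = weight_decay_step([1.0], [0.0])  # -> [0.999]

# Early stopping fires after two consecutive non-improving epochs.
stopper = EarlyStopping(patience=2)
errors = [0.5, 0.4, 0.41, 0.42, 0.43]  # hypothetical validation error per epoch
stopped_at = next(i for i, e in enumerate(errors) if stopper.should_stop(e))  # -> 3
```

In practice the stopping rule is driven by error on a held-out validation set, so training halts near the point of best generalization rather than at the lowest training error.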