Národní úložiště šedé literatury (National Repository of Grey Literature)
Gaussian Processes Based Hyper-Optimization of Neural Networks
Coufal, Martin ; Landini, Federico Nicolás (reviewer) ; Beneš, Karel (supervisor)
The goal of this thesis is to create a lightweight toolkit for artificial neural network hyper-parameter optimisation. The toolkit has to be able to optimise multiple, possibly correlated hyper-parameters. I solved this problem by creating an optimiser that uses Gaussian processes to predict the influence of the hyper-parameters on the resulting neural network's accuracy. In experiments on multiple benchmark functions, the toolkit provides better results than random search and thus reduces the number of necessary optimisation steps. Random search provided better results only in the first few optimisation steps, before the Gaussian process optimiser builds a sufficient model of the problem. However, experiments on the MNIST dataset show that random search almost always achieves better results than the GP optimiser used. These differences between the experimental results are probably caused by insufficient complexity of the benchmarks or by the selected parameters of the implemented optimiser.
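The approach the abstract describes, fitting a Gaussian process to observed (hyper-parameter, accuracy) pairs and using the model to pick the next trial, can be sketched as follows. This is a minimal illustrative sketch, not the thesis's actual implementation: the RBF kernel, the upper-confidence-bound acquisition rule, the 1-D grid search, and all function names are assumptions made for the example.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.3):
    # Squared-exponential kernel between 1-D sample arrays a and b.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-4):
    # Zero-mean GP posterior: predictive mean and variance at the query points.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_query, x_train)
    mean = K_s @ np.linalg.solve(K, y_train)
    v = np.linalg.solve(K, K_s.T)
    var = 1.0 - np.sum(K_s * v.T, axis=1)  # k(x, x) = 1 for this kernel
    return mean, np.maximum(var, 0.0)

def gp_optimise(objective, bounds=(0.0, 1.0), n_init=3, n_steps=12,
                beta=2.0, seed=0):
    # Evaluate a few random points first, then repeatedly evaluate the grid
    # point with the highest upper confidence bound (mean + beta * std).
    rng = np.random.default_rng(seed)
    grid = np.linspace(bounds[0], bounds[1], 200)
    xs = list(rng.uniform(bounds[0], bounds[1], n_init))
    ys = [objective(x) for x in xs]
    for _ in range(n_steps):
        mean, var = gp_posterior(np.array(xs), np.array(ys), grid)
        x_next = float(grid[np.argmax(mean + beta * np.sqrt(var))])
        xs.append(x_next)
        ys.append(objective(x_next))
    best = int(np.argmax(ys))
    return xs[best], ys[best]
```

On a smooth toy objective standing in for validation accuracy, e.g. `lambda x: -(x - 0.7) ** 2`, the GP optimiser concentrates its later evaluations near the optimum, which mirrors the abstract's observation that random search wins only in the first few steps, before the GP has a sufficient model.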
