National Repository of Grey Literature: 41 records found, showing records 37 to 41.
Implementation of Deep Learning Algorithm on Embedded Device
Ondrášek, David ; Boštík, Ondřej (referee) ; Horák, Karel (advisor)
This thesis deals with the implementation of an inference model, based on deep learning methods, on an embedded device. First, machine learning and deep learning methods are surveyed with an emphasis on state-of-the-art techniques. Next, suitable hardware is selected; two devices are chosen, the Jetson Nano and the Raspberry Pi. A custom dataset consisting of three classes of candies is then created and used to train a custom inference model through transfer learning. The model is subsequently used in an object detection application, which is deployed on the Jetson Nano and the Raspberry Pi and evaluated.
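The abstract does not list the training toolchain; purely as an illustration, the following Python sketch shows the usual transfer-learning recipe for a small object detector, assuming a torchvision Faster R-CNN with a MobileNetV3 backbone (an assumption, not necessarily what the thesis used), re-headed for three candy classes plus background.

import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Start from a detector pre-trained on COCO and replace only its box
# predictor, so the backbone features are reused (transfer learning).
model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(weights="DEFAULT")
num_classes = 4  # 3 candy classes + background
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Fine-tune the re-headed model on the custom candy dataset.
optimizer = torch.optim.SGD([p for p in model.parameters() if p.requires_grad],
                            lr=0.005, momentum=0.9, weight_decay=5e-4)

For deployment on the Jetson Nano the trained model is typically exported to an optimized runtime (for example ONNX or TensorRT), but the export path depends on the chosen inference engine.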
The effect of the background and dataset size on training of neural networks for image classification
Mikulec, Vojtěch ; Kolařík, Martin (referee) ; Rajnoha, Martin (advisor)
This bachelor thesis deals with the impact of background and dataset size on the training of neural networks for image classification. It describes image-processing techniques based on convolutional neural networks and the influence of background (noise) and dataset size on training, and proposes methods for achieving a faster and more accurate training process. A binary classification task on the Labeled Faces in the Wild dataset is selected, and in each experiment the background is modified by a color change or by cropping. Because dataset size is crucial when training convolutional neural networks, the thesis also experiments with the size of the training set, simulating the real-world problem of data scarcity when training convolutional networks for image classification.
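As an illustrative sketch only (the exact preprocessing and subset sizes are not given in the abstract, so the crop size and fractions below are assumptions), background removal and training-set shrinking can be simulated with torchvision's LFW loader:

import torch
from torch.utils.data import Subset
from torchvision import datasets, transforms

# One pipeline keeps the full image (background included); the other
# centre-crops towards the face region to suppress background pixels.
full_tf = transforms.Compose([transforms.Resize((128, 128)), transforms.ToTensor()])
crop_tf = transforms.Compose([transforms.CenterCrop(100),
                              transforms.Resize((128, 128)), transforms.ToTensor()])

lfw = datasets.LFWPeople(root="data", split="train", transform=crop_tf, download=True)

# Simulate a shortage of data by training on progressively smaller subsets.
for fraction in (1.0, 0.5, 0.25, 0.1):
    n = int(fraction * len(lfw))
    subset = Subset(lfw, torch.randperm(len(lfw))[:n].tolist())
    # ... train the same CNN on `subset` and compare accuracy and convergence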
Shared Experience in Reinforcement Learning
Mojžíš, Radek ; Šůstek, Martin (referee) ; Hradiš, Michal (advisor)
The aim of this thesis is to use methods of transfer learning for training neural network on a reinforcement learning tasks. As test environment, I am  using old 2D console games, such as space invaders or phoenix. I am testing the impact of re-purposing already trained models for different environments. Next I use methods for domain feature transfer. Lastly i focus on the topic of multi-task learning. From the results we can gain insight into possibilities of using transfer learning for reinforcement learning algorithms.
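A minimal sketch of this kind of weight re-use, assuming 84x84 four-frame Atari inputs and illustrative action counts (6 for Space Invaders, 8 for Phoenix; not taken from the thesis):

import copy
import torch.nn as nn

class AtariPolicy(nn.Module):
    # Small convolutional policy: the conv trunk learns visual features,
    # the final linear layer maps them to environment-specific actions.
    def __init__(self, n_actions):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(4, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten(), nn.Linear(64 * 9 * 9, 512), nn.ReLU())
        self.head = nn.Linear(512, n_actions)

    def forward(self, x):
        return self.head(self.trunk(x))

# Transfer: copy the trunk trained on one game and re-initialise only the
# action head for the new game, whose action set may differ.
source = AtariPolicy(n_actions=6)           # e.g. trained on Space Invaders
target = AtariPolicy(n_actions=8)           # to be trained on Phoenix
target.trunk = copy.deepcopy(source.trunk)  # transferred visual features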
Reinforcement Learning for 3D Games
Beránek, Michal ; Herout, Adam (referee) ; Hradiš, Michal (advisor)
This thesis deals with training neural networks on simple tasks in the 3D shooter Doom, mediated by the ViZDoom research platform. The main goal is to create an agent that is able to learn multiple tasks simultaneously. The reinforcement learning algorithm used to achieve this goal is Rainbow, which combines several improvements to the DQN algorithm. I proposed and experimented with two different neural network architectures for learning multiple tasks. One of them was successful and, after a relatively short period of learning, reached almost 50% of the maximum possible reward. The key element of this result is an embedding layer providing a parametric description of the task environment. The main finding is that Rainbow is able to learn in a 3D environment and, with the help of the embedding layer, to learn multiple tasks simultaneously.
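The abstract does not give the exact architecture; below is a minimal sketch of the embedding idea, assuming 84x84 grayscale ViZDoom frames and a plain DQN head (Rainbow additionally uses dueling, distributional, and noisy layers, omitted here for brevity):

import torch
import torch.nn as nn

class MultiTaskQNet(nn.Module):
    # Screen features from a conv trunk are concatenated with a learned
    # task embedding, so one Q-network can serve several scenarios.
    def __init__(self, n_tasks, n_actions, embed_dim=16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(), nn.Flatten())
        self.task_embed = nn.Embedding(n_tasks, embed_dim)
        self.q_head = nn.Sequential(
            nn.Linear(64 * 9 * 9 + embed_dim, 512), nn.ReLU(),
            nn.Linear(512, n_actions))

    def forward(self, screen, task_id):
        features = self.conv(screen)         # screen: (batch, 1, 84, 84)
        task_vec = self.task_embed(task_id)  # task_id: (batch,)
        return self.q_head(torch.cat([features, task_vec], dim=1))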
Exploring the Task of Universal Semantic Tagging Using Neural Networks, by Solving Other Tasks and with Multilingual Learning
Abdou, Mostafa ; Vidová Hladká, Barbora (advisor) ; Libovický, Jindřich (referee)
In this thesis we present an investigation of multi-task and transfer learning using the recently introduced task of semantic tagging. First, we employ a number of natural language processing tasks as auxiliaries for semantic tagging. Secondly, going in the other direction, we employ semantic tagging as an auxiliary task for three different NLP tasks: Part-of-Speech Tagging, Universal Dependency parsing, and Natural Language Inference. We compare full neural network sharing, partial neural network sharing, and what we term the "learning what to share" setting, where negative transfer between tasks is less likely. Finally, we investigate multilingual learning framed as a special case of multi-task learning. Our findings show considerable improvements for most experiments, demonstrating a variety of cases where multi-task and transfer learning methods are beneficial.
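A minimal sketch of the full (hard) sharing variant, where one sequence encoder is shared by all tasks and only the output layers are task-specific (layer sizes are illustrative, not taken from the thesis):

import torch.nn as nn

class SharedTagger(nn.Module):
    # Hard parameter sharing: one encoder, one classification head per task
    # (e.g. semantic tags, POS tags), trained on alternating task batches.
    def __init__(self, vocab_size, n_labels_per_task, emb_dim=100, hid_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        self.heads = nn.ModuleList(
            [nn.Linear(2 * hid_dim, n) for n in n_labels_per_task])

    def forward(self, tokens, task_id):
        hidden, _ = self.encoder(self.embed(tokens))
        return self.heads[task_id](hidden)  # per-token label scores for that task

Partial sharing and the "learning what to share" setting would instead give each task some private encoder layers or a learned, gated combination of shared and private representations.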
