National Repository of Grey Literature: 41 records found, showing records 32–41.
Bayesian transfer learning between autoregressive inference tasks
Barber, Alec ; Quinn, Anthony
Bayesian transfer learning typically relies on a complete stochastic dependence specification between the source and target learners, which provides the opportunity for Bayesian conditioning. We argue that any requirement to design or assume a full model between the target and its sources is a restrictive form of transfer learning.
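The Bayesian conditioning referred to here can be illustrated in its simplest conjugate form, where the source posterior is used as the target prior; this is a minimal numerical sketch for an AR(1) coefficient with known noise variance (all names, values, and the conjugate setup are illustrative, not the paper's proposed method):

```python
import numpy as np

# Conjugate Bayesian update for the AR(1) coefficient a in
# x_t = a * x_{t-1} + e_t,  e_t ~ N(0, sigma^2), sigma^2 known.
def posterior(x, m0, v0, sigma2=1.0):
    xp, xc = x[:-1], x[1:]                        # (x_{t-1}, x_t) pairs
    vn = 1.0 / (1.0 / v0 + (xp @ xp) / sigma2)    # posterior variance
    mn = vn * (m0 / v0 + (xp @ xc) / sigma2)      # posterior mean
    return mn, vn

rng = np.random.default_rng(0)
source = rng.standard_normal(500).cumsum() * 0.1  # stand-in source series
target = rng.standard_normal(50).cumsum() * 0.1   # short target record

# Transfer by conditioning: the source posterior becomes the target prior.
m_src, v_src = posterior(source, m0=0.0, v0=10.0)
m_tgt, v_tgt = posterior(target, m_src, v_src)
print(m_tgt, v_tgt)
```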
Interactive 3D CT Data Segmentation Based on Deep Learning
Trávníčková, Kateřina ; Hradiš, Michal (referee) ; Kodym, Oldřich (advisor)
This thesis deals with CT data segmentation using convolutional neural networks and describes the problem of training with limited training sets. User interaction is suggested as a means of improving segmentation quality for models trained on small training sets, and the possibility of using transfer learning is also considered. All of the chosen methods improve segmentation quality in comparison with the baseline method, an automatic data-specific segmentation model. Segmentation improves by tens of percentage points in Dice score when models are trained on very small datasets. These methods can be used, for example, to simplify the creation of a new segmentation dataset.
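For reference, the Dice score used to report these improvements measures the overlap between a predicted and a reference mask, 2|A∩B| / (|A|+|B|); a minimal NumPy sketch, assuming binary masks:

```python
import numpy as np

def dice_score(pred, target):
    """Dice coefficient between two binary masks: 2|A n B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0
```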
Low-Resource Neural Machine Translation
Filo, Denis ; Fajčík, Martin (referee) ; Jon, Josef (advisor)
This thesis deals with neural machine translation (NMT) for low-resource languages. The goal was to evaluate current techniques experimentally and to suggest improvements. The translation systems in this thesis used the Transformer neural network architecture and were trained with the Marian framework. The selected language pairs were Slovak-Croatian and Slovak-Serbian. The experiments focused on transfer learning and semi-supervised learning techniques.
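The parent-child transfer technique evaluated here can be sketched in a few lines. The following is a generic PyTorch illustration, not the thesis's actual Marian setup, and it assumes a subword vocabulary shared between the parent and child language pairs:

```python
import torch
import torch.nn as nn

VOCAB = 32000  # illustrative shared subword vocabulary size

def build_model():
    # Toy seq2seq: embeddings + a standard Transformer + output projection.
    return nn.ModuleDict({
        "embed": nn.Embedding(VOCAB, 512),
        "transformer": nn.Transformer(d_model=512, batch_first=True),
        "proj": nn.Linear(512, VOCAB),
    })

parent = build_model()
# ... train `parent` on a high-resource pair, then save it:
torch.save(parent.state_dict(), "parent.pt")

child = build_model()
child.load_state_dict(torch.load("parent.pt"))  # warm-start every weight
# ... continue ordinary training on the low-resource pair; nothing is
# frozen, the whole model is simply fine-tuned on the child data.
```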
Spoken Language Translation via Phoneme Representation of the Source Language
Polák, Peter ; Bojar, Ondřej (advisor) ; Peterek, Nino (referee)
We refactor the traditional two-step approach to automatic speech recognition for spoken language translation. Instead of conventional graphemes, we use phonemes as the intermediate speech representation. Starting with the acoustic model, we revise the cross-lingual transfer and propose a coarse-to-fine method providing a further speed-up and performance boost. We then review the translation model, experimenting with source and target encodings and boosting robustness by utilizing fine-tuning and transfer across ASR and SLT. We empirically document that this conventional setup with an alternative representation not only performs well on standard test sets but also provides robust transcripts and translations on challenging (e.g., non-native) test sets. Notably, our ASR system outperforms commercial ASR systems.
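Schematically, the revised pipeline keeps the two-step structure but swaps graphemes for phonemes at the ASR/MT interface; in the sketch below both components are hypothetical stand-ins for trained models, and the outputs are made up:

```python
# Two-step spoken language translation with phonemes as the interface.

def acoustic_model(audio):              # ASR acoustic model -> phonemes
    return "g uh d m ao r n ih ng"      # illustrative ARPAbet-like output

def translation_model(phonemes):        # NMT trained on phoneme-side source
    return "dobré ráno"                 # illustrative target-language text

def translate_speech(audio):
    phonemes = acoustic_model(audio)        # step 1: speech -> phonemes
    return translation_model(phonemes)      # step 2: phonemes -> text

print(translate_speech(b"<raw audio>"))
```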
Exploring Benefits of Transfer Learning in Neural Machine Translation
Kocmi, Tom ; Bojar, Ondřej (advisor) ; van Genabith, Josef (referee) ; Cuřin, Jan (referee)
Keywords: transfer learning, machine translation, deep neural networks, low-resource languages
Abstract: Neural machine translation is known to require large numbers of parallel training sentences, which generally prevents it from excelling on low-resource language pairs. This thesis explores the use of cross-lingual transfer learning on neural networks as a way of solving the problem of the lack of resources. We propose several transfer learning approaches to reuse a model pretrained on a high-resource language pair, paying particular attention to the simplicity of the techniques. We study two scenarios: (a) when we reuse the high-resource model without any prior modifications to its training process and (b) when we can prepare the first-stage high-resource model for transfer learning in advance. For the former scenario, we present a proof-of-concept method by reusing a model trained by other researchers. In the latter scenario, we present a method which reaches even larger improvements in translation performance. Apart from the proposed techniques, we focus on an...
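One practical difference between the two scenarios is how the subword vocabulary is prepared; the sketch below illustrates this with SentencePiece, under the assumption of BPE subwords, with illustrative file names and sizes (it is not the thesis's actual code):

```python
import sentencepiece as spm

# Scenario (b), warm-start: build one joint subword vocabulary over the
# parent (high-resource) and child (low-resource) corpora *before* training
# the parent, so transferred embeddings already cover child subwords.
spm.SentencePieceTrainer.train(
    input="parent_corpus.txt,child_corpus.txt",  # illustrative paths
    model_prefix="joint_bpe",
    vocab_size=32000,
    model_type="bpe",
)

sp = spm.SentencePieceProcessor(model_file="joint_bpe.model")
print(sp.encode("Ahoj svet", out_type=str))

# Scenario (a), cold-start, differs in that the parent's existing vocabulary
# is reused as-is and the child data is simply segmented with it.
```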
Implementation of Deep Learning Algorithm on Embedded Device
Ondrášek, David ; Boštík, Ondřej (referee) ; Horák, Karel (advisor)
This thesis deals with the implementation of an inference model, based on deep learning methods, on an embedded device. First, machine learning and deep learning methods are surveyed with an emphasis on state-of-the-art techniques. Next, suitable hardware is selected; two devices are chosen: Jetson Nano and Raspberry Pi. A custom dataset consisting of three classes of candies is then created and used to train a custom inference model via transfer learning. The model is then used in an object-detection application, which is deployed on both the Jetson Nano and the Raspberry Pi and evaluated.
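Transfer learning for a small custom detection dataset typically amounts to swapping the classification head of a pretrained detector; the sketch below uses torchvision's Faster R-CNN as an assumed example (the abstract does not state which model the thesis actually used):

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# 3 candy classes + 1 background class.
num_classes = 4

# Start from a detector pretrained on COCO and reuse its backbone.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Replace only the box classification head with one sized for our classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
# ... fine-tune on the custom candy dataset as usual.
```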
The effect of the background and dataset size on training of neural networks for image classification
Mikulec, Vojtěch ; Kolařík, Martin (referee) ; Rajnoha, Martin (advisor)
This bachelor thesis deals with the impact of background and dataset size on the training of neural networks for image classification. The work describes techniques of image processing using convolutional neural networks and the influence of background (noise) and dataset size on training, and proposes methods for achieving a faster and more accurate training process. Binary classification on the Labeled Faces in the Wild dataset is selected, and in each experiment the background is modified by recoloring or cropping. Because dataset size is crucial for training convolutional neural networks, the work also experiments with the training-set size, simulating the real-world problem of data scarcity when training convolutional neural networks for image classification.
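The two background modifications can be illustrated with small PIL helpers; the face-box coordinates below are made up, assuming the roughly centered faces of 250x250 LFW images:

```python
from PIL import Image

FACE_BOX = (70, 70, 180, 180)  # illustrative (left, upper, right, lower)

def crop_background(img):
    """Remove the background entirely by cropping to the face region."""
    return img.crop(FACE_BOX)

def recolor_background(img, color=(0, 255, 0)):
    """Replace everything outside the face box with a solid colour."""
    face = img.crop(FACE_BOX)
    out = Image.new("RGB", img.size, color)
    out.paste(face, FACE_BOX[:2])
    return out
```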
Shared Experience in Reinforcement Learning
Mojžíš, Radek ; Šůstek, Martin (referee) ; Hradiš, Michal (advisor)
The aim of this thesis is to use transfer learning methods for training neural networks on reinforcement learning tasks. As test environments, I use old 2D console games such as Space Invaders or Phoenix. I test the impact of re-purposing already trained models for different environments. Next, I use methods for domain feature transfer. Lastly, I focus on the topic of multi-task learning. The results give insight into the possibilities of using transfer learning with reinforcement learning algorithms.
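Re-purposing a trained model for a new game usually means copying the convolutional trunk and re-initializing the action head; a PyTorch sketch under that assumption (the network shape and action counts are illustrative):

```python
import torch.nn as nn

def make_net(n_actions):
    # DQN-style network over stacked 84x84 game frames.
    return nn.Sequential(
        nn.Conv2d(4, 32, 8, stride=4), nn.ReLU(),
        nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
        nn.Flatten(),
        nn.LazyLinear(512), nn.ReLU(),
        nn.Linear(512, n_actions),
    )

source = make_net(n_actions=6)   # e.g., trained on Space Invaders
target = make_net(n_actions=8)   # new environment, e.g., Phoenix

# Transfer the convolutional feature extractor; the head stays random
# because the new game has a different action space.
target[0].load_state_dict(source[0].state_dict())
target[2].load_state_dict(source[2].state_dict())
```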
Reinforcement Learning for 3D Games
Beránek, Michal ; Herout, Adam (referee) ; Hradiš, Michal (advisor)
This thesis deals with neural network learning on simple tasks in the 3D shooter Doom, mediated by the ViZDoom research platform. The main goal is to create an agent that is able to learn multiple tasks simultaneously. The reinforcement learning algorithm used to achieve this goal is Rainbow, which combines several improvements to the DQN algorithm. I proposed and experimented with two different neural network architectures for learning multiple tasks. One of them was successful: after a relatively short period of learning, it reached almost 50% of the maximum possible reward. The key element of this achievement is an embedding layer providing a parametric description of the task environment. The main finding is that Rainbow is able to learn in a 3D environment and, with the help of the embedding layer, is able to learn multiple tasks simultaneously.
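The task-conditioning idea can be sketched as a Q-network whose visual features are augmented with a learned task embedding; the sizes, and the use of integer task ids, are assumptions of this sketch:

```python
import torch
import torch.nn as nn

class MultiTaskQNet(nn.Module):
    def __init__(self, n_tasks, n_actions):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # Learned parametric description of which task the agent is solving.
        self.task_embed = nn.Embedding(n_tasks, 16)
        self.head = nn.Sequential(nn.LazyLinear(512), nn.ReLU(),
                                  nn.Linear(512, n_actions))

    def forward(self, frames, task_id):
        feats = self.conv(frames)
        task = self.task_embed(task_id)                  # (batch, 16)
        return self.head(torch.cat([feats, task], dim=1))
```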
Exploring the Task of Universal Semantic Tagging Using Neural Networks, Auxiliary Tasks, and Multilingual Learning
Abdou, Mostafa ; Vidová Hladká, Barbora (advisor) ; Libovický, Jindřich (referee)
In this thesis we present an investigation of multi-task and transfer learning using the recently introduced task of semantic tagging. First, we employ a number of natural language processing tasks as auxiliaries for semantic tagging. Secondly, going in the other direction, we employ semantic tagging as an auxiliary task for three different NLP tasks: Part-of-Speech Tagging, Universal Dependency parsing, and Natural Language Inference. We compare full neural network sharing, partial neural network sharing, and what we term the "learning what to share" setting, where negative transfer between tasks is less likely. Finally, we investigate multilingual learning framed as a special case of multi-task learning. Our findings show considerable improvements for most experiments, demonstrating a variety of cases where multi-task and transfer learning methods are beneficial.
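Full (hard) parameter sharing, the first of the compared settings, can be sketched as a single shared encoder with one classification head per task; the sizes and the two-task setup below are illustrative:

```python
import torch.nn as nn

class SharedTagger(nn.Module):
    """Hard parameter sharing: one encoder, task-specific output heads."""
    def __init__(self, vocab, n_semtags, n_postags):
        super().__init__()
        self.embed = nn.Embedding(vocab, 128)
        self.encoder = nn.LSTM(128, 256, bidirectional=True,
                               batch_first=True)
        # The heads are separate; everything below them is shared,
        # so gradients from both tasks update the same encoder.
        self.sem_head = nn.Linear(512, n_semtags)
        self.pos_head = nn.Linear(512, n_postags)

    def forward(self, tokens, task):
        h, _ = self.encoder(self.embed(tokens))   # (batch, seq, 512)
        return self.sem_head(h) if task == "sem" else self.pos_head(h)
```

Partial sharing would instead give each task its own upper encoder layers on top of a shared lower layer.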
