Národní úložiště šedé literatury (National Repository of Grey Literature)
Semi-Supervised Speech-to-Text Recognition with Text-to-Speech Critic
Baskar, Murali Karthick ; Manohar, Vimal (opponent) ; Trmal, Jan (opponent) ; Burget, Lukáš (supervisor)
Sequence-to-sequence (seq2seq) automatic speech recognition (ASR) models require large quantities of training data to attain good performance. For this reason, unsupervised and semi-supervised training of seq2seq models has recently attracted a surge of interest. This work builds upon recent results showing notable improvements in semi-supervised training using cycle-consistency and related techniques. Such techniques derive training procedures and losses able to leverage unpaired speech and/or text data by combining ASR with text-to-speech (TTS) models. This thesis first proposes a new semi-supervised modelling framework combining an end-to-end differentiable ASR->TTS loss with a TTS->ASR loss. The method is able to leverage unpaired speech and text data and outperforms recently proposed related techniques in terms of word error rate (WER). We provide extensive results analysing the impact of data quantity as well as the contribution of the speech and text modalities in recovering errors, and show consistent gains across the WSJ and LibriSpeech corpora. The thesis also discusses the limitations of the ASR<->TTS model under out-of-domain data conditions. We propose an enhanced ASR<->TTS (EAT) model incorporating two main features: 1) the ASR->TTS pipeline is equipped with a language-model reward to penalize ASR hypotheses before forwarding them to TTS; and 2) a speech regularizer, trained in an unsupervised fashion, is introduced in TTS->ASR to correct the synthesized speech before sending it to the ASR model. Training strategies and the effectiveness of the EAT model are explored and compared with data-augmentation approaches. The results show that EAT reduces the performance gap between supervised and semi-supervised training by an absolute WER improvement of 2.6% and 2.7% on LibriSpeech and BABEL, respectively.
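The two cycle losses described in the abstract can be illustrated with a toy example. The PyTorch sketch below replaces the thesis's attention-based seq2seq ASR and TTS models with single linear layers; ToyASR and ToyTTS are hypothetical names, and all dimensions and loss choices are assumptions for illustration, not the thesis implementation. It shows how unpaired speech feeds an ASR->TTS reconstruction loss while unpaired text feeds a TTS->ASR transcription loss:

```python
# A minimal, hypothetical sketch of the ASR<->TTS cycle-consistency idea.
# Module names, dimensions, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn

class ToyASR(nn.Module):
    """Maps speech features (B, T, F) to token logits (B, T, V)."""
    def __init__(self, feat_dim=80, vocab=32):
        super().__init__()
        self.proj = nn.Linear(feat_dim, vocab)
    def forward(self, speech):
        return self.proj(speech)

class ToyTTS(nn.Module):
    """Maps token posteriors (B, T, V) back to speech features (B, T, F)."""
    def __init__(self, vocab=32, feat_dim=80):
        super().__init__()
        self.proj = nn.Linear(vocab, feat_dim)
    def forward(self, token_posteriors):
        return self.proj(token_posteriors)

asr, tts = ToyASR(), ToyTTS()

# ASR->TTS cycle on unpaired *speech*: transcribe, resynthesize, and require
# the synthesized features to match the input (end-to-end differentiable).
unpaired_speech = torch.randn(4, 100, 80)
posteriors = asr(unpaired_speech).softmax(dim=-1)
reconstructed = tts(posteriors)
speech_cycle_loss = nn.functional.l1_loss(reconstructed, unpaired_speech)

# TTS->ASR cycle on unpaired *text*: synthesize speech from the text, run ASR
# on it, and score the recovered transcript against the input text.
unpaired_text = torch.randint(0, 32, (4, 100))
synth_speech = tts(nn.functional.one_hot(unpaired_text, 32).float())
logits = asr(synth_speech)
text_cycle_loss = nn.functional.cross_entropy(
    logits.reshape(-1, 32), unpaired_text.reshape(-1))

total = speech_cycle_loss + text_cycle_loss  # loss weights omitted
total.backward()
```

In the actual framework, the hypotheses passed between the models are variable-length sequences produced by seq2seq decoders, so gradient propagation is more involved than in this per-frame toy; the sketch only captures the structure of the two losses.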
Speech Enhancement with Cycle-Consistent Neural Networks
Karlík, Pavol ; Černocký, Jan (opponent) ; Žmolíková, Kateřina (supervisor)
Deep neural networks (DNNs) have become a standard approach to speech enhancement (SE). The training process of a neural network can be extended with a second neural network that learns to insert noise into a clean speech signal. The two networks can then be used in combination to reconstruct clean and noisy speech samples. This thesis focuses on this technique, called cycle-consistency. Cycle-consistency improves the robustness of the model without modifying the speech-enhancement network itself, as it exposes the SE network to a much larger variety of noisy data. However, the method normally requires paired input-target training data, which are not always available. We therefore use generative adversarial networks (GANs) with a cycle-consistency constraint to train the network on unpaired data. We perform a large number of experiments using both paired and unpaired training data. Our results show that adding cycle-consistency significantly improves the models' performance.
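The unpaired setting described above is a CycleGAN-style setup. The following toy PyTorch sketch shows the loss structure: an enhancer G_enh (noisy -> clean), a corruptor F_noise (clean -> noisy), and a discriminator for clean speech. All module names, sizes, and the cycle weight are hypothetical assumptions, not the thesis configuration:

```python
# A hypothetical toy sketch of cycle-consistent GAN training for speech
# enhancement with unpaired data; names and sizes are assumed for illustration.
import torch
import torch.nn as nn

feat_dim = 257  # e.g. magnitude-spectrogram bins (an assumed feature size)

# G_enh: noisy -> clean (the SE network); F_noise: clean -> noisy (adds noise)
G_enh = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                      nn.Linear(256, feat_dim))
F_noise = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                        nn.Linear(256, feat_dim))
# D_clean scores whether a frame looks like clean speech (per-frame critic)
D_clean = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                        nn.Linear(128, 1))

noisy = torch.rand(8, feat_dim)  # unpaired noisy frames
clean = torch.rand(8, feat_dim)  # unpaired clean frames (other utterances)

# Adversarial term: enhanced speech should fool the clean-speech critic.
adv_loss = nn.functional.binary_cross_entropy_with_logits(
    D_clean(G_enh(noisy)), torch.ones(8, 1))

# Cycle terms: noisy -> enhanced -> re-noised should match the noisy input,
# and clean -> corrupted -> enhanced should match the clean input.
cycle_loss = (nn.functional.l1_loss(F_noise(G_enh(noisy)), noisy)
              + nn.functional.l1_loss(G_enh(F_noise(clean)), clean))

lam = 10.0  # cycle weight; a common CycleGAN default, assumed here
gen_loss = adv_loss + lam * cycle_loss
gen_loss.backward()  # discriminator update omitted for brevity
```

The L1 cycle terms are what permit training without paired data: each network is supervised by the other's inversion of its output rather than by a ground-truth target.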
