Národní úložiště šedé literatury (National Repository of Grey Literature): 22 records found, showing records 11-20.
Semi-Supervised Training of Deep Neural Networks for Speech Recognition
Veselý, Karel ; Ircing, Pavel (referee) ; Lamel, Lori (referee) ; Burget, Lukáš (supervisor)
In this thesis, we first present the theory of neural network training for speech recognition, along with our implementation, which is available as the 'nnet1' training recipe in the Kaldi toolkit. The recipe contains RBM pre-training, mini-batch frame Cross-Entropy training and sequence-discriminative sMBR training. Then we continue with the main topic of this thesis: semi-supervised training of DNN-based ASR systems. Inspired by the literature survey and our initial experiments, we investigated several problems: first, whether confidences are better calculated per-sentence, per-word or per-frame; second, whether the confidences should be used for data selection or data weighting. Both approaches are compatible with the framework of weighted mini-batch SGD training. Then we tried to gain better insight into confidence calibration, more precisely whether it can improve the efficiency of semi-supervised training. We also investigated how the model should be re-tuned with the correctly transcribed data. Finally, we proposed a simple recipe that avoids a grid search over hyper-parameters and is therefore very practical for general use with any dataset. The experiments were conducted on several datasets: for Babel Vietnamese with 10 hours of transcribed speech, the Word Error Rate (WER) was reduced by 2.5%; for Switchboard English with 14 hours of transcribed speech, the WER was reduced by 3.2%. Although we found it difficult to further improve the performance of semi-supervised training by enhancing the confidences, we still believe that our findings are of significant practical value: untranscribed data are abundant and easy to obtain, and our proposed solution brings solid WER improvements and is not difficult to replicate.
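To make the data-weighting variant concrete, here is a minimal sketch of one confidence-weighted mini-batch step, written in PyTorch rather than the Kaldi 'nnet1' code the thesis describes; the function and tensor names are hypothetical, and per-frame weighting is only one of the options discussed in the abstract.

    import torch
    import torch.nn.functional as F

    def weighted_ce_step(model, optimizer, feats, labels, confidences):
        """One weighted mini-batch SGD step (hypothetical illustration).

        feats       : (N, D) acoustic feature frames
        labels      : (N,)   targets obtained by decoding untranscribed data with a seed model
        confidences : (N,)   per-frame confidences in [0, 1], used as frame weights
        """
        optimizer.zero_grad()
        logits = model(feats)
        per_frame_loss = F.cross_entropy(logits, labels, reduction="none")
        # Weight each frame's loss by its confidence; low-confidence frames contribute less.
        loss = (confidences * per_frame_loss).sum() / confidences.sum().clamp(min=1e-8)
        loss.backward()
        optimizer.step()
        return loss.item()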
Exploiting Uncertainty Information in Speaker Verification and Diarization
Silnova, Anna ; Šmídl, Václav (referee) ; Villalba Lopez, Jesus Antonio (referee) ; Burget, Lukáš (supervisor)
This thesis considers two models that allow utilizing uncertainty information in the tasks of Automatic Speaker Verification and Speaker Diarization. The first model we consider is a modification of the widely used Gaussian Probabilistic Linear Discriminant Analysis (G-PLDA) that models the distribution of the vector utterance representations called embeddings. In G-PLDA, the embeddings are assumed to be generated by adding a noise vector sampled from a Gaussian distribution to a speaker-dependent vector. We show that when assuming that the noise was instead sampled from a Student's t-distribution, the PLDA model (we call this version heavy-tailed PLDA) can use the uncertainty information when making the verification decisions. Our model is conceptually similar to the HT-PLDA model defined by Kenny et al. in 2010, but, as we show in this thesis, it allows for fast scoring, while the original HT-PLDA definition requires considerable time and computational resources for scoring. We present the algorithm to train our version of HT-PLDA as a generative model. We also consider various strategies for discriminatively training the parameters of the model. We test the performance of generatively and discriminatively trained HT-PLDA on the speaker verification task. The results indicate that HT-PLDA performs on par with the standard G-PLDA while having the advantage of being more robust against variations in the data pre-processing. Experiments on speaker diarization demonstrate that the HT-PLDA model not only provides better performance than the G-PLDA baseline model but also has the advantage of producing better-calibrated Log-Likelihood Ratio (LLR) scores. In the second model, unlike in HT-PLDA, we do not consider the embeddings as the observed data. Instead, in this model, the embeddings are normally distributed hidden variables. The embedding precision carries the information about the quality of the speech segment: for clean long segments, the precision should be high, and for short and noisy utterances, it should be low. We show how such probabilistic embeddings can be incorporated into the G-PLDA framework and how the parameters of the hidden embedding influence its impact when computing the likelihood with this model. In the experiments, we demonstrate how to utilize an existing neural network (NN) embedding extractor to provide not embeddings but the parameters of a probabilistic embedding distribution. We test the performance of the probabilistic embeddings model on the speaker diarization task. The results demonstrate that this model provides well-calibrated LLR scores, allowing for better diarization when no development dataset is available to tune the clustering algorithm.
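For reference, the G-PLDA baseline that both extensions build on scores a verification trial with a log-likelihood ratio between the same-speaker and different-speaker hypotheses. The sketch below is a minimal two-covariance formulation of that baseline score in Python/NumPy; it is not the heavy-tailed or probabilistic-embedding model proposed in the thesis, and the variable names are hypothetical.

    import numpy as np
    from scipy.stats import multivariate_normal

    def gplda_llr(e1, e2, B, W):
        """Verification LLR under a two-covariance G-PLDA model (baseline illustration).

        e1, e2 : (D,) mean-centred embeddings of the enrolment and test utterances
        B, W   : (D, D) between-speaker and within-speaker covariance matrices
        """
        D = len(e1)
        x = np.concatenate([e1, e2])
        # Same-speaker hypothesis: a shared speaker variable couples the two embeddings.
        cov_same = np.block([[B + W, B], [B, B + W]])
        # Different-speaker hypothesis: the two embeddings are independent.
        cov_diff = np.block([[B + W, np.zeros((D, D))], [np.zeros((D, D)), B + W]])
        zero = np.zeros(2 * D)
        return (multivariate_normal.logpdf(x, mean=zero, cov=cov_same)
                - multivariate_normal.logpdf(x, mean=zero, cov=cov_diff))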
Discovering Acoustic Units from Speech: a Bayesian Approach
Ondel, Lucas Antoine Francois ; Häb-Umbach, Reinhold (referee) ; Glass, Jim (referee) ; Burget, Lukáš (supervisor)
From an early age, infants show an innate ability to infer linguistic structures from the speech signal, long before they learn to read and write. In contrast, modern speech recognition systems require large collections of transcribed data to achieve a low error rate. The relatively recent field of Unsupervised Speech Learning is dedicated to endowing machines with a similar ability. As a part of this ongoing effort, this thesis focuses on the problem of discovering a set of acoustic units from a language given untranscribed audio recordings. In particular, we explore the potential of Bayesian inference to address this problem. First, we revisit the state-of-the-art non-parametric Bayesian model for the task of acoustic unit discovery and derive a fast and efficient Variational Bayes inference algorithm. Our approach relies on the stick-breaking construction of the Dirichlet Process, which allows expressing the model as a Hidden Markov Model-based phone loop. With this model and a suitable mean-field approximation of the variational posterior, inference is performed with an efficient iterative algorithm similar to the Expectation-Maximization scheme. Experiments show that this approach performs a better clustering than the original model while being orders of magnitude faster. Second, we address the problem of defining a meaningful a priori distribution over the potential acoustic units. To do so, we introduce the Generalized Subspace Model, a theoretical framework that allows defining distributions over low-dimensional manifolds in a high-dimensional parameter space. Using this tool, we learn a phonetic subspace, a continuum of phone embeddings, from several languages with transcribed recordings. This phonetic subspace is then used to constrain our system to discover acoustic units that are similar to phones from other languages. Experimental results show that this approach significantly improves the clustering quality as well as the segmentation accuracy of the acoustic unit discovery system. Finally, we enhance our acoustic unit discovery model by using a Hierarchical Dirichlet Process prior instead of the simple Dirichlet Process. By doing so, we introduce a Bayesian bigram phonotactic language model into the acoustic unit discovery system. This approach captures the phonetic structure of the target language more accurately and consequently helps the clustering of the speech signal. Also, to fully exploit the benefits of the phonotactic language model, we derive a modified Variational Bayes algorithm that can balance the relative influence of the acoustic and language models during inference.
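The stick-breaking construction mentioned in the abstract defines the prior over acoustic-unit weights. Below is a small truncated stick-breaking sketch in Python/NumPy; it illustrates only the prior over unit weights, not the full variational phone-loop inference, and the function name and truncation scheme are assumptions.

    import numpy as np

    def truncated_stick_breaking(alpha, truncation, seed=None):
        """Sample mixture weights from a truncated stick-breaking construction of a Dirichlet Process.

        alpha      : concentration parameter of the Dirichlet Process
        truncation : maximum number of acoustic units (sticks) kept
        """
        rng = np.random.default_rng(seed)
        v = rng.beta(1.0, alpha, size=truncation)              # stick-breaking proportions
        remaining = np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
        weights = v * remaining                                # weight of each unit
        weights[-1] = 1.0 - weights[:-1].sum()                 # absorb leftover mass into the last stick
        return weights

    # Example: a prior draw of how probability mass spreads over up to 50 candidate units.
    print(truncated_stick_breaking(alpha=2.0, truncation=50, seed=0))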
Fixed-point implementace rozpoznávače řeči (Fixed-point implementation of a speech recognizer)
Král, Tomáš ; Černocký, Jan (referee) ; Burget, Lukáš (supervisor)
This master's thesis deals with automatic speech recognition on systems with limited hardware resources (embedded systems). The goal of the project is to design and implement a speech recognition system for embedded systems that have no floating-point computation units. First, a suitable hardware architecture was chosen, and a speech recognition solution was designed with respect to the resources available on that architecture. The individual parts of the recognition system were then optimized during development so that they could be deployed on the chosen hardware. The result of the work is the recognition of Czech numerals on an embedded system.
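On hardware without a floating-point unit, the recognizer's arithmetic is typically mapped onto fixed-point integer operations. The following sketch shows a generic Q15 fixed-point multiply and dot product in Python/NumPy as an illustration of the idea; it is not taken from the thesis, and the choice of Q15 and the helper names are assumptions.

    import numpy as np

    Q = 15                       # Q15 format: 1 sign bit, 15 fractional bits
    SCALE = 1 << Q

    def to_q15(x):
        """Quantize a float in [-1, 1) to a Q15 integer."""
        return np.int16(np.clip(np.round(x * SCALE), -SCALE, SCALE - 1))

    def q15_mul(a, b):
        """Multiply two Q15 numbers; the Q30 product is rounded and shifted back to Q15."""
        prod = np.int32(a) * np.int32(b)
        return np.int16(np.clip((prod + (1 << (Q - 1))) >> Q, -SCALE, SCALE - 1))

    def q15_dot(xs, ws):
        """Dot product as it might appear in a fixed-point feature-extraction loop."""
        acc = np.int64(0)                        # wide accumulator to avoid overflow
        for x, w in zip(xs, ws):
            acc += np.int32(x) * np.int32(w)     # accumulate in Q30
        return np.int16(np.clip((acc + (1 << (Q - 1))) >> Q, -SCALE, SCALE - 1))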
ASL Fingerspelling Recognition Using Slow Feature Analysis
Winkler, Martin ; Hradiš, Michal (referee) ; Burget, Lukáš (supervisor)
This work describes the process of testing slow feature analysis as a method of extracting robust features from complex image data of American Sign Language. For testing purposes, a system is created in Python that facilitates test runs and offers a rich set of configurable options, allowing the user to run various tests in order to determine how viable the method is for classification and recognition of hand shapes. The theoretical part introduces slow feature analysis, discusses the structure of the system and describes the dataset on which the method is evaluated. In the practical part, the method is subjected to a performance analysis on seen and unseen speakers, its viability with a higher number of gestures is examined, and several input-data formatting variants are tried in an attempt to improve the performance.
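As a reminder of what the method computes, here is a minimal linear Slow Feature Analysis sketch in Python/NumPy: whiten the input, then keep the directions whose temporal derivative has the smallest variance. This is a generic textbook formulation, not the thesis's system, and the function name and thresholds are assumptions.

    import numpy as np

    def linear_sfa(X, n_components):
        """Linear Slow Feature Analysis.

        X : (T, D) time-ordered feature vectors (e.g. flattened hand-image descriptors)
        Returns a (D, n_components) projection; X @ W gives the slowest-varying outputs.
        """
        X = X - X.mean(axis=0)
        # Whiten the input so the slowness objective reduces to a plain eigenproblem.
        cov = np.cov(X, rowvar=False)
        eigval, eigvec = np.linalg.eigh(cov)
        keep = eigval > 1e-10
        whiten = eigvec[:, keep] / np.sqrt(eigval[keep])
        Z = X @ whiten
        # Slow features minimise the variance of the temporal derivative.
        dZ = np.diff(Z, axis=0)
        dval, dvec = np.linalg.eigh(np.cov(dZ, rowvar=False))   # eigenvalues in ascending order
        return whiten @ dvec[:, :n_components]                  # smallest variance = slowest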
Finite-state based recognition networks for forward-backward speech decoding
Hannemann, Mirko ; Schlüter, Ralf (referee) ; Novák, Miroslav (referee) ; Burget, Lukáš (supervisor)
Many tasks can be formulated in the mathematical framework of weighted finite-state transducers (WFSTs). This is also the case for automatic speech recognition (ASR). Nowadays, ASR makes extensive use of composed probabilistic models, called decoding graphs or recognition networks. They are constructed from the individual components via WFST operations such as composition. Each component is a probabilistic knowledge source that constrains the search for the best path through the composed graph, a search called decoding. The use of a coherent framework guarantees that the resulting automata are optimal in a well-defined sense. WFSTs can be optimized with the help of determinization and minimization in a given semiring. The application of these algorithms results in the optimal structure for search, and the optimal distribution of weights is achieved by applying a weight-pushing algorithm. The goal of this thesis is to further develop the recipes and algorithms for the construction of optimal recognition networks. We introduce an alternative weight-pushing algorithm that is suitable for an important class of models: language model transducers, or more generally cyclic WFSTs and WFSTs with failure (back-off) transitions. We also present a recipe to construct recognition networks which are suitable for decoding backwards in time and which, at the same time, are guaranteed to give exactly the same probabilities as the forward recognition network. For that purpose, we develop an algorithm for the exact reversal of back-off language models and their corresponding language model transducers. We apply these backward recognition networks in an optimization technique: in a static network decoder, we use them in a two-pass decoding setup (forward search and backward search). This approach is called tracked decoding and allows incorporating the first-pass decoding into the second-pass decoding by tracking hypotheses from the first-pass lattice. This technique results in significant speed-ups, since it allows decoding with a variable beam width, which is most of the time much smaller than the baseline beam. We also show that it is possible to apply the algorithms in a dynamic network decoder by using an incrementally refining recognition setup. This additionally leads to a partial parallelization of the decoding.
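For orientation, classical weight pushing redistributes arc weights without changing any complete path weight. The sketch below implements that classical algorithm in the tropical (min, +) semiring in plain Python; it is not the alternative pushing algorithm proposed in the thesis (which targets language model transducers with back-off transitions), and the data layout is an assumption.

    import math

    def push_weights_tropical(num_states, arcs, final, start=0):
        """Classical weight pushing toward the initial state in the (min, +) semiring.

        arcs  : list of (src, dst, label, weight)
        final : dict {state: final_weight}
        Returns re-weighted arcs, re-weighted final weights and the leftover initial weight,
        such that every complete path keeps exactly the same total weight.
        """
        # Potential of each state = shortest distance to a final state (Bellman-Ford,
        # which also handles cyclic graphs, assuming there is no negative-weight cycle).
        d = [math.inf] * num_states
        for q, w in final.items():
            d[q] = w
        for _ in range(num_states - 1):
            for src, dst, _, w in arcs:
                if w + d[dst] < d[src]:
                    d[src] = w + d[dst]
        pushed_arcs = [(src, dst, lab, w + d[dst] - d[src]) for src, dst, lab, w in arcs]
        pushed_final = {q: w - d[q] for q, w in final.items()}
        return pushed_arcs, pushed_final, d[start]   # d[start] becomes the initial weight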
Optimization of Gaussian Mixture Subspace Models and Related Scoring Algorithms in Speaker Verification
Glembek, Ondřej ; Brummer, Niko (referee) ; Campbell, William (referee) ; Burget, Lukáš (supervisor)
This thesis deals with Gaussian Mixture Subspace Modeling in automatic speaker recognition. The thesis consists of three parts. In the first part, Joint Factor Analysis (JFA) scoring methods are studied. The methods differ mainly in how they deal with the channel of the tested utterance. The general JFA likelihood function is investigated and the methods are compared both in terms of accuracy and speed. It was found that a linear approximation of the log-likelihood function gives results comparable to the full log-likelihood evaluation while simplifying the formula and dramatically reducing the computation time. In the second part, i-vector extraction is studied and two simplification methods are proposed. The motivation for this part was to allow the use of this state-of-the-art technique on small-scale devices and to set up a simple discriminative-training system. It is shown that, for long utterances, very fast and compact i-vector systems can be obtained at the cost of some accuracy. On a short-utterance (5-second) task, the results of the simplified systems are comparable to the full i-vector extraction. The third part deals with discriminative training in automatic speaker recognition. Previous work in the field is summarized and, based on the knowledge from the earlier chapters of this work, discriminative training of the i-vector extractor parameters is proposed. It is shown that discriminative re-training of the i-vector extractor can improve the system if the initial estimate is computed using the generative approach.
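The standard i-vector extraction that the simplifications start from is the MAP point estimate of a latent factor given the utterance's Baum-Welch statistics. A compact NumPy sketch of that standard formula follows; it shows the baseline computation only, not the simplified extraction methods proposed in the thesis, and the variable names are assumptions.

    import numpy as np

    def extract_ivector(T, Sigma_inv_diag, N, F):
        """Standard i-vector (MAP point estimate) from Baum-Welch statistics.

        T              : (C*D, R) total-variability matrix (C Gaussians, D-dim features, R-dim i-vector)
        Sigma_inv_diag : (C*D,)   inverse diagonal UBM covariances, stacked per Gaussian
        N              : (C,)     zero-order statistics (soft counts per Gaussian)
        F              : (C*D,)   centred first-order statistics, stacked per Gaussian
        """
        C = len(N)
        D = len(F) // C
        R = T.shape[1]
        N_exp = np.repeat(N, D)                               # expand counts to each feature dimension
        TtSi = T.T * Sigma_inv_diag                           # T' Sigma^-1 (diagonal covariance)
        precision = np.eye(R) + TtSi @ (N_exp[:, None] * T)   # I + T' Sigma^-1 N T
        return np.linalg.solve(precision, TtSi @ F)           # posterior mean of the i-vector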
Extensions to Probabilistic Linear Discriminant Analysis for Speaker Recognition
Plchot, Oldřich ; Fousek, Petr (referee) ; McCree, Alan (referee) ; Burget, Lukáš (supervisor)
This thesis deals with probabilistic models for automatic speaker verification. In particular, the Probabilistic Linear Discriminant Analysis (PLDA) model, which models the i-vector representation of speech utterances, is analyzed in detail. The thesis proposes extensions to the standard state-of-the-art PLDA model. The newly proposed Full Posterior Distribution PLDA models the uncertainty associated with the i-vector generation process. A new discriminative approach to training the speaker verification system based on the PLDA model is also proposed. When comparing the original PLDA with the model extended by considering the i-vector uncertainty, results obtained with the extended model show up to 20% relative improvement on tests with short segments of speech. As the test segments get longer (more than one minute), the performance gain of the extended model is lower, but it is never worse than the baseline. Training data are, however, usually available in the form of segments which are sufficiently long, and therefore, in such cases, there is no gain from using the extended model for training. Instead, the training can be performed with the original PLDA model and the extended model can be used if the task is to test on short segments. The discriminative classifier is based on classifying pairs of i-vectors into two classes representing target and non-target trials. The functional form for obtaining the score for every i-vector pair is derived from the PLDA model, and training is based on logistic regression minimizing the cross-entropy error function between the correct labeling of all trials and the probabilistic labeling proposed by the system. The results obtained with the discriminatively trained system are similar to those obtained with the generative baseline, but the discriminative approach shows the ability to output better calibrated scores. This property leads to a better actual verification performance on an unseen evaluation set, which is an important feature for real-use scenarios.
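The discriminative part can be pictured as plain logistic regression over a fixed expansion of each i-vector pair, since a PLDA-style log-likelihood ratio is a quadratic function of the two vectors. The sketch below is a simplified Python/NumPy illustration of that idea with batch gradient descent; it is not the thesis's actual training procedure, and the expansion and hyper-parameters are assumptions.

    import numpy as np

    def pair_expansion(e1, e2):
        """Symmetric quadratic expansion of an i-vector pair; a PLDA-style LLR is linear in these terms."""
        return np.concatenate([
            (np.outer(e1, e2) + np.outer(e2, e1)).ravel(),     # cross terms
            (np.outer(e1, e1) + np.outer(e2, e2)).ravel(),     # within terms
            e1 + e2,                                           # linear terms
            [1.0],                                             # constant / bias term
        ])

    def train_pairwise_logistic(pairs, labels, lr=0.01, epochs=100):
        """Cross-entropy training on expanded pairs (labels: 1 = target trial, 0 = non-target)."""
        X = np.stack([pair_expansion(a, b) for a, b in pairs])
        y = np.asarray(labels, dtype=float)
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-(X @ w)))    # probabilistic labeling proposed by the system
            w -= lr * X.T @ (p - y) / len(y)      # gradient of the cross-entropy error
        return w                                  # score of a new pair: pair_expansion(a, b) @ w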
Rozpoznávání řeči pro leteckou komunikaci (Speech recognition for air traffic communication)
Žmolíková, Kateřina ; Burget, Lukáš (referee) ; Veselý, Karel (supervisor)
This bachelor's thesis deals with speech recognition. Its goal is to build a speech recognition system based on neural networks and to test it on recordings of air traffic communication. The resulting acoustic model will be used in the A-PiMod project. The built system achieved a WER of 29.5% on the test data. A further task of the thesis was to experiment with the neural networks that form part of the acoustic model. The first experiments examined the possibility of simplifying and speeding them up and the impact of this on recognition accuracy. Further experiments dealt with the rectifier activation function and with convolutional neural networks. In the experiments with convolutional neural networks, a 1.5% improvement was achieved, which is 0.4% better than a fully connected neural network with the same architecture.
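To give a flavour of the convolutional acoustic models referred to above, here is a small hypothetical PyTorch sketch of a CNN classifying spliced filterbank frames into senones; the layer sizes are illustrative only and this is not the Kaldi-based setup used in the thesis.

    import torch
    import torch.nn as nn

    class ConvAcousticModel(nn.Module):
        """Toy convolutional acoustic model over stacked filterbank frames.

        Input : (batch, 1, context_frames, n_filterbanks) spliced feature windows
        Output: (batch, n_senones) unnormalized senone scores
        """
        def __init__(self, context_frames=11, n_filterbanks=40, n_senones=2000):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 64, kernel_size=(3, 5)), nn.ReLU(),   # rectifier activations
                nn.MaxPool2d(kernel_size=(1, 3)),
                nn.Conv2d(64, 64, kernel_size=(3, 3)), nn.ReLU(),
            )
            with torch.no_grad():
                flat = self.conv(torch.zeros(1, 1, context_frames, n_filterbanks)).numel()
            self.classifier = nn.Sequential(
                nn.Flatten(), nn.Linear(flat, 1024), nn.ReLU(), nn.Linear(1024, n_senones)
            )

        def forward(self, x):
            return self.classifier(self.conv(x))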
