Národní úložiště šedé literatury
STATISTICAL LANGUAGE MODELS BASED ON NEURAL NETWORKS
Mikolov, Tomáš ; Zweig, Geoffrey (referee) ; Hajič, Jan (referee) ; Černocký, Jan (thesis supervisor)
Statistical language models are a crucial part of many successful applications, such as automatic speech recognition and statistical machine translation (for example, the well-known Google Translate). Traditional techniques for estimating these models are based on N-gram counts. Despite the known weaknesses of N-grams and the huge efforts of research communities across many fields (speech recognition, machine translation, neuroscience, artificial intelligence, natural language processing, data compression, psychology, etc.), N-grams have remained essentially the state of the art. The goal of this thesis is to present various architectures of language models based on artificial neural networks. Although these models are computationally more expensive than N-gram models, the presented techniques make it possible to apply them to state-of-the-art systems efficiently. The achieved reductions in the word error rate of speech recognition systems are up to 20% against a state-of-the-art N-gram model. The presented recurrent neural network based model achieves the best published performance on the well-known Penn Treebank setup.
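To make the described approach concrete, below is a minimal sketch of an Elman-style recurrent neural network language model of the kind the thesis studies: the hidden state is updated from the current word and the previous hidden state, and a softmax over the vocabulary predicts the next word. The vocabulary size, hidden-layer size, weight initialization, and toy word-id sequence are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np

# Minimal sketch of an Elman-style recurrent neural network language model.
# Sizes below are illustrative placeholders, not values from the thesis.
VOCAB = 10000   # |V|: vocabulary size (assumption)
HIDDEN = 100    # hidden-layer size (assumption)

rng = np.random.default_rng(0)
U = rng.normal(0, 0.1, (HIDDEN, VOCAB))   # input -> hidden weights
W = rng.normal(0, 0.1, (HIDDEN, HIDDEN))  # hidden -> hidden (recurrent) weights
V = rng.normal(0, 0.1, (VOCAB, HIDDEN))   # hidden -> output weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def step(word_id, s_prev):
    """One time step: consume word w(t), return the new hidden state s(t)
    and y(t), the predicted distribution over the next word."""
    # Multiplying U by a one-hot input vector reduces to a column lookup.
    s = sigmoid(U[:, word_id] + W @ s_prev)
    y = softmax(V @ s)
    return s, y

# Usage: score a toy word-id sequence by accumulating log-probabilities,
# exactly how a language model would be evaluated for perplexity.
sequence = [3, 17, 42]  # hypothetical word ids
s = np.zeros(HIDDEN)
logprob = 0.0
for prev, nxt in zip(sequence[:-1], sequence[1:]):
    s, y = step(prev, s)
    logprob += np.log(y[nxt])
print(f"log P(sequence) = {logprob:.3f}")
```

Unlike an N-gram model, whose context is bounded by a fixed count window, the recurrent hidden state `s` carries information from the entire preceding history, which is the property the thesis exploits.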
