National Repository of Grey Literature — 2 records found
Time-Domain Neural Network Based Speaker Separation
Peška, Jiří ; Černocký, Jan (referee) ; Žmolíková, Kateřina (advisor)
This thesis deals with the use of convolutional neural networks for automatic speech separation in an acoustic environment. The goal is to implement a neural network following the TasNet architecture in the PyTorch framework, train it with various hyper-parameter values, and compare separation quality as a function of network size. In contrast to older architectures, which transformed the input mixture into a time-frequency representation, this architecture uses a convolutional autoencoder that transforms the input mixture into a non-negative representation optimized for speaker extraction. Separation is achieved by applying masks estimated in the separation module. This module consists of stacked convolutional blocks with increasing dilation, which helps model long-term temporal dependencies in the processed speech. The precision of the network is evaluated with the signal-to-distortion ratio (SDR), the perceptual evaluation of speech quality (PESQ), and short-time objective intelligibility (STOI). The Wall Street Journal dataset (WSJ0) was used for training and evaluation. Models trained with various hyper-parameter values allow us to observe the dependency between network size and SDR. After 60 epochs of training, the smaller network reached an SDR of 10.8 dB, while a bigger network reached 12.71 dB.
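The pipeline the abstract describes (convolutional encoder producing a non-negative representation, stacked dilated convolutional blocks estimating per-speaker masks, and a transposed-convolution decoder) can be sketched in PyTorch as below. This is a minimal illustrative sketch, not the thesis's implementation: the class name TinyTasNet and all hyper-parameter values (filter count, kernel size, number of blocks) are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn


class TinyTasNet(nn.Module):
    """Minimal TasNet-style separator: encoder -> dilated conv stack -> masks -> decoder."""

    def __init__(self, n_filters=64, kernel=16, n_speakers=2, n_blocks=4):
        super().__init__()
        stride = kernel // 2
        # Encoder: raw waveform -> non-negative latent representation (ReLU).
        self.encoder = nn.Sequential(
            nn.Conv1d(1, n_filters, kernel, stride=stride, bias=False),
            nn.ReLU(),
        )
        # Separation module: stacked 1-D conv blocks with increasing dilation
        # (1, 2, 4, 8, ...) to model long-term temporal dependencies.
        self.blocks = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(n_filters, n_filters, 3, padding=2 ** b, dilation=2 ** b),
                nn.PReLU(),
            )
            for b in range(n_blocks)
        )
        # Mask estimator: one mask per speaker over the latent representation.
        self.mask = nn.Conv1d(n_filters, n_filters * n_speakers, 1)
        # Decoder: masked representation -> time-domain waveform.
        self.decoder = nn.ConvTranspose1d(n_filters, 1, kernel, stride=stride, bias=False)
        self.n_speakers = n_speakers
        self.n_filters = n_filters

    def forward(self, mixture):            # mixture: (batch, 1, samples)
        w = self.encoder(mixture)          # (batch, n_filters, frames)
        h = w
        for block in self.blocks:
            h = h + block(h)               # residual connections keep frame count fixed
        masks = torch.sigmoid(self.mask(h))
        masks = masks.view(-1, self.n_speakers, self.n_filters, w.size(-1))
        # Apply each speaker's mask and decode back to the time domain.
        return torch.stack(
            [self.decoder(masks[:, s] * w) for s in range(self.n_speakers)],
            dim=1,
        )                                  # (batch, n_speakers, 1, samples)


model = TinyTasNet()
separated = model(torch.randn(3, 1, 8000))  # 3 mixtures, 1 channel, 8000 samples
```

With kernel 16 and stride 8, the encoder and decoder lengths are mutually inverse here, so each separated signal has the same number of samples as the input mixture; the dilated blocks use matching padding so the residual additions are shape-safe.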

See also: similar author names
1 Peška, Jan
5 Peška, Jaroslav
1 Peška, Josef