National Repository of Grey Literature
Robust Speaker Verification
Profant, Ján ; Novotný, Ondřej (referee) ; Matějka, Pavel (advisor)
The goal of this work is to analyze the impact of codec-degraded speech on a state-of-the-art speaker recognition system. Two feature extraction techniques are analyzed: Mel Frequency Cepstral Coefficients (MFCC) and a state-of-the-art system using bottleneck features together with MFCC. The speaker recognition system is based on i-vectors and Probabilistic Linear Discriminant Analysis (PLDA). We compared a scenario in which PLDA is trained only on clean data, a system to which noisy and reverberant data were added, and finally one that also includes codec-degraded speech. We evaluated the systems under matched conditions (data from the tested codec are seen by PLDA) and mismatched conditions (PLDA sees no data from the tested codec). We also experimented with a recently introduced channel adaptation technique, Within-class Covariance Correction (WCC). Adding transcoded data to PLDA training or applying WCC brings a clear benefit, with approximately the same gain, under both matched and mismatched conditions.
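For illustration, a minimal Python sketch of the kind of front end the abstract describes: MFCC extraction (here via librosa, an assumed tool; the thesis does not specify tooling) and a simple cosine comparison of utterance-level averages, which stands in for the i-vector/PLDA scoring actually used in the thesis.

```python
# Minimal sketch of an MFCC-based verification front end. Illustrative only:
# the thesis uses bottleneck features, i-vectors and PLDA, none of which are
# reproduced here. librosa and the 8 kHz sample rate are assumptions.
import librosa
import numpy as np

def extract_mfcc(path, n_mfcc=20):
    """Load an audio file and return a (frames, n_mfcc) MFCC matrix."""
    y, sr = librosa.load(path, sr=8000)  # narrowband telephone rate (assumption)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T

def cosine_score(emb_enroll, emb_test):
    """Cosine similarity between two utterance-level embeddings."""
    a = emb_enroll / np.linalg.norm(emb_enroll)
    b = emb_test / np.linalg.norm(emb_test)
    return float(np.dot(a, b))

# Toy usage: average MFCC frames into a crude utterance embedding and score it.
# (i-vector extraction and PLDA scoring would replace these two steps.)
# enroll = extract_mfcc("enroll.wav").mean(axis=0)
# test   = extract_mfcc("test.wav").mean(axis=0)
# print(cosine_score(enroll, test))
```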
Robust Speaker Verification with Deep Neural Networks
Profant, Ján ; Rohdin, Johan Andréas (referee) ; Matějka, Pavel (advisor)
The objective of this work is to study state-of-the-art speaker verification systems based on deep neural networks, so-called x-vectors, under various conditions, such as wideband and narrowband data, and to develop a system that is robust to unseen languages, specific noise, or speech codecs. The system takes a variable-length audio recording and maps it to a fixed-length embedding that is then used to represent the speaker. We compared our systems to BUT's 2016 submission to the Speakers in the Wild Speaker Recognition Challenge (SITW), which used the previously popular statistical model, i-vectors. Comparing single best systems, the recently published x-vectors achieved a more than 4.38 times lower Equal Error Rate on the SITW core-core condition than BUT's SITW submission. Moreover, we find that diarization substantially reduces the error rate on the SITW core-multi condition, where recordings contain multiple speakers, but we did not observe the same trend on the NIST SRE 2018 VAST data.
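For reference, a short sketch of how the Equal Error Rate cited above is commonly computed from verification scores. The trial labels and scores in the usage example are made up, and scikit-learn is an assumed dependency, not something the thesis prescribes.

```python
# Standard EER computation from verification trial scores: the operating point
# where the miss rate (false negatives) equals the false-alarm rate.
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels, scores):
    """labels: 1 = target (same-speaker) trial, 0 = non-target trial."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))  # point where miss rate ~= false-alarm rate
    return (fpr[idx] + fnr[idx]) / 2.0

# Toy usage with made-up scores:
# labels = np.array([1, 1, 0, 0, 1, 0])
# scores = np.array([2.3, 1.1, -0.4, 0.2, 0.9, -1.5])
# print(f"EER = {100 * equal_error_rate(labels, scores):.2f} %")
```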
