National Repository of Grey Literature
Emotional State Recognition Based on Speech Signal Analysis
Čermák, Jan ; Atassi, Hicham (referee) ; Smékal, Zdeněk (advisor)
The thesis focuses on the classification of emotional states in Matlab, using neural networks and a classifier based on a combination of Gaussian density functions. It deals with speech signal processing; prosodic and spectral features and MFCC coefficients were extracted from the signal. The work also evaluates the quality of the individual features, and the most suitable ones were selected to allow correct classification of emotional states. Two methods were used to identify the emotional states: the first was neural networks with differently selected parameters, and the second was the Gaussian mixture model (GMM). In both methods, a database of emotional utterances was divided into a training set and a test set, and the testing was speaker-independent. The work also compares the analysed methods and presents and compares their results. The conclusion proposes the best parameters and the best classifier for recognising the speaker's emotional state.
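As a rough illustration of the pipeline this abstract describes (frame-level MFCC extraction followed by one GMM per emotion, chosen by likelihood), here is a minimal Python sketch. The thesis itself works in Matlab; librosa and scikit-learn are stand-ins assumed here for illustration only, and the emotion labels, file layout, and model sizes are hypothetical.

import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(path, n_mfcc=13):
    # Load one utterance and return its frame-level MFCC matrix (frames x coefficients).
    signal, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc).T

def train_emotion_gmms(train_files, n_components=8):
    # Fit one GMM per emotion on the pooled MFCC frames of its training utterances.
    # train_files is assumed to look like {"anger": ["a1.wav", ...], "joy": [...], ...}.
    models = {}
    for emotion, paths in train_files.items():
        frames = np.vstack([mfcc_features(p) for p in paths])
        models[emotion] = GaussianMixture(n_components=n_components,
                                          covariance_type="diag").fit(frames)
    return models

def classify(path, models):
    # Score the test utterance under each emotion GMM and return the most likely emotion.
    frames = mfcc_features(path)
    return max(models, key=lambda emotion: models[emotion].score(frames))

For a speaker-independent evaluation, as in the thesis, the training and test utterances would come from disjoint sets of speakers.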
Acoustic Scene Classification from Speech
Dobrotka, Matúš ; Glembek, Ondřej (referee) ; Matějka, Pavel (advisor)
The topic of this thesis is the classification of audio recordings into 15 acoustic scene classes that represent common scenes and places where people find themselves on a regular basis. The thesis describes two approaches, based on GMMs and i-vectors, and a fusion of both. The best GMM system, evaluated on the evaluation dataset of the DCASE Challenge, scores 60.4%; the best i-vector system scores 68.4%. The fusion of the GMM system and the best i-vector system achieves a score of 69.3%, which would rank 20th among the 98 systems submitted to the DCASE 2017 Challenge from all over the world.
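The reported fusion is a score-level combination of the two systems. A toy Python sketch of one common form, a weighted sum of per-class scores over the 15 scene classes, is shown below; the weight, the array shapes, and the random scores are illustrative assumptions, not values taken from the thesis.

import numpy as np

def fuse_scores(gmm_scores, ivec_scores, weight=0.5):
    # Combine two (n_recordings x 15) score matrices by a weighted sum
    # and return the index of the winning scene class for each recording.
    fused = weight * gmm_scores + (1.0 - weight) * ivec_scores
    return np.argmax(fused, axis=1)

# Example: 3 recordings, 15 acoustic scene classes, random scores for illustration.
rng = np.random.default_rng(0)
gmm = rng.normal(size=(3, 15))
ivec = rng.normal(size=(3, 15))
print(fuse_scores(gmm, ivec, weight=0.4))

In practice the fusion weight would be tuned on a held-out development set rather than fixed by hand.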
