National Repository of Grey Literature. Found 4 records. Search took 0.01 seconds.
Methods for Realtime Voice Deepfakes Creation
Alakaev, Kambulat ; Pleško, Filip (reviewer) ; Malinka, Kamil (supervisor)
This thesis explores the feasibility of real-time voice deepfake generation using open-source tools. Experiments showed that the generation rate of voice deepfakes depends on the computing power of the device running the speech synthesis tool. A deep learning model capable of generating speech in near real time was identified; however, limitations in the tool containing this model prevented it from accepting continuous input for real-time generation, so a program was developed to overcome them. The quality of the generated deepfakes was evaluated using both voice deepfake detection models and an online human survey. The results revealed that while the model could deceive detection models, it did not fool human listeners. This research highlights the accessibility of open-source voice synthesis tools and their potential for misuse for fraudulent purposes.
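The abstract describes wrapping a bounded-input speech model so it can handle continuous text. A minimal sketch of that idea, assuming a hypothetical `synthesize` function that turns one short utterance into audio samples (the real tool and its API are not named in the abstract): split the incoming text at sentence boundaries and yield each chunk's audio as soon as it is ready, rather than synthesizing the whole input at once.

```python
import re
from typing import Callable, Iterator, List


def chunk_sentences(text: str) -> List[str]:
    """Split free-running input into sentence-sized pieces, since the
    underlying model (hypothetically) only accepts bounded utterances."""
    return [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]


def stream_speech(text: str,
                  synthesize: Callable[[str], list]) -> Iterator[list]:
    """Yield audio for each chunk as soon as it is generated, so playback
    of chunk N can overlap with generation of chunk N+1."""
    for sentence in chunk_sentences(text):
        yield synthesize(sentence)
```

In a real pipeline the consumer would play each yielded buffer while the generator is still working on the next sentence, which is what makes near-real-time output possible when per-sentence generation is faster than playback.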
Support Tools for Verifying Human Ability to Detect Deepfakes
Potančok, Patrik ; Malinka, Kamil (reviewer) ; Firc, Anton (supervisor)
The aim of this thesis is to create a web application that tests the human ability to detect deepfake recordings while collecting respondent data such as date of birth, native language, proficiency in other languages, how many times and for how long they listened to each recording, and the number of correct answers. The application includes management of recordings and users and the ability to export user data in CSV format. It was implemented using Laravel (PHP), Vue.js, and MySQL.
Differential-based deepfake speech detection
Staněk, Vojtěch ; Černocký, Jan (reviewer) ; Firc, Anton (supervisor)
Deepfake speech technology, which can create highly realistic fake audio, poses significant challenges, from enabling multi-million-dollar scams to undermining the reliability of legal evidence. This work introduces a novel method for detecting such deepfakes by leveraging bonafide speech samples. Unlike previous strategies, the approach uses trusted ground-truth speech samples to identify spoofs, providing critical information that common methods lack. By comparing the bonafide samples with potentially manipulated ones, the aim is to determine the authenticity of the speech effectively and reliably. Results suggest that this approach could be a valuable tool for identifying deepfake speech, especially recordings created using Voice Conversion techniques, offering a new line of defence against this emerging threat.
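The core idea, as the abstract describes it, is to score a suspect recording against a trusted bonafide sample of the same speaker rather than judging it in isolation. A minimal sketch under stated assumptions: the thesis presumably uses a learned speaker representation, but here a simple average log-magnitude spectrum stands in as a hypothetical embedding, and cosine similarity serves as the differential score (higher means the suspect is closer to the trusted reference).

```python
import numpy as np


def spectral_embedding(signal: np.ndarray, n_fft: int = 256) -> np.ndarray:
    """Hypothetical stand-in for a learned speaker embedding: the average
    log-magnitude spectrum over fixed-size frames of the signal."""
    frames = [signal[i:i + n_fft] for i in range(0, len(signal) - n_fft, n_fft)]
    spectra = np.abs(np.fft.rfft(np.stack(frames), axis=1))
    return np.log1p(spectra).mean(axis=0)


def differential_score(bonafide: np.ndarray, suspect: np.ndarray) -> float:
    """Cosine similarity between the trusted and the suspect embedding;
    a low score suggests the suspect sample does not match the reference."""
    a, b = spectral_embedding(bonafide), spectral_embedding(suspect)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

A detector built this way would threshold the score: samples scoring well below matched bonafide pairs are flagged as potential spoofs. The actual features, model, and thresholding in the thesis are not specified in this abstract.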
Assessing the Human Ability to Recognize Synthetic Speech
Prudký, Daniel ; Malinka, Kamil (reviewer) ; Firc, Anton (supervisor)
This work responds to the development of artificial intelligence and its potential misuse in cybersecurity. It aims to test and evaluate the human ability to recognize a subset of synthetic speech known as voice deepfakes. The thesis describes an experiment in which we communicated with respondents via voice messages. Under a cover story about testing the user-friendliness of voice messages, we secretly sent each respondent a pre-prepared deepfake recording during the conversation and observed their reactions, their knowledge of deepfakes, and how many of them correctly identified which message was manipulated. The results showed that none of the respondents reacted in any way to the fraudulent deepfake message, and only one retrospectively admitted to noticing something unusual. On the other hand, 96.8% of respondents correctly identified the deepfake message after the experiment. Thus, although the deepfake recording was clearly identifiable among the others, no one reacted to it at the time. The thesis concludes that the human ability to recognize voice deepfakes is not at a level we can trust: people find it very difficult to distinguish real voices from fake ones, especially when they are not expecting them.
