Národní úložiště šedé literatury
Applicability of Deepfakes in the Field of Cyber Security
Firc, Anton ; Homoliak, Ivan (reviewer) ; Malinka, Kamil (supervisor)
Deepfake technology is still on the rise; many techniques and tools for deepfake creation are being developed and publicly released. These techniques are used for both legitimate and illicit purposes. The illicit usage creates a need for the development and continuous improvement of detection tools and techniques, as well as for educating the broad public about the dangers this technology presents. One unexplored area of illicit usage is using deepfakes to spoof voice authentication. Opinions on the feasibility of deepfake-powered attacks on the voice biometrics systems that provide voice authentication are mixed, and scientific evidence is minimal. The aim of this work is to research how ready current voice biometrics systems are to face deepfakes. The executed experiments show that voice biometrics systems are vulnerable to deepfake-powered attacks. Because almost all publicly available models and tools are tailored to synthesizing English, one might assume that using a different language would mitigate these vulnerabilities, but as this work shows, synthesizing speech in any language is not that complicated. Finally, measures to mitigate the threats posed by deepfakes are proposed, such as using text-dependent verification, which proved to be more resilient against deepfakes.
Resilience of Biometric Authentication of Voice Assistants against Deepfakes
Šandor, Oskar ; Firc, Anton (reviewer) ; Malinka, Kamil (supervisor)
With the rise of deepfake technology, imitating a stranger's voice has become much easier. It is no longer necessary to have a professional impersonator imitate a person's voice in order to deceive a human or a machine. Attackers need only a few recordings of a person's voice, regardless of their content, to create a voice clone using online or open-source tools. With such a clone, they can create recordings with content the person may have never said. These recordings can be misused, for example, for unauthorized use of voice-assistant devices. The aim of this work is to determine whether voice assistants can recognize synthesized recordings (deepfakes). Experiments conducted in this thesis show that deepfakes created in a matter of minutes can spoof speaker recognition in voice assistants and can be used to carry out several attacks.
