National Repository of Grey Literature: 74 records found (showing records 11-20).
Ethical Hacking and Cyber Security in Nigeria Telecommunication Industry: Issues and Solution
Akinyemi, Adetunji Akinfemiwa ; Střítecký, Vít (advisor) ; Špelda, Petr (referee)
Ethical hacking and cyber security are crucial topics in today's increasingly digital world. The Nigerian telecommunication industry is no exception and must take measures to secure its information systems. This study examined the issues and solutions related to ethical hacking and cyber security in the Nigerian telecommunication industry. A descriptive and inferential study design was adopted. Data were collected from a primary source using a self-administered questionnaire completed by 62 participants from the Nigerian telecommunication industry. The findings revealed that the industry faces various issues related to ethical hacking and cyber security, such as a lack of technical expertise, insufficient budget allocation, and inadequate cyber security policies. However, the industry is addressing these issues by investing in employee cyber security training and certification, increasing budget allocation, and implementing strict cyber security policies and guidelines. In conclusion, the Nigerian telecommunication industry recognizes the importance of ethical hacking and cyber security and is addressing its challenges. The study highlights the need for the industry to continue investing in cyber security measures and to stay updated with the latest...
How GPT-3 Can Augment Disinformation Campaigns
Saffel, William ; Střítecký, Vít (advisor) ; Špelda, Petr (referee)
This dissertation explores how artificial intelligence, and the natural language processor GPT-3 in particular, can be used to augment disinformation campaigns. As disinformation campaigns grow in complexity and are used regularly in modern conflicts, and as artificial intelligence grows in capability and accessibility, AI is becoming an increasingly plausible means of augmenting these campaigns. In this exploratory case study, I examine two disinformation campaigns in the war in Ukraine: the campaign around Nazism in Ukraine and the campaign around the Bucha Massacre. Each case is analyzed through the lens of tasks that GPT-3 can perform. This dissertation finds that AI indeed has high potential for augmenting disinformation campaigns in various ways. It also finds that narratives can be distilled into "narrative bullet points", which can be a useful and effective tool for training GPT-3 to be more effective at creating disinformation.
Failure Modes of Large Language Models
Milová, Soňa ; Špelda, Petr (advisor) ; Střítecký, Vít (referee)
This diploma thesis addresses the failure modes of Large Language Models (LLMs) from an ethical, moral and security point of view. The empirical method is document analysis, which defines the existing body of work and the process by which failure modes are selected from it and analysed further. The thesis looks closely at OpenAI's Generative Pre-trained Transformer 3 (GPT-3) and its improved successor, Instruct Generative Pre-trained Transformer (IGPT). It initially investigates model bias, privacy violations and fake news as the main failure modes of GPT-3. It then uses the concept of technological determinism as an ideology to evaluate whether IGPT has been effectively designed to address all of the aforementioned concerns. The core argument of the thesis is that the utopian and dystopian views of technological determinism need to be combined with the additional aspect of human control. LLMs need human involvement to help machines better understand context, mitigate failure modes and, of course, ground them in reality. The contextualist view is therefore portrayed as the most accurate lens through which to look at LLMs, as it argues they depend on the responsibilities,...
Treating social media platforms as public utility: the case of the DSA package
Rybnikár, Jakub ; Špelda, Petr (advisor) ; Střítecký, Vít (referee)
Social media platforms have become deeply entrenched in contemporary social reality. In response, there has been a surge in scholarship investigating the numerous harms and risks such technoscientific artifacts pose to society. To tackle these risks, the European Union has put forward a set of policy initiatives and legislative proposals intended to provide a comprehensive response to an increasingly fragile security environment. Despite recent efforts to take on this emerging security threat, there has been very little theoretical and empirical scholarship on the intersection of security, technology and law. One of the most intriguing, yet heavily understudied, areas of this intersection is the conceptual understanding of social media platforms. Building on recent insights from security, media and legal scholars, this thesis seeks to introduce a new agenda to the discipline of security studies by applying a novel concept, that of the public utility, to social media platforms, thereby producing crucial empirical evidence. Utilizing the multiple streams framework, the thesis performs a qualitative content analysis of EU stakeholders' contributions to the European Commission consultation on the Digital Services Act package. The analysis of the selected texts reveals a significant overlap...
Safe and Secure High-Risk AI: Evaluation of Robustness
Binterová, Eliška ; Špelda, Petr (advisor) ; Střítecký, Vít (referee)
The aim of the thesis is to examine Invariant Risk Minimization (IRM), an existing method for achieving model robustness, and to assess whether it could serve as a means of conformity assessment under the emerging legislative framework of the European Artificial Intelligence Act. Research shows that many cases of erroneous performance in AI systems stem from machine learning models that lack robustness to changes in data distributions and are thus unable to generalize properly to new environments. To achieve reliable performance, models must exhibit a certain level of robustness to these changes. IRM is a relatively new method designed to achieve such outcomes, which aligns closely with the objectives of the EU AI Act and its aim of trustworthy AI. Through an analysis of existing empirical and theoretical results, the thesis therefore examines the congruence between the IRM method and the requirements of the EU AI Act, and asks whether IRM can serve as a universal method for ensuring safe and secure AI compliant with European legal requirements.
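As background to the method this record examines, the practical IRMv1 objective can be sketched as a per-environment risk plus a penalty on the gradient with respect to a dummy classifier scale. The sketch below is a toy illustration only, assuming a linear predictor and squared loss; the data, lambda value, and function names are illustrative and not drawn from the thesis.

```python
import numpy as np

def irm_objective(envs, theta, lam=1.0):
    """Sum of per-environment risks plus the IRMv1 invariance penalty.

    envs: list of (X, y) pairs, one per training environment.
    theta: parameters of a linear predictor.
    The penalty is the squared gradient of each environment's risk with
    respect to a dummy classifier scale w, evaluated at w = 1.
    """
    total = 0.0
    for X, y in envs:
        pred = X @ theta              # prediction at w = 1
        resid = pred - y
        risk = np.mean(resid ** 2)    # per-environment squared-loss risk
        # d/dw mean((w*pred - y)^2) at w = 1  ->  2 * mean(resid * pred)
        grad_w = 2.0 * np.mean(resid * pred)
        total += risk + lam * grad_w ** 2
    return total

# Two toy environments sharing an invariant mechanism but with
# different noise levels (purely illustrative data).
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(50, 3)), rng.normal(size=(50, 3))
true_theta = np.array([1.0, -2.0, 0.5])
envs = [(X1, X1 @ true_theta + 0.1 * rng.normal(size=50)),
        (X2, X2 @ true_theta + 0.5 * rng.normal(size=50))]

value = irm_objective(envs, theta=np.zeros(3))
```

A predictor close to the invariant mechanism yields both low risk and a small penalty in every environment, which is the intuition behind using such a criterion as evidence of robustness to distribution shift.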
The End of Pluralism in Hungary: Hungarian Propaganda in Action
Nagy, Kitti ; Střítecký, Vít (advisor) ; Špelda, Petr (referee)
This thesis studies the extent of propaganda in Hungary over a six-year period, from 2016 to 2021. To examine the propaganda, an unsupervised machine learning technique, topic modeling, was used to analyze six years of news articles from Magyar Nemzet. The analysis revealed the trending topics over the years, as well as interesting combinations of keywords in the articles. During the first three years the newspaper was independent; this changed in 2018 when, after a change of ownership, it became openly pro-government and started disseminating propaganda. The thesis demonstrates the current state of the Hungarian media, in which most news agencies have been acquired by the government. The analysis of Magyar Nemzet reveals that propaganda is used as a tool to strengthen the position of Viktor Orbán and to justify the government's actions.
Beyond the algorithms: Evaluating the risks of deploying machine learning in domestic counterterrorism: A comparison between predictive policing and counterterrorism activities
Bicknese, Emma Lisa ; Špelda, Petr (advisor) ; Kaczmarski, Marcin (referee)
Over the last decades, Machine Learning (ML) has been implemented in nearly every part of our daily lives. Whereas this development has been heavily discussed in the area of predictive policing (PP), there has been little public debate about ML's implementation in domestic counterterrorism (CT). This is because the counterterrorism domain is highly non-transparent: classified information forms an obstacle to proper scholarly analysis. The thesis aims to contribute to the public debate on the implementation of ML in CT by asking the following research question: examining critiques provided by scholars on predictive policing, what are the risks of deploying machine learning tools in domestic counterterrorism? A comparative case study method supplemented by scenario-building allows for an analysis of the risks of ML in CT. More specifically, by using PP arguments as a proxy for CT, it can be shown that the technical and socio-technical risks, in most cases, also hold for counterterrorism tools. The analysis highlights those risks by exploring PP arguments for three CT instruments: individual risk assessments, biometric tools (most notably facial recognition technology), and general models that predict details of future terrorist attacks. It was found that only the last...
Security as Simulacra: Surveilling gendered bodies and constructing security out of computation
Shlifer, Hallie Quinn ; Kaczmarski, Marcin (advisor) ; Špelda, Petr (referee)
Artificial Intelligence has become ubiquitous across the field of security and defence, especially in applications alongside surveillance. In this dissertation I interrogate how, through its application and the discourse surrounding it, AI impacts the securitization of Muslim women's veiling practices within the EU. I argue that the discourse constructed around the use of AI, in official documents written and commissioned by the various EU bodies, forms a cohesive body. Using discourse analysis, I trace the patterns throughout the documents and connect them to manifestations in rhetoric on the wearing of veils in public at the EU institutional level. Through this method, I conclude that the discourses of AI surveillance manifest in the way laws are applied to women who wear Islamic veils, and in their identification as a security threat to Europe as both a political and conceptual unit. I further find that the discourses of security and science intermingle to form a rigid notion of risk that further corners women already marginalized by European security frameworks.
Framing artificial intelligence: The interplay between AI policies and security in the European Union
Leuca, Stefania Bianca ; Peacock, Timothy (advisor) ; Špelda, Petr (referee)
Artificial Intelligence (AI) is increasingly embedded in our lives, and the literature exploring the new technology is vast. However, there is a lack of resources addressing how the technology is framed at the level of the European Union (EU). Specifically, few studies assess whether there are differences in how the institutions frame AI policies, and scholars also overlook the potential implications of AI for the security of the Union. The present study seeks to fill these gaps by examining how the European Commission (EC) and the European Parliament (EP) frame AI security policies, and whether the two institutions differ in how they do so. To this end, the research is split into two main sections. The first section explores how the two institutions frame AI security using a combination of the Policy Framing approach and qualitative content analysis. This research design was applied to 10 official documents released by the EC and EP between 2017 and 2021. On the one hand, the outcome indicates that the EC frames AI policies through the perspective of three security areas, namely economic, social, and political. On the other hand, the EP's framing of AI policies considers the same areas of security while also adding...
Total ban or responsible use? A policy survey to better regulate the use of AI-powered video surveillance in law enforcement in the European Union
Rolland, Apolline Mireille Karin ; Špelda, Petr (advisor) ; Glouftsios, Georgios (referee)
AI-powered video surveillance is a heated issue in the European Union and has given rise to a highly polarised debate. On one hand, proponents point to its ability to make cities safer and better protect people; on the other, opponents are concerned about the technology's threat to fundamental rights and individual freedoms, such as the right to privacy, and about the risk of discrimination. European institutions have begun attempting to regulate the technology but have so far struggled to develop a broad regulation that accounts for the diversity of applications of AI-powered video surveillance, protects citizens, and encourages innovation at the same time. This dissertation therefore investigates how to implement responsible use of AI-powered video surveillance for predictive policing purposes. The analysis is divided into two parts corresponding to the two main branches of application: object-centred and person-centred AI-powered video surveillance. It first uses securitisation theory to situate the debate. Next, it uses document analysis and coding to analyse the qualitative data, which encompasses policy and technical documents to allow for a nuanced approach to the issue that accounts for the...
