National Repository of Grey Literature (3 records found)
Creating Adversarial Examples in Machine Learning
Červíčková, Věra ; Pilát, Martin (advisor)
This thesis examines adversarial examples in machine learning, specifically in the image classification domain. State-of-the-art deep learning models are able to recognize patterns better than humans. However, we can significantly reduce a model's accuracy by adding imperceptible, yet intentionally harmful noise. This work investigates various methods of creating adversarial images as well as techniques that aim to defend deep learning models against these malicious inputs. We choose one of the contemporary defenses and design an attack that utilizes evolutionary algorithms to deceive it. Our experiments show an interesting difference between adversarial images created by evolution and images created with knowledge of gradients. Last but not least, we test the transferability of the samples we created across various deep learning models.
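For illustration, a minimal sketch of the classic gradient-based attack (the fast gradient sign method of Goodfellow et al.), the kind of "knowledge of gradients" approach the abstract contrasts with its evolutionary attack. This is not the thesis's own algorithm; the model and the batched image/label tensors are assumed PyTorch objects.

    # Minimal FGSM sketch (PyTorch assumed). `image` is a batched tensor of
    # shape (1, C, H, W) with values in [0, 1]; `label` has shape (1,).
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=0.03):
        """Return `image` plus imperceptible, intentionally harmful noise."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # One step of size epsilon along the gradient sign, kept in [0, 1].
        return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

A small epsilon keeps the noise visually imperceptible while often being enough to flip the model's prediction, which is exactly the failure mode the abstract describes.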
Detecting Misleading Features in Data Visualization
Roubalová, Hana ; Vomlelová, Marta (advisor) ; Červíčková, Věra (referee)
This thesis explores the identification and detection of misleading elements in data visualizations. The theoretical portion focuses on understanding and recognizing the various types of misleading features commonly encountered in scientific figures. The implementation introduces an application designed to detect colorblind-unfriendly graphs, together with an analysis of various detection algorithms. The thesis raises awareness of misleading visualizations and demonstrates how software can simplify the detection of misleading features for the everyday user, offering insights into reducing their negative effects.
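As a sketch of how such a detector can work in principle: simulate how a red-green colorblind viewer might perceive each palette color, then flag pairs that end up too close to distinguish. The simulation below is a deliberately crude stand-in (averaging the R and G channels); a real tool would use a proper color-vision-deficiency model such as Brettel or Viénot, and the thesis's actual algorithm may differ. All names and the threshold are illustrative.

    # Crude screening heuristic: collapse the red-green axis, then flag
    # palette colours that become nearly indistinguishable.
    import itertools

    def simulate_red_green_loss(rgb):
        """Crude deuteranopia stand-in: merge the R and G channels."""
        r, g, b = rgb
        mix = (r + g) / 2.0
        return (mix, mix, b)

    def euclidean(c1, c2):
        return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

    def confusable_pairs(palette, threshold=0.1):
        """Indices of colour pairs a red-green colorblind reader may confuse."""
        return [
            (i, j)
            for (i, c1), (j, c2) in itertools.combinations(enumerate(palette), 2)
            if euclidean(simulate_red_green_loss(c1),
                         simulate_red_green_loss(c2)) < threshold
        ]

    # Saturated red and green collapse onto the same simulated colour and are
    # flagged as pair (0, 1); blue survives the simulation.
    palette = [(0.8, 0.1, 0.1), (0.1, 0.8, 0.1), (0.1, 0.2, 0.9)]
    print(confusable_pairs(palette))  # -> [(0, 1)]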
Knowledge Extraction with Deep Belief Networks
Bronec, Jan ; Mrázová, Iveta (advisor) ; Červíčková, Věra (referee)
Deep Belief Networks (DBNs) are multi-layered neural networks constructed as a series of Restricted Boltzmann Machines stacked on top of each other. As with several other types of neural networks, increasing the size of a DBN generally improves its performance, but at the cost of increased computational complexity and memory requirements. It is usually necessary to reduce a deep neural network's size to deploy it on a mobile device. To address this issue, we focus on a size-reduction technique called pruning, which aims to zero out a large portion of the network's weights without significantly affecting its accuracy. We apply selected pruning algorithms to DBNs and evaluate their performance on both grayscale and color images. We also investigate the performance of the so-called confidence rules extracted from a trained DBN. These rules offer a knowledge representation that is easy to interpret, and we examine whether they also provide an accurate low-cost alternative to the original network.
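A minimal sketch of magnitude pruning, the simplest member of the family of pruning algorithms the abstract refers to: zero out the fraction of weights with the smallest absolute values. The layer shape below is made up, and the thesis may use different criteria or schedules.

    # Magnitude pruning sketch: keep only the largest-magnitude weights.
    import numpy as np

    def magnitude_prune(weights, sparsity=0.8):
        """Return a copy of `weights` with the smallest `sparsity` fraction zeroed."""
        flat = np.abs(weights).ravel()
        k = int(sparsity * flat.size)
        if k == 0:
            return weights.copy()
        threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
        return np.where(np.abs(weights) <= threshold, 0.0, weights)

    rng = np.random.default_rng(0)
    w = rng.normal(size=(784, 500))              # e.g. one RBM layer of a DBN
    w_pruned = magnitude_prune(w, sparsity=0.8)
    print(1.0 - np.count_nonzero(w_pruned) / w.size)  # ~0.8 of weights are zero

In practice the pruned network is then fine-tuned so that the remaining weights compensate for the removed ones, which is how the accuracy loss stays small.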
