National Repository of Grey Literature
Knowledge representation in deep neural networks
Georgiev, Georgi Stoyanov; Mrázová, Iveta (advisor); Pešková, Klára (referee)
Convolutional neural networks (CNNs) are known to outperform humans in numerous image classification and object detection tasks. They also excel at captioning, image segmentation, and feature extraction. CNNs recognize objects accurately and generalize well, yet analyzing their decision-making process remains challenging. The so-called heat maps and their variants, such as saliency, SmoothGrad, and Grad-CAM maps, provide a means to study their internal knowledge representation. Techniques such as t-SNE, UMAP, and ivis, on the other hand, can help visualize the multi-dimensional features formed in different convolutional layers. Inspired by the results obtained when analyzing the capabilities of CNNs, we introduce two novel size-reduction algorithms: Iterative Top Cut and Iterative Feature Top Cut. Both algorithms successively remove the layers of a CNN, starting from its top, until a stopping criterion is activated. The stopping criteria involve the model's performance and the formed internal knowledge representation. In particular, the Iterative Top Cut method exceeds our expectations by shrinking some models, such as EfficientNetV2S, up to 3.15 times while preserving their accuracy on the Cars-196 dataset. Moreover, the algorithm generalizes well and proves to be stable.
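The abstract only outlines the layer-removal loop, so the following is a minimal, hypothetical Python sketch of how such an iterative top cut could proceed on a Keras-style functional model. It assumes a purely accuracy-based stopping criterion; the representation-based criterion mentioned in the abstract is omitted, and the helper names, fine-tuning schedule, and tolerance `tol` are illustrative assumptions rather than the thesis's actual implementation.

```python
import tensorflow as tf


def iterative_top_cut(base_model, train_ds, val_ds, num_classes, tol=0.01):
    """Cut layers from the top of `base_model` while validation accuracy holds."""

    def head_accuracy(features):
        # Attach a fresh classification head to the truncated backbone,
        # briefly fine-tune it, and report validation accuracy.
        x = tf.keras.layers.GlobalAveragePooling2D()(features)
        outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
        model = tf.keras.Model(base_model.input, outputs)
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(train_ds, epochs=3, verbose=0)   # illustrative schedule
        return model, model.evaluate(val_ds, verbose=0)[1]

    # Candidate cut points: 4-D feature maps, ordered bottom to top.
    cut_points = []
    for layer in base_model.layers:
        try:
            if len(layer.output.shape) == 4:
                cut_points.append(layer.output)
        except AttributeError:        # skip layers with multiple outputs
            continue

    # Baseline: a head trained on the deepest (uncut) feature map.
    best_model, baseline_acc = head_accuracy(cut_points[-1])

    # Successively remove layers from the top until accuracy degrades beyond `tol`.
    for features in reversed(cut_points[:-1]):
        model, acc = head_accuracy(features)
        if acc < baseline_acc - tol:   # stopping criterion is activated
            break
        best_model = model             # keep the shallower, equally accurate model

    return best_model
```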
