National Repository of Grey Literature
Safe and Secure High-Risk AI: Evaluation of Robustness
Binterová, Eliška ; Špelda, Petr (advisor) ; Střítecký, Vít (referee)
The aim of this thesis is to examine Invariant Risk Minimization (IRM), an existing method for achieving model robustness, and to assess whether it could serve as a means of conformity assessment under the emerging legislative framework of the European Artificial Intelligence Act. Research shows that many cases of erroneous performance in AI systems stem from machine learning models that lack robustness to changes in data distributions and therefore fail to generalize properly to new environments. To achieve reliable performance, models must exhibit a certain level of robustness to these changes. IRM is a relatively new method designed to achieve such outcomes, which aligns closely with the EU AI Act's objective of trustworthy AI. Through an analysis of existing empirical and theoretical results, the thesis therefore examines the congruence between the IRM method and the requirements of the EU AI Act and asks whether IRM can serve as a universal method for ensuring safe and secure AI compliant with European legal requirements.
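To make the method concrete, below is a minimal sketch of the IRMv1-style training objective (as introduced by Arjovsky et al., 2019, which the thesis examines), assuming a squared loss and scalar per-example features; the function names and the NumPy formulation are illustrative assumptions, not the thesis's own implementation. The key idea is a penalty on the gradient of each environment's risk with respect to a fixed "dummy" classifier w = 1.0: the penalty is zero only when the same predictor is simultaneously optimal across all training environments.

```python
import numpy as np

def irm_penalty(phi_x: np.ndarray, y: np.ndarray) -> float:
    """IRMv1-style penalty for a squared loss (illustrative sketch).

    For the environment risk R^e(w) = mean((w * phi_x - y)^2), we take
    the gradient w.r.t. the scalar dummy classifier w, evaluated at
    w = 1.0, and return its square. For squared loss this gradient has
    a closed form, so no autodiff library is needed here.
    """
    # dR/dw at w = 1.0:  mean(2 * (phi_x - y) * phi_x)
    grad = np.mean(2.0 * (phi_x - y) * phi_x)
    return float(grad ** 2)

def irm_objective(envs: list[tuple[np.ndarray, np.ndarray]], lam: float = 1.0) -> float:
    """Sum over environments of (empirical risk + lam * invariance penalty).

    `envs` is a list of (phi_x, y) pairs, one per training environment,
    where phi_x are the (here scalar) learned representations.
    """
    total = 0.0
    for phi_x, y in envs:
        risk = float(np.mean((phi_x - y) ** 2))
        total += risk + lam * irm_penalty(phi_x, y)
    return total
```

An invariant representation (phi_x matching y in every environment) makes both the risks and the penalties vanish, while a representation that is predictive in one environment but not another incurs a positive penalty — this is the robustness-to-distribution-shift property the thesis evaluates against the AI Act's requirements.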
