
Machine Learning Security in Practice


We co-authored a paper on the occurrence of attacks on AI systems, which was accepted for publication in IEEE Transactions on Information Forensics and Security.



The Challenge


A large body of academic work focuses on machine learning security or adversarial machine learning (AML). These works investigate how machine learning (ML) can be circumvented and exploited by an attacker.


For example, an attacker can tamper with the training data, yielding a model with degraded performance or one that reacts to small, attacker-specified patterns in the input. Alternatively, the attacker can slightly alter test data to change the output of an ML model. In addition, an ML model may leak its training data or be easily copied when its outputs are freely exposed.
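
To make the test-time setting concrete, the following is a minimal sketch (in Python with PyTorch) of the well-known Fast Gradient Sign Method, one standard way an attacker can slightly alter a test input. The toy model, random input, and epsilon value are illustrative placeholders, not artifacts from the paper.

# Illustrative sketch of an evasion attack (FGSM). The model and data
# below are toy placeholders, not taken from the study.
import torch
import torch.nn as nn

# A small classifier standing in for a deployed ML model.
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 20, requires_grad=True)  # a "test" input
y = torch.tensor([0])                       # its true label

# The attacker nudges the input along the sign of the loss gradient,
# producing a small perturbation that can change the model's output.
loss = loss_fn(model(x), y)
loss.backward()
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).detach()

print("original prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())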


Many of the settings studied in ML security can be criticized as rather artificial. Even these artificial settings, however, are hard to defend against. One possible reason is that, although the current usage of ML in security and threat modeling has been criticized, there is little work on ML security in the real world.


QuantPi’s Contribution:


QuantPi’s Head of Policy & Grants, Lukas Bieringer, co-authored a paper on machine learning security in industry. Together with Kathrin Grosse from EPFL, Tarek Besold from Eindhoven University of Technology, Katharina Krombholz from the CISPA Helmholtz Center for Information Security, and Battista Biggio from the University of Cagliari, he conducted a quantitative study with 139 industrial practitioners.


The authors analyzed the occurrence of attacks and practitioners’ concern about them, and evaluated statistical hypotheses about which factors influence threat perception and exposure. The results shed light on real-world attacks on deployed machine learning.


On the organizational level, no predictors of threat exposure were found in the sample, but the number of implemented defenses depended on exposure to threats or on the expected likelihood of becoming a target. The research provides a detailed analysis of practitioners’ replies on the relevance of individual machine learning attacks, unveiling complex concerns such as unreliable decision making, leakage of business information, and the introduction of bias into models.
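
The paper’s exact statistical procedure is not reproduced here, but the following sketch (in Python with SciPy) shows one common way such a hypothesis could be tested: comparing the number of implemented defenses between respondents who did and did not report threat exposure. All survey numbers below are fabricated toy values for illustration only.

# Hypothetical illustration only: the values below are made-up toy data,
# not results from the paper, and the paper's tests may differ.
from scipy.stats import mannwhitneyu

# Number of implemented defenses per respondent (fabricated values).
defenses_exposed     = [3, 4, 2, 5, 4, 3, 6]  # reported threat exposure
defenses_not_exposed = [1, 2, 0, 2, 1, 3, 1]  # reported no exposure

# Mann-Whitney U: a non-parametric test suited to small survey counts,
# requiring no normality assumption. The one-sided alternative asks
# whether exposed respondents implement more defenses.
stat, p_value = mannwhitneyu(defenses_exposed, defenses_not_exposed,
                             alternative="greater")
print(f"U = {stat}, p = {p_value:.4f}")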


