
Defending ML Models Against API-Based Attacks


Our team developed a framework that optimizes the utility-privacy trade-off when providing confidence scores as model outputs to users.



The Challenge:


Artificial intelligence is now used across many industries and is of growing importance to the economy. AI systems are built on mathematical models, which often form the basis for products and entire business models. For trustworthy and secure use of AI, it is therefore particularly important to protect these models from manipulation and theft. The research community discusses threats to such systems in terms of a number of specific attack models: using only a model's prediction API, third parties can unlawfully copy AI models, manipulate them, or significantly impair their functionality. This poses economic risks for the affected companies and can have serious consequences for the individuals whose data the models process.
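
To make the threat concrete, here is a minimal model-extraction sketch in Python. Everything in it is illustrative: the linear "victim" classifier, the random query distribution, and the distillation loop are our own assumptions, not a real deployment or QuantPi's setup. It shows how an attacker who can only query an API for confidence scores can train a surrogate model that closely mimics the victim.

import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical victim hidden behind an API: a fixed linear classifier.
D, C = 5, 3                                # input dim, number of classes
W_victim = rng.normal(size=(D, C))

def api_predict(x):
    # The attacker's only access: confidence scores for each query.
    return softmax(x @ W_victim)

# Attack: query the API, then distill a surrogate on the soft labels.
X_query = rng.normal(size=(2000, D))       # attacker-chosen queries
P_victim = api_predict(X_query)            # confidence scores returned

W_surrogate = np.zeros((D, C))
for _ in range(300):                       # gradient descent on cross-entropy
    P_sur = softmax(X_query @ W_surrogate)
    W_surrogate -= 0.5 * X_query.T @ (P_sur - P_victim) / len(X_query)

# The stolen copy now agrees with the victim on most fresh inputs.
X_test = rng.normal(size=(1000, D))
agreement = (api_predict(X_test).argmax(axis=1)
             == softmax(X_test @ W_surrogate).argmax(axis=1)).mean()
print(f"surrogate agrees with victim on {agreement:.1%} of test inputs")

Because full-precision confidence scores expose the model's decision boundary in detail, even a modest query budget can suffice to replicate its behavior, which is exactly why the scores an API releases need to be designed carefully.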


QuantPi’s Contribution:


In this project, we are developing a technology that counteracts the threats outlined above. Using machine learning and cryptographic methods, AI models are defended against several attack variants discussed in the scientific literature. These include attacks that aim to steal models (model extraction), to cause AI systems to make incorrect decisions (evasion), and to gain insight into the processed data (membership or attribute inference). The research work in the project aims to achieve the greatest possible protection with the lowest possible impact on the performance of the protected AI models.
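
As a concrete illustration of this trade-off, the sketch below (again in Python, and again only an assumption of ours, not QuantPi's actual mechanism) degrades the confidence scores an API releases by rounding and optionally noising them, while always preserving the predicted label. Honest users keep a useful output; extraction attacks like the one above receive far less information about the decision boundary.

import numpy as np

def defend_scores(probs, decimals=1, noise=0.0, rng=None):
    # Release degraded confidence scores: round, and optionally noise,
    # the per-class probabilities while preserving the top-1 label.
    # `decimals` and `noise` are the utility-privacy knobs: coarser or
    # noisier scores leak less about the decision boundary but are
    # less informative for honest users of the API.
    rng = rng or np.random.default_rng()
    top1 = probs.argmax(axis=1)
    out = probs.copy()
    if noise > 0:
        out = out + rng.normal(scale=noise, size=out.shape)
    out = np.round(np.clip(out, 0.0, 1.0), decimals)
    out = out / np.maximum(out.sum(axis=1, keepdims=True), 1e-12)
    # Utility floor: if degradation flipped the predicted class,
    # fall back to releasing only the hard label as a one-hot vector.
    flipped = out.argmax(axis=1) != top1
    out[flipped] = np.eye(probs.shape[1])[top1[flipped]]
    return out

scores = np.array([[0.62, 0.31, 0.07]])
print(defend_scores(scores, decimals=1))   # -> [[0.6 0.3 0.1]]

Choosing how coarse or noisy the released scores should be, subject to a floor on their usefulness to legitimate users, is precisely the kind of utility-privacy optimization described at the top of this page.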

