Marcel den Bosch, Lead Data Scientist
The importance and popularity of artificial intelligence (AI) have risen greatly in recent years. However, successfully creating and applying AI models requires significant investment to get right and to harvest the business benefits. While most organizations recognize the required investment, protecting it – like any other intellectual property – is all too often overlooked as an objective.
In this blog post, I describe how deep neural networks can be reverse engineered to capture the underlying model definitions, and how large (proprietary) training datasets can potentially be synthetically recreated – otherwise known as a model extraction attack.
To protect AI models, we first need to understand where the costs of developing them come from. The successful development of AI models requires the following costly ingredients:

- Expertise: skilled data scientists and engineers to design, train and validate the models
- Algorithms: the model architectures and techniques that fit the business problem
- Data: large, high-quality (often proprietary) training datasets
- Computing power: the infrastructure needed to train and serve the models
Large companies that depend heavily on an AI-centered business strategy can spend millions of dollars every year on the above ingredients.
Image recognition and text analysis models, for example, represent a significant multi-million investment. Several big cloud providers offer such models, typically with a very cheap pay-per-use pricing model, making them available through REST APIs or some other form of microservice for easy integration into applications.
While not all organizations want to expose their models to the public, we see that most pursue a microservices-oriented approach to exposing AI and analytics models to their ecosystem of business applications. At first glance, this seems like a very safe and well-defined approach to integration, since it shields the actual model definitions – the intellectual property that resulted from investing in expertise, algorithms, data and computing power – from curious eyes and protects them from being copied, stolen or used elsewhere.
As reverse engineering is a long-standing area of personal interest to me, I have been researching different approaches to reverse engineering deep learning models. The easiest approach is to access the model definition itself (i.e., the files), then analyze the deep-layered neural network and approximate the behavior of its layers, neurons and weights.
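As a minimal sketch of what this file-level analysis might look like, the snippet below recovers a network's architecture purely from the shapes of captured weight matrices. The layer names and sizes are hypothetical, not taken from any real model format:

```python
import numpy as np

# Hypothetical weight matrices as they might appear in a captured model
# file (names and shapes are illustrative, not from any real product).
weights = {
    "layer0": np.zeros((784, 128)),   # input -> hidden layer 1
    "layer1": np.zeros((128, 64)),    # hidden layer 1 -> hidden layer 2
    "layer2": np.zeros((64, 10)),     # hidden layer 2 -> output
}

def infer_architecture(weights):
    """Recover the layer sizes from the shapes of the weight matrices."""
    names = sorted(weights)
    layers = [weights[names[0]].shape[0]]      # input dimension
    for name in names:
        layers.append(weights[name].shape[1])  # each layer's width
    return layers

print(infer_architecture(weights))  # [784, 128, 64, 10]
```

With the architecture recovered, the actual weight values can then be inspected or transplanted into a reimplementation of the network.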
Another approach – one that most organizations currently do not consider a risk – uses the model’s interface/API, which describes the input parameters and predicted output, to remotely reverse engineer the model. By carefully crafting a model extraction attack that follows an iterative process of preparing very specific input requests to the AI model and learning from the outcomes, it is theoretically possible to approximate the AI model’s behavior. Over time, this approach could ‘relearn’ the model and reconstruct the deep neural network and its trained weights. Quality and granularity depend on the effort and duration of the attack, but interesting results have been achieved with limited effort and cost.
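The query-and-learn loop above can be sketched in a few lines. In this toy example – all names and numbers are made up – the “victim” hides a simple linear decision rule standing in for a remote model API, and the attacker fits a surrogate model to the query/answer pairs:

```python
import numpy as np

# The victim is a black box we may only query. Here a hidden linear
# rule stands in for a remote prediction API (weights are illustrative).
SECRET_W = np.array([2.0, -1.0])
SECRET_B = 0.3

def victim_api(x):
    """Black-box endpoint: returns only the predicted label (+1 / -1)."""
    return np.where(x @ SECRET_W + SECRET_B > 0, 1.0, -1.0)

# 1. Craft a grid of query inputs covering the feature space.
grid = np.linspace(-1, 1, 9)
queries = np.array([[a, b] for a in grid for b in grid])

# 2. Collect the victim's answers.
labels = victim_api(queries)

# 3. Fit a surrogate to the (query, answer) pairs - here a simple
#    least-squares linear fit with a bias column.
X = np.hstack([queries, np.ones((len(queries), 1))])
w_hat, *_ = np.linalg.lstsq(X, labels, rcond=None)

def surrogate(x):
    return np.where(x @ w_hat[:2] + w_hat[2] > 0, 1.0, -1.0)

# The surrogate now mimics the victim on most inputs.
agreement = np.mean(surrogate(queries) == labels)
print(f"agreement on queried points: {agreement:.2%}")
```

A real attack replaces the linear fit with a neural network trained on thousands of API responses, but the loop – craft queries, record answers, refit – is the same.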
Other researchers have found that an attacker can leverage training data leakage to synthetically recreate the (proprietary) training data using generative techniques. This might not even be considered stealing in a legal sense, since the attacker paid a small amount to obtain their prediction results. In many countries, reverse engineering is permitted by law if someone is in legal possession of the relevant artifacts.
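A crude intuition for this kind of reconstruction: if a model leaks confidence scores, an attacker can search input space for the inputs the model is most confident about, which tend to resemble its training data. The sketch below is a deliberately simplified, gradient-free stand-in for model inversion; the secret prototype and scoring function are invented for illustration:

```python
import numpy as np

# The victim's training data for one class clustered around a secret
# prototype the attacker never sees (values are illustrative).
SECRET_PROTOTYPE = np.array([0.7, -0.2])

def victim_confidence(x):
    """Black-box endpoint: confidence that input x belongs to the class."""
    return np.exp(-np.sum((x - SECRET_PROTOTYPE) ** 2))

# The attacker probes the input space and keeps the input the model is
# most confident about - a toy stand-in for gradient-based inversion.
grid = np.linspace(-1, 1, 41)
candidates = np.array([[a, b] for a in grid for b in grid])
scores = np.array([victim_confidence(c) for c in candidates])
reconstructed = candidates[np.argmax(scores)]

print("reconstructed class prototype:", reconstructed)  # ~ [0.7, -0.2]
```

Generative techniques scale this idea up: instead of a brute-force search, a generator network is trained to produce inputs that maximize the victim's confidence.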
While the attacks described above are currently feasible and mostly focused on stealing intellectual property, this new line of thinking opens a Pandora’s box of future threats.
With our society becoming more and more dependent on AI technology, combining present-day cybersecurity risks with the new capabilities – namely reverse engineering and possibly even the manipulation of deep neural networks – brings the fictional scenarios from the famous movie ‘Inception’ one step closer to reality.
By delving many layers deep and making small, deliberate changes, hackers could alter AI decisions. One day in the future, a small and careful manipulation of AI models for high-frequency stock trading or fraud detection could potentially pull off the greatest bank robbery in history!
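To make the risk concrete, here is a toy sketch of weight tampering. The architecture and numbers are entirely made up: a tiny two-layer network stands in for a production model that approves or rejects a transaction, and changing a single weight deep inside it flips the decision:

```python
import numpy as np

def predict(x, W1, w2, b2):
    """Tiny two-layer network: ReLU hidden layer, linear output."""
    hidden = np.maximum(W1 @ x, 0.0)
    return "approve" if w2 @ hidden + b2 > 0 else "reject"

x = np.array([1.0, 1.0])                  # some fixed input
W1 = np.array([[0.5, 0.5], [-0.5, 0.5]])  # first-layer weights
w2 = np.array([1.0, 0.3])                 # output weights
b2 = -0.9

print(predict(x, W1, w2, b2))             # approve

# An attacker quietly nudges a single weight deep inside the model...
W1_tampered = W1.copy()
W1_tampered[0, 0] = 0.3

# ...and the same input now produces the opposite decision.
print(predict(x, W1_tampered, w2, b2))    # reject
```

In a real network with millions of weights, such a change would be far harder to spot than modified application code, which is exactly what makes model integrity checks important.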
As I described at the start of this post, the importance and popularity of AI have risen greatly in recent years. With that in mind, it’s imperative that organizations act now to protect their AI minds. There are three things they need to do:

1. Put the right encryption in place for model definitions and training data
2. Enforce strict access control measures around models and their APIs
3. Keep tighter control of the model interface specification, limiting what an attacker can learn from queries
At Atos, we are following these new developments closely and are working on strategies to protect our customers’ AI assets. For example, we are putting in place the right encryption, access control measures and tighter control of the model interface specification to greatly reduce the impact of this form of attack. We are already helping our customers take their own first steps, working with them to realize the value of their AI assets and then design full-fledged AI (security) strategies and appropriate security measures to tackle these risks.
There’s no time to wait, so let’s start talking about how to guard your AI mind!
Posted on: 29 January 2019