A new class of zero days and autonomous weapons systems
Expert Speak Digital Frontiers
Published on Nov 12, 2018

What if there was a new class of zero days on the horizon? What if, even once a zero day was known, it couldn’t necessarily be patched, or even understood?

Read all the curated essays from Raisina Scrolls here.

Almost every week there is a headline about a cyber attack resulting in data theft, disruption to services, or worse, the destruction of machines. Zero days play a big role in such attacks: a zero day is a previously unknown vulnerability in software or hardware that an attacker can exploit to cause harm. Since the victim is not aware of the vulnerability and has had zero days to respond, it is called a zero day.

There is no solution to the problem, as there is no such thing as 100% security, and the very nature of zero days means they are unknown until they are known. To mitigate the risk, companies employ good practices, harden their networks, raise user awareness, continuously update systems, conduct penetration tests, and stay up to speed on the latest threats and trends. Yet attacks occur, and will continue to occur, because there is no perfect system, and zero days will continue to be found.

One of the many reasons there will continue to be zero days in code is that the vulnerability behind a zero day is not always an “error” in the code; it may simply be a part of the code that dedicated, malicious attackers can turn to their advantage. Had the coders anticipated that the portion of code in question would be vulnerable to certain types of attack, they would have fixed it to mitigate those attacks. Because they did not anticipate their code being vulnerable in the way it was, the vulnerability became a zero day.

However, what if there was a new class of zero days on the horizon? What if this time, even once a zero day was known, it couldn’t necessarily be patched, or even understood? Adversarial attacks targeting autonomous systems can be considered a new class of zero days.

According to OpenAI:

“Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they’re like optical illusions for machines.”

This can have serious consequences. For example, a team of researchers across four universities modified stop signs by writing the word LOVE above and the word HATE below the word STOP, in an effort to test whether self-driving cars could be hacked. Although the word STOP remained clearly visible and unobstructed, the vehicle’s vision system misidentified the sign as a speed-limit sign. Research like this helps bolster defences against adversarial attacks; however, only a limited number of adversarial examples can ever be tested. Nor do adversarial examples have to be crafted by humans: a machine can generate them too, for instance through a Generative Adversarial Network (GAN).
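To make the idea concrete, below is a minimal sketch in Python (using PyTorch) of the fast gradient sign method (FGSM), one widely used way of generating adversarial examples. It is purely illustrative and is not the technique used in the stop-sign study; the `model`, `image` and `label` objects are assumed to exist.

```python
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.01):
    """Return a copy of `image`, perturbed to push the model toward a mistake."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # how wrong the model currently is
    loss.backward()                              # gradient of the loss w.r.t. the pixels
    # Nudge every pixel a small step in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()        # keep pixel values in a valid range
```

The perturbation can be small enough to be invisible to a human observer, yet still flip the model’s prediction.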

Using the same zero day logic, adversarial training attempts to teach the artificial intelligence algorithm to recognise manipulated inputs and to react the way it was designed to even when hostile actors try to trick it. But just as with code, the creators and testers of the algorithm may not have thought of all the ways in which hostile actors can trick the AI.
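As a rough, hypothetical sketch of what adversarial training looks like in practice, the loop below augments each training batch with perturbed copies of itself (reusing the illustrative `fgsm_example` above), so the model learns to answer correctly on both; the `model`, `optimizer` and `train_loader` objects are assumed.

```python
import torch.nn.functional as F

def adversarial_training_epoch(model, optimizer, train_loader, epsilon=0.01):
    """One epoch of adversarial training: learn from clean and attacked inputs."""
    model.train()
    for images, labels in train_loader:
        # Generate perturbed copies of the batch with the illustrative FGSM above.
        adv_images = fgsm_example(model, images, labels, epsilon)
        optimizer.zero_grad()
        loss = (F.cross_entropy(model(images), labels)          # clean inputs
                + F.cross_entropy(model(adv_images), labels))   # attacked inputs
        loss.backward()
        optimizer.step()
```

The limitation the article points to is visible here: the model only becomes robust to the kinds of perturbations its defenders thought to generate.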

Ultimately, there are as many adversarial examples as the human brain (including the brains of attackers) can think of, or as a GAN can come up with. This means there will be many occasions on which the autonomous system, and the human team behind it, will have zero days to respond to a new and unexpected adversarial example.

What does that mean for autonomous weapons systems? As the development of increasingly autonomous weapons systems continues in the form of drones, swarms, autonomous underwater vehicles and the like, they will be subjected to deception techniques from peer, near-peer and asymmetric actors, which will result in new adversarial attacks. There are methods to harden an artificial intelligence system, such as adversarial training (which resembles a brute-force approach of training the system against many potential attacks) and defensive distillation (which trains a second model on the softened output probabilities of the first, smoothing the decision surface an attacker probes). However, just as in cyber defence, offence is easier than defence: defenders face a vast number of attacks every day and must successfully repel each one, while attackers need to succeed only once. The parallel holds here too. Both methods can harden a system, but a hardened system still cannot respond to every possibility, because it is not adaptive.
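For completeness, here is a simplified, assumed sketch of defensive distillation: a first (teacher) model is trained normally, and a second (student) model is then trained on the teacher’s temperature-softened output probabilities rather than on hard labels. The `teacher`, `student`, `optimizer` and `train_loader` objects are assumptions, and the temperature value is illustrative.

```python
import torch
import torch.nn.functional as F

T = 20.0  # distillation temperature; higher values soften the probabilities

def distillation_epoch(teacher, student, optimizer, train_loader):
    """Train the student on the teacher's temperature-softened probabilities."""
    teacher.eval()
    student.train()
    for images, _ in train_loader:
        with torch.no_grad():
            # Soft labels: the teacher's temperature-scaled output probabilities.
            soft_labels = F.softmax(teacher(images) / T, dim=1)
        optimizer.zero_grad()
        log_probs = F.log_softmax(student(images) / T, dim=1)
        # Cross-entropy between the soft labels and the student's predictions.
        loss = -(soft_labels * log_probs).sum(dim=1).mean()
        loss.backward()
        optimizer.step()
```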

While research continues into how to mitigate and manage the risks posed by adaptive attackers, a new class of zero days has become part of the threat landscape.

The views expressed above belong to the author(s).