Tristan Greene, Editor, Neural by TNW
Tristan is a futurist covering human-centric artificial intelligence advances, quantum computing, STEM, physics, and space stuff. Pronouns: He/him
IBM today announced the launch of its Adversarial Robustness Toolbox for AI developers. The open-source kit contains everything a machine learning programmer needs to attack their own deep neural networks (DNNs) to ensure they’re able to withstand real-world conditions.
The toolbox is a code library that includes attack agents, defense utilities, and benchmarking tools, allowing developers to build resilience to adversarial attacks directly into their models. The company says it’s the first of its kind.
According to IBM Security Systems CTO Sridhar Muppidi:
One of the biggest challenges with some of the existing models to defend against adversarial AI is they are very platform specific. The IBM team designed their Adversarial Robustness Toolbox to be platform agnostic. Whether you’re coding/developing in Keras or TensorFlow, you can apply the same library to build defenses in.
It’s like a mixed martial arts trainer for AI that assesses a DNN’s resilience, teaches it customized defense techniques, and provides a sort of internal anti-virus layer. That last one might not be standard practice in boxing gyms, but it’s absolutely crucial to DNNs.
Adversarial attacks are perpetrated against DNNs by bad actors hoping to disrupt, re-purpose, or deceive an AI. They’re carried out in a number of ways ranging from physical obfuscation to counter-AI in the form of machine learning attacks against a DNN.
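To make the idea concrete, here is a minimal sketch of the best-known attack family the toolbox defends against: gradient-based evasion in the style of the fast gradient sign method (FGSM). The model, weights, and values below are illustrative toy examples, not IBM’s actual API; for a linear classifier the gradient of the score with respect to the input is simply the weight vector, so a tiny signed step is enough to flip a prediction.

```python
import numpy as np

# Toy linear "model": predicts class 1 when w.x + b > 0.
# All names and numbers here are illustrative, not from IBM's toolbox.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(np.dot(w, x) + b > 0)

# A clean input the model confidently classifies as class 1.
x_clean = np.array([0.6, 0.1, 0.2])

# FGSM-style perturbation: step each input feature against the sign of
# the score's gradient w.r.t. the input (which, for a linear model, is w).
eps = 0.5
x_adv = x_clean - eps * np.sign(w)

print(predict(x_clean), predict(x_adv))  # the small perturbation flips the class
```

Real attacks on deep networks work the same way, except the gradient is obtained by backpropagation through the network, and the perturbation is kept small enough to be imperceptible to humans.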
If the idea that AI has to defend itself against an opponent capable of learning isn’t scary enough, the potential for danger to humans is absolutely terrifying.
In China, facial recognition software is an integral part of the country’s law enforcement tech, including AI-equipped CCTV cameras capable of picking out a single face in a crowd of more than 60,000 people. The western world is likely to follow suit as AI becomes more capable.
TNW reported earlier this year on a speech system vulnerability, explaining that fooling speech-to-text systems means bad news for voice assistants. Hackers don’t necessarily have to rely on you selecting a song from your favorite playlist; they could simply sit across from you on public transportation, or in an office, and pretend to listen to a track themselves, or just play silence with the offending signals embedded.
These threats also include spoofing GPS to misdirect vessels, hacking shipboard systems, and disguising vessel IDs in order to fool AI-powered satellites. As more global AI systems come online, state-sponsored attacks against military vessels are becoming a real possibility. Last year US Navy leaders found themselves answering questions about a series of mysterious collisions at sea, and the idea of adversarial systems attacks came up more than once.
Other areas where AI systems are particularly vulnerable include driverless cars and military drones, both of which could be weaponized by hackers if their security were compromised. Realistically, all DNNs need to be resilient to attack or they’re about as useful as a computer without antivirus protection.
For more information on IBM’s new Adversarial Robustness Toolbox you can check out the company’s blog post.