The unforeseen trouble AI is now causing

AI will change the world as we know it, but adversarial data poses a number of problems.

AI has come a long way in recent years, but as many who work with this technology can attest, it is still prone to surprising errors that a human observer would never make. While some of these errors are simply part of the technology's learning curve, it is becoming apparent that a far more serious problem poses an increasing risk: adversarial data.

For the uninitiated, adversarial data describes a situation in which human users intentionally supply an algorithm with corrupted information. The corrupted data throws off the machine learning process, tricking the algorithm into reaching false conclusions or making incorrect predictions.
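To make that concrete, here is a minimal sketch of what such a manipulation can look like in code. It uses the fast gradient sign method with PyTorch; the model, input image, and epsilon value are all placeholders chosen for illustration, not details from any real incident.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=None)  # placeholder classifier for illustration
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in "photo"
label = torch.tensor([0])                                # its assumed true class

# Measure how the model's loss changes with respect to each input pixel.
loss = F.cross_entropy(model(image), label)
loss.backward()

# Nudge every pixel a tiny amount in the direction that increases the loss.
# epsilon is far smaller than anything a human viewer would notice.
epsilon = 0.01
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction: ", model(image).argmax(dim=1).item())
print("perturbed prediction:", model(adversarial).argmax(dim=1).item())
```

With a trained classifier, a perturbation this small is often enough to change the predicted label, even though the two images look identical to a person.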

As a biomedical engineer, I view adversarial data as a significant cause for concern. UC Berkeley professor Dawn Song notably tricked a self-driving car into reading a stop sign as a sign for a 45-mile-per-hour speed limit.

A malicious attack of this nature could easily result in a fatal accident. Similarly, compromised algorithms could lead to faulty biomedical research, endangering lives or delaying life-saving innovations.

Adversarial data has only recently begun to be recognized for the threat it is, and it can no longer be overlooked.

How does adversarial data occur?

Interestingly, adversarial effects can occur even without malicious intent. This is largely because algorithms can “see” things in the data that we humans are unable to discern. Because those hidden patterns are real, a recent case study from MIT describes adversarial examples as “features” rather than bugs.

In the study, researchers separated “robust” and “non-robust” characteristics during AI learning. Robust features are the ones humans typically perceive, while non-robust features are detected only by AI. In one experiment, an algorithm asked to recognize pictures of cats turned out to be relying on real patterns present in the images that nonetheless led it to incorrect conclusions.

The misidentification occurred because the AI was keying on a set of pixels imperceptible to humans, which led it to label photos incorrectly. In effect, the system was inadvertently trained to rely on misleading patterns in its identification algorithm.

These non-robust characteristics served as a type of interfering “noise” that led to flawed results from the algorithm. As a result, for hackers to interfere with AI, they often simply need to introduce a few non-robust characteristics — things that aren’t easily identified by human eyes, but that can dramatically change AI output.
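A toy simulation (my own construction, not the MIT experiment) illustrates the point: a classifier trained on data that contains a faint, label-correlated pattern will lean on that invisible signal, and reversing only the pattern is enough to break its predictions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, n_features = 2000, 100
labels = rng.integers(0, 2, n_samples)
signs = 2 * labels - 1  # -1 or +1 depending on the class

# "Robust" signal: a modest, human-noticeable shift in a single feature.
X = rng.normal(size=(n_samples, n_features))
X[:, 0] += 0.5 * signs

# "Non-robust" signal: a tiny, perfectly label-correlated nudge spread
# across all features, negligible per feature but very useful to the model.
pattern = 0.1 * rng.normal(size=n_features)
X += np.outer(signs, pattern)

clf = LogisticRegression(max_iter=2000).fit(X, labels)

# Reversing only the imperceptible pattern leaves the "robust" feature
# untouched, yet the model's accuracy drops sharply.
X_flipped = X - 2 * np.outer(signs, pattern)
print("accuracy on original data:        ", clf.score(X, labels))
print("accuracy with the pattern flipped:", clf.score(X_flipped, labels))
```

Nothing a human would consider meaningful changes between the two versions of the data, yet the model's performance falls apart once the hidden pattern stops cooperating.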

Potential consequences of adversarial data and Dark AI

As Moazzam Khan noted at Security Intelligence, there are two main types of attacks that rely on adversarial data: poisoning attacks, in which “the attacker provides input samples that shift the decision boundary in his or her favor,” and evasion attacks, in which “an attacker causes the model to misclassify a sample.”
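As a rough illustration of the first kind of attack, the sketch below uses an invented toy dataset (not anything from Khan's write-up): flipping the labels of a handful of contested training points drags the learned decision boundary toward the attacker's preferred side and costs the model accuracy on clean test data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.0, 1.0, (500, 2)), rng.normal(1.0, 1.0, (500, 2))])
y = np.array([0] * 500 + [1] * 500)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

clean_model = LogisticRegression().fit(X_train, y_train)

# Poisoning: the attacker flips labels on contested points near the true
# boundary, pulling the learned boundary toward the class they favor.
y_poisoned = y_train.copy()
near_boundary = np.abs(X_train.sum(axis=1)) < 1.0
y_poisoned[near_boundary] = 1

poisoned_model = LogisticRegression().fit(X_train, y_poisoned)

print("clean model accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned model accuracy:", poisoned_model.score(X_test, y_test))
```

An evasion attack, by contrast, leaves training alone and perturbs inputs at prediction time, much like the stop-sign example above.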

In a biomedical setting like the one I work in, for example, attacks could induce an algorithm to incorrectly label harmful or contaminated samples as clean and benign. This can lead to misguided research results or incorrect medical diagnoses.

Learning algorithms can also be used to fuel malicious AI programs that are specifically designed to aid hackers. As the Malicious AI Report notes, hackers can use AI to facilitate their attacks on various organizations, allowing for a wider variety of attacks to be implemented.

Machine learning can be especially efficient at bypassing unsecured IoT devices, giving hackers an easier avenue for stealing confidential data, manipulating company databases, and more. Essentially, a “dark AI” tool could be used to infect or manipulate other AI programs with adversarial data. SMBs (small to medium-sized businesses) are often at higher risk of these attacks because their cybersecurity measures tend to be less advanced.

Protective measures

Despite these issues, adversarial data can also be used for good. Indeed, many developers have begun using adversarial data themselves to uncover system vulnerabilities, allowing them to implement security upgrades before hackers can exploit those weaknesses. Other developers are using machine learning to create AI systems that are more adept at identifying and eliminating potential digital threats.

As Joe Dysart explains in an article for Genetic Engineering & Biotechnology News, “Many of these AI tools are able to look for suspicious activity on a computer network, analyze that activity often in milliseconds and neutralize its cause — usually originating from a rogue file or program — before it can do any damage.”

He continues, “This approach differs from traditional IT security, which has been focused more on identifying specific files and programs known to bear threats — rather than studying how those files and programs behave.”

Of course, improvements in machine learning algorithms themselves can also help reduce some of the risks presented by adversarial data. What is most important, however, is that these systems aren’t left completely on their own. Manual input and human supervision remain essential for identifying discrepancies between robust and non-robust characteristics to ensure that a corrupted reading doesn’t lead to flawed outcomes. Additional training that utilizes real correlations can further reduce AI’s vulnerability.
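One common form of that additional training is adversarial training, in which the model is shown perturbed copies of its own inputs during learning so it stops leaning on fragile, non-robust signals. The step below is a hypothetical PyTorch sketch; the model, optimizer, batch, and epsilon are placeholders, and inputs are assumed to be scaled to the range 0 to 1.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One training step that mixes clean and FGSM-perturbed inputs."""
    # Craft small worst-case perturbations of the current batch.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

    # Update the model on clean and perturbed samples together, so it
    # learns to discount the imperceptible, non-robust directions.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Techniques like this do not eliminate the risk, but they raise the cost of crafting adversarial inputs, which is why human oversight remains part of the picture.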

It is clear that adversarial data will continue to pose a challenge in the immediate future. But in an era when AI is being used to help us better understand the human brain and solve a variety of world problems, the importance of addressing this data-driven threat cannot be overstated. Dealing with adversarial data and taking steps to counter dark AI should become one of the tech world’s top priorities.