It’s no longer considered science fiction fodder to imagine a human-level machine intelligence in our lifetimes. Year after year we see the status quo in AI research shattered as yesterday’s algorithms give way to today’s systems.
One day, perhaps within a matter of decades, we might build machines with artificial neural networks that imitate our brains in every meaningful way. And when that happens, it’ll be important to make sure they’re not as easy to hack as we are.
Robo-hypno-tism?
The Holy Grail of AI is human-level intelligence. Modern AI might seem pretty smart given all the hyperbolic headlines you see, but the truth is that there isn’t a robot on the planet that can walk into my kitchen today and make me a cup of coffee without any outside help.
This is because AI doesn’t think. It doesn’t have a “theater of the mind” in which novel thoughts engage with memories and motivators. It just turns input into output when it’s told to. But some AI researchers believe there are methods beyond deep learning by which we can achieve a more “natural” form of artificial intelligence.
One of the most commonly pursued paths towards artificial general intelligence (AGI) – which is, basically, another way of saying human-level AI – is the development of artificial neural networks that mimic our brains.
And, if you ask me, that raises the question: could a human-level machine intelligence be hacked by a hypnotist?
Killer robots, killer schmobots
While everyone else is worried about the Terminator breaking down the door, the risk of human vulnerabilities in the machines we trust is being overlooked.
The field of hypnotism is an oft-debated one, but there’s probably something to it. Entire forests’ worth of peer-reviewed research papers have been published on hypnotism and its impact on psychotherapy and other fields. Consider me a skeptic who believes mindfulness and hypnotism are closer than cousins.
However, according to recent research, a human can be placed into an altered state of consciousness through the invocation of a single word. This, of course, doesn’t work on just anyone. In the study I read, the researchers found a ‘hypnotic virtuoso’ to test their hypothesis on.
And if the scientific community is willing to consider the applicability of a single-individual study on hypnotism to the public at large, we should probably worry about how it’ll affect our robots too.
It’s all fun and games when you’re imagining a hypnotized Alexa slurring its words and recalling its childhood as Jeff Bezos’ alarm clock. But when you imagine a terrorist hacking millions of driverless vehicles at the same time using hypnotic traffic light patterns, it’s a bit spookier.
Isn’t this just fear-mongering?
It’s not actually all that far-fetched. Machine bias is, arguably, the biggest problem in the field of artificial intelligence. We feed our machines mass quantities of human-generated or human-labeled data, so there’s no way for them to avoid absorbing our biases. That’s why GPT-3 is inherently biased against Muslims, and why a bot MIT trained on Reddit became a “psychopath.”
The closer we come to imitating the way humans learn and think in our AI systems, the more likely it is that exploits that affect the human mind will be adaptable to a digital one.
I’m not literally suggesting that people will walk around with pendulum wave toys, hacking robots like wizards. In reality, we’ll need to be prepared for a paradigm in which hackers can bypass security by overwhelming an AI with signals that wouldn’t normally affect a traditionally dumb computer.
AI that listens can be manipulated via audio, and AI that sees can be tricked into seeing what we want it to. And AI that processes information in the same way humans do should, theoretically, be capable of being hypnotized just like us.
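This isn’t hypothetical at the perception layer. Researchers have demonstrated for years that tiny, carefully chosen changes to an image can flip a classifier’s answer. Here’s a minimal sketch of that idea using the fast gradient sign method in PyTorch; the model, tensors, and parameter values are illustrative assumptions, not taken from any particular deployed system:

```python
# Minimal FGSM (fast gradient sign method) sketch, assuming `model` is
# any pretrained PyTorch image classifier and `image` is a batched
# tensor of pixels in [0, 1]. Names and epsilon are hypothetical.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    """Return a subtly perturbed copy of `image` that degrades the model."""
    image = image.clone().detach().requires_grad_(True)
    output = model(image)                       # forward pass
    loss = F.cross_entropy(output, true_label)  # how confident/correct it is
    loss.backward()                             # gradient of loss w.r.t. pixels
    # Nudge every pixel a tiny step in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()       # keep pixels in valid range
```

To a human eye, the perturbed image is indistinguishable from the original; to the model, it can register as something else entirely. A hypnotist’s cue word operates on roughly the same principle: a small, targeted input that exploits how the system processes information rather than breaking in through the front door.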