

3 types of AI that represent our fears for the future

AI icon Andrew Ng is often quoted as saying, “Artificial intelligence is the new electricity.” That’s a powerful analogy: electricity has powered our surge in science and globalization over the past 100-plus years. But the analogy falters in one overlooked regard.

Few of us know how the electric revolution began, and like all revolutions, it was dirty. Those who forget that history are condemned to repeat it.

The sins of electricity are buried in the history books. You might know a little about the feud between Tesla and Edison. Fewer people discuss the house fires caused by poor electrical conduit and insulation, and fewer still are told about Topsy the elephant, who was electrocuted for entertainment with 6,600 volts before a crowd.

To us, those are the sins of a revolution past, but the sins of our “new electricity” have yet to occur. Elon Musk has been quoted telling governors that AI poses an “existential risk” to humanity. It’s our duty to be wiser, stronger, and more thorough, because the global risks of AI put the stakes at an all-time high. Know what terrors we should keep our eyes open for, and know which are already here.

Risk 1: Militarized AI

Plan it, build it, blow it up — stories of AI in the military have fueled classic movies like WarGames, and even given rise to some of the biggest cautionary tales of our time. Neil Fraser wrote about an alleged 1980s attempt to use neural networks to identify enemy tanks, in which the training photos of tanks and of trees were taken on two different days.

The final result? Thanks to that bias in the data, the network learned the weather rather than the tanks: it would attack trees on overcast days. The story has been retold in many outlets as a cautionary tale, yet several decades later we find ourselves surrounded by highly funded killing machines and a foot on the AI accelerator.
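If you’re curious how easily that kind of bias slips in, here’s a toy sketch in Python (using numpy and scikit-learn). The data is entirely synthetic and invented for this post; it isn’t Fraser’s original experiment, only the same failure mode in miniature. Each “photo” is reduced to noisy brightness values, so the only thing the model can possibly learn is the weather.

```python
# A toy, fully synthetic reconstruction of the tank anecdote.
# Nothing about tanks or trees is in the data at all -- only weather.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def photos(n, sunny):
    # Each "photo" is 100 noisy pixels: sunny days are bright (~0.8),
    # overcast days dim (~0.3). Brightness is the only signal present.
    return rng.normal(0.8 if sunny else 0.3, 0.05, size=(n, 100))

# Training data mirrors the story: every tank was photographed on an
# overcast day, every patch of trees on a sunny day.
X_train = np.vstack([photos(500, sunny=False), photos(500, sunny=True)])
y_train = np.array([1] * 500 + [0] * 500)  # 1 = tank, 0 = trees

model = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))  # ~1.0, looks perfect

# Now show the model trees on an overcast day: it calls nearly all of
# them tanks, because all it ever learned was the weather.
X_trees_overcast = photos(500, sunny=False)
print("fraction of overcast trees flagged as tanks:",
      model.predict(X_trees_overcast).mean())  # ~1.0
```

Training accuracy looks perfect, which is exactly what makes this failure so seductive: nothing appears wrong until the weather changes.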

“Killer bots” aren’t a cautionary tale or a Hollywood feature; they’re world news. China is recruiting its brightest students for its AI weapons development program. The US, China, and many other nations are now racing to develop deadly AI applications. It’s hard to think of anything more dangerous than global nuclear war, yet the top governments of the world are recruiting for, incentivizing, and developing exactly such applications. The US is enlisting services from top companies like Microsoft, which is causing extreme unrest inside those companies.

Risk 2: Cyber attack AI

Less frightening, but something you may not have considered: as our world depends more on technology, the military and civil applications of AI can spill into cyber attacks as well. Many computer viruses are programmed by smart people who teach the software how to hide on most systems. Part of how we detect new worms and viruses is by watching for distinguishable patterns in how they act or attack.

For instance, some trojan horses go dormant during common “work hours” to avoid detection, then activate later when they are unlikely to be observed. What if, rather than being programmed, a cyber attack could learn and adapt?

Adaptable AI will be key in cyber defense against the coming influx of weaponized cyber attacks. Darktrace has uncovered several styles of attack and argues that hardcoded thresholds for detecting attacks are a thing of the past. We’ll need intelligent cybersecurity to stay ahead of black hats in the world of AI, or we will see a new influx of advanced computer infections.
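As a rough illustration of that shift, here’s a minimal sketch contrasting a hardcoded threshold with a self-adjusting baseline. The AdaptiveDetector class and the traffic numbers are invented for this example; this isn’t Darktrace’s actual method, just the general idea of flagging deviation from learned normal behavior.

```python
# Fixed rule vs. learned baseline for flagging unusual network traffic.
# All numbers here are hypothetical, chosen purely for illustration.
from collections import deque
from statistics import mean, stdev

HARD_LIMIT = 1000  # fixed rule: alert if requests/minute exceed this

class AdaptiveDetector:
    """Flags traffic that deviates sharply from this host's own recent baseline."""

    def __init__(self, window=60, sigmas=4.0):
        self.history = deque(maxlen=window)  # last `window` observations
        self.sigmas = sigmas

    def observe(self, requests_per_minute):
        anomalous = False
        if len(self.history) >= 10:  # wait for a minimal baseline
            mu = mean(self.history)
            sd = stdev(self.history) or 1.0  # guard against zero variance
            # Alert on a large deviation from what is normal *here*,
            # even if the absolute number never crosses a fixed limit.
            anomalous = abs(requests_per_minute - mu) > self.sigmas * sd
        self.history.append(requests_per_minute)
        return anomalous

# A quiet internal host idles at ~40 requests/minute. An attacker moving
# "low and slow" at 400 req/min never trips the hardcoded limit, but it
# is ten times this host's normal baseline.
detector = AdaptiveDetector()
traffic = [40 + i % 5 for i in range(60)] + [400]
for t in traffic:
    fixed_alert = t > HARD_LIMIT
    adaptive_alert = detector.observe(t)
    if fixed_alert or adaptive_alert:
        print(f"{t} req/min -> fixed: {fixed_alert}, adaptive: {adaptive_alert}")
```

The 400 req/min spike, like the dormant trojan above, sails comfortably under the fixed limit; only the detector that learned this host’s normal rhythm catches it.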

Risk 3: Manipulative AI

In the next 10 years, you will be able to call for help, in chat or on the phone, have a conversation, and NEVER know whether you spoke with a human or a bot. That may sound crazy, but machine learning can already generate fully synthetic AI news anchors with near-believable voice and visuals. The explosion of generative AI has only just begun, so it’s fair to believe that within 10 years, or fewer, we will have AI managing human interactions.

Matt Chessen writes about the emergence of such technology, which he terms MADCOMs (machine-driven communication tools). Imagine an influential political pundit, then multiply that influence by 100. Using your profile, your online fingerprint, and advanced psychology, a MADCOM could speak directly to your personal interests in a form of propaganda that’s never been seen.

“Computational propaganda” is already a growing term for social media manipulation via big data, but as the line between people and machines online blurs, an opinion that genuinely seems accepted and backed by many will become indistinguishable from purchased MADCOM hype. “Pliable reality will become the norm,” writes Chessen.

The US Congress has already reviewed the Countering Foreign Propaganda and Disinformation Act, but as the AI revolution evolves, we may see a stronger push to secure the integrity of information, which until recently was produced solely by humans.

So… what should we take away from this?

The unknown has always been a generator of fear for humanity. Beyond the risks listed above, there’s the simplest risk of all: that we don’t even see it coming. Throngs of developers are working in parallel, like the cells of a multicellular organism, organizing their uploads to the cloud, a single host that won’t need us once we’re done.

AI is the first initiative in which, should we succeed, we will have created something smarter than ourselves. Initiatives like OpenAI aren’t trying to create algorithms that merely identify obscenity, mood, or plans of action; they’re trying to solve the question of general intelligence. Most people’s inclination is to hit the brakes and try to mitigate the risks, but that ship has sailed.

The smartest thing any of us can do is educate ourselves. The old adage “keep your enemies closer” rings true: if only a few large companies are heading up AI research, then they alone, wittingly or not, will control the fate of AI for us all. Studying AI is the best risk mitigation we have as we speak up and steer this revolution of “new electricity.”
