This article was published on March 15, 2020

We should treat AI like our own children — so it won’t kill us

Are you ready for Skynet? How about the Holodeck-meets-Skynet universe of Westworld (returning March 15 to HBO)? What about synths destroying the colonies of Mars, as seen in Star Trek: Picard? With so much fiction bleeding apocalyptic images of artificial intelligence (AI) gone wrong, let’s take a look at what could actually happen as artificial intelligence rises.

While many researchers and computer experts aren’t worried, any new technology needs risk assessment. So what’s the risk of AI breaking bad and turning into an episode of Westworld? Opinion is mixed. But high-profile figures like Elon Musk and the late Stephen Hawking sounded the alarm years ago, and there is some reason for concern.

Westworld — a gripping story of artificial intelligence gone bad — returns for its third season on HBO March 15. Image credit: HBO/Westworld

Deaths have already occurred, and will continue to occur, from both robots and artificial intelligence, but these are accidental. Whether it’s self-driving cars, assembly-line robotic arms, or even older technologies like airplane and automobile malfunctions, deaths related to technological breakdowns have been with us for over a century.

I, for one, welcome our new robotic overlords

Many would agree that the benefits of most existing technologies outweigh the risks. Reduced human mortality due to improvements in medicine, safety, and other areas more than offsets any loss of life.

Society does a lot to reduce machine-related deaths, like seat-belt laws, but the benefit is so great that most people are willing to accept some loss of life as part of the cost. Still, any loss of life is a tragedy, so there will always be some concern as each field matures. Fear plays an even larger role.

But what happens when the deaths are no longer accidental? If we’re talking about intentional sabotage and harmful programming, that threat has always existed and will never go away. But what is the likelihood that artificial life could develop sentience? What is the likelihood that self-aware AI will go outside its original programming and intentionally harm people?

The short answer is that most scientists believe sentience is possible, but machines will need humans to design them that way. Will machine intelligence exceed our own and develop the capability to think for itself? Assuming it does, it still needs to take the next step and choose to harm humans.

Most fear relates to Terminator-style extinction events. I think these, like fears of advanced alien life on other planets wiping us out, are overblown. Some may disagree, but more intelligent creatures will be more evolved and will understand higher concepts like cooperation, trust, and synergy, so they will be less likely to kill us.

But even if large-scale extinction is off the table, there is the possibility that individual systems, whether networked or in isolation, could intentionally cause harm. This is conjecture, but I suspect much of it would stem from self-preservation, much like backing a human into a corner. And that is true of any living creature, intelligent or otherwise.

Whether we are based on carbon or on silicon makes no fundamental difference; we should each be treated with appropriate respect. 

― Arthur C. Clarke, 2010: Odyssey Two

Thinking of robots like your own children

What’s the solution? How can society limit the risk of rogue AI on smaller scales? The answer lies in shifting perspective. Why do people still have children? Children are capable of causing great harm, but we have them anyway. If we begin to think of AI as human once it achieves sentience, it’s easier to get a sense of the solution.

Stories of robots striking out against humans have been around since at least 1920, with the play R.U.R., written by Karel Čapek. Public domain image.

There will come a point when society must assess AI for sentience. If machines meet that threshold, courts will award them rights. We must expect this, and expect to observe, train, and teach them as we do our children. This will be done through programming, laws, and human interaction.

Once society understands this, most companies and developers will put safeguards in place to prevent AI from becoming sentient, so they can keep using it without those restrictions. But I suspect tests will be developed to check. Governments will likely regulate developers to help ensure people remain honest actors.

But like everything else, failures, both intentional and accidental, are bound to occur. Before long, artificial intelligence will likely be advanced enough to develop sentience. The question remains whether humans will be intelligent enough to avoid domination by our robotic creations.

This article was written by Roy Huff, a best-selling author, scientist, and teacher; an optimist, life-long learner, Hawaii resident, book lover, and fan of all things science fiction and fantasy. Find out more at royhuff.net. It was originally published on The Cosmic Companion, the mailing list and podcast of James Maynard, an astronomy journalist and fan of coffee, sci-fi, movies, and creativity who has been writing about space since he was 10, but is “still not Carl Sagan.” You can read the original piece here.
