We can’t just regulate — we must teach our AIs values

The creation of intelligent machines demands teamwork


For the first time, humans are creating machines that can learn on their own. The era of intelligent technology is here, and as a society, we are at a crossroads. Artificial intelligence (AI) is affecting all parts of our lives and challenging some of our most firmly rooted beliefs. Powerful nations, from Russia to China to the United States, are competing to build AI that is exponentially smarter than we are, and algorithmic decision systems are already echoing the biases of their creators.

There are numerous regulatory bodies, non-governmental organizations, corporate initiatives, and alliances attempting to establish norms, rules, and guidelines for preventing misuse and for integrating these advancing technologies safely into our lives. But it’s not enough.

As a species, our record of translating our values into protections for each other and for the world around us has been mixed. We have made remarkable progress creating social contracts that enshrine our ethics and values, but we have often been less successful at enforcing them.

The international human rights framework is our best modern example of an attempt to bind all of us as human beings, sharing one home, with the understanding that we are all equally and intrinsically valuable. It’s a profoundly beautiful idea. 

Unfortunately, even when we have managed to enact laws to protect people, we have sometimes slid backwards after fights for justice were hard won, as when Jim Crow laws re-imposed segregation in the decades after the Emancipation Proclamation. And the human rights documents we drafted at the dawn of the Atomic Age did not anticipate the technologies we are developing now, including machines that may one day do our thinking for us. Even so, I believe that as a human civilization we aspire to arc toward the light.

With respect to fast-emerging intelligent technologies, the genie is already out of the bottle. Decreeing standards to govern AI will not be sufficient to protect us from what’s coming. We also need to consider instilling universal values into the machines themselves as a way to ensure our coexistence with them.

Is this even possible? While extraordinary experiments are underway, including attempts to infuse characteristics such as curiosity and empathy into AI, the science of how advanced AI may manage ethical dilemmas, and how it may develop values, ours or its own, remains uncertain. And even if coding a conscience is technologically doable, whose conscience should it be?

Human value-based choices depend upon complex layers of moral and ethical codes. At a deeper level, the meaning of concepts like “values,” “ethics,” and “conscience” can be difficult to pinpoint or standardize, as our choices in how to act depend on intersecting facets of societal and cultural norms, emotions, beliefs, and experiences.

Nonetheless, agreeing upon a global set of principles, ethical signposts that we would want to see modeled and reflected in our digital creations, may be more achievable than trying to enforce mandates to monitor and restrain AI developers and purveyors within existing political systems.

We need to act now

As an international human rights lawyer, I have immense respect for the historic documents we have drafted to safeguard human dignity, agency, and liberty, and I am in accord with many of the current efforts to put human rights at the heart of AI design and development to promote more equitable use. However, the technological watershed confronting us today demands that we go further. And we cannot afford to delay the conversation.

Unlike other periods of technological revolution, we are going to have very little time to assimilate into the Intelligent Machine Age. Not nearly enough of us are aware of just how big a societal paradigm shift is upon us and how much it is going to affect our lives and the world we will leave for the next generation.

We remain deeply divided politically. For example, we have to date mustered only modest legislative will to react to such existential threats as the climate disaster, a threat scientists have been warning us about for years.

History teaches us how difficult it will be to regulate and control our technology for the common good. Yet even those on different sides of the aisle generally profess to cherish similar quintessential human values. A fundamental argument for teaching morals and ethics to machines right now is that it may be more productive for our leaders and legislators to agree on what values to imprint on our tech than to police its development. If we are able to come together on this, crafting better policies and guidelines will follow.

We are already entrusting machines to do much of our thinking; soon they will be transporting us and helping care for our children and elders, and we will become more and more dependent on them. How would imbuing them with values and a sense of equity change how they functioned? Would an empathetic machine neglect to care for the sick? Would an algorithm endowed with a moral code value money over people? Would a machine with compassion demonize certain ethnic or religious groups?

I’ve studied war crimes and genocide. I’ve borne witness to the depths of both human despair and resilience, evil and courage. Humans are flawed beings capable of extraordinary extremes. We have now what may be our most pivotal opportunity to partner with our intelligent machines to potentially create a future of peace and purpose — what is it going to take for us to seize this moment? Designing intelligent technologies with principles is our moral responsibility to future generations.

I agree with Dr. Paul Farmer, co-founder of Partners in Health, that “the idea that some lives matter less is the root of all that is wrong with the world.” Entrenched partisanship, tribalism, and Other-ism could be our downfall. Looking at someone else as the Other is at the core of conflict, war, and crimes against humanity.

If we fail to build our machines to reflect the best in us, they will continue to amplify our frailties. To progress, let’s work with the tech we know is coming to help us shed our biases, understand one another better, and become more fair and more free.

We need to acknowledge our human limitations, and the interdependent prism of humanity that underlies us all. Ironically, accepting our limits opens us up to go farther than we ever thought possible: to become more creative, more collaborative, and collectively better than we were before.

The answer to whether we are capable as a species of instilling values into machines is simply: we don’t know for sure yet. But it’s our moral imperative to give it a shot. Failing to try is an ethical choice in itself.

We didn’t know if we could build seaworthy ships to sail the oceans; we didn’t know if we could create an electric current to light up the world; we didn’t know if we could break the Enigma code; we didn’t know if we could get to the moon. We didn’t know until we tried. As Nelson Mandela knew well, “it always seems impossible until it’s done.”
