
This article was published on January 17, 2022

Why giving AI ‘human ethics’ is probably a terrible idea

lol what could go wrong?


If you want artificial intelligence to have human ethics, you have to teach it to evolve ethics like we do. At least that’s what a pair of researchers from the International Institute of Information Technology in Bangalore, India, proposed in a pre-print paper published today.

Titled “AI and the Sense of Self,” the paper describes a methodology called “elastic identity” by which the researchers say AI might learn to gain a greater sense of agency while simultaneously understanding how to avoid “collateral damage.”

In short, the researchers are suggesting that we teach AI to be more ethically aligned with humans by allowing it to learn when it’s appropriate to optimize for itself and when it’s necessary to optimize for the good of a community.

Per the paper:

While we may be far from a comprehensive computational model of self, in this work, we focus on a specific characteristic of our sense of self that may hold the key for the innate sense of responsibility and ethics in humans. We call this the elastic sense of self, extending over a set of external objects called the identity set.

Our sense of self is not limited to the boundaries of our physical being, and often extends to include other objects and concepts from our environment. This forms the basis for social identity that builds a sense of belongingness and loyalty towards something other than, or beyond one’s physical being.


The researchers describe a sort of equilibrium between altruism and selfishness in which an agent would be able to understand ethical nuances.
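
The paper’s formalism isn’t reproduced in the article, but one way to picture the idea: an agent’s effective utility blends its own payoff with the payoffs of everything in its “identity set,” with a single weight acting as the elasticity. Here’s a minimal sketch of that intuition; the function name, the linear blend, and the averaging are all illustrative assumptions, not the authors’ actual model:

```python
# A hypothetical sketch of "elastic identity": an agent's effective
# utility blends its own payoff with the payoffs of agents in its
# identity set. The linear blend and all names are illustrative
# assumptions, not the paper's formalism.

def elastic_utility(own_payoff: float,
                    identity_set_payoffs: list[float],
                    elasticity: float) -> float:
    """elasticity = 0 -> purely selfish; elasticity = 1 -> the agent
    weighs the group's average payoff as heavily as its own."""
    if not identity_set_payoffs:
        return own_payoff
    group_avg = sum(identity_set_payoffs) / len(identity_set_payoffs)
    return (1 - elasticity) * own_payoff + elasticity * group_avg

# A fully selfish agent ignores collateral damage to others...
print(elastic_utility(10.0, [-5.0, -5.0], elasticity=0.0))  # 10.0
# ...while a more "elastic" one internalizes some of it.
print(elastic_utility(10.0, [-5.0, -5.0], elasticity=0.5))  # 2.5
```

Under a scheme like this, “avoiding collateral damage” falls out naturally: harm done to anyone inside the identity set drags down the agent’s own score.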

Unfortunately, there’s no calculus for ethics. Humans have been trying to sort out the right way for everyone to conduct themselves in a civilized society for millennia, and the lack of utopian nations in the modern world tells you how far we’ve gotten.

As to exactly what measure of “elasticity” an AI model should have, that may be more of a philosophical question than a scientific one.

According to the researchers:

At a systemic level, there are also open questions about the evolutionary stability of a system of agents with elastic identity. Can a system of empathetic agents be successfully “invaded” by a small group of non-empathetic agents who don’t identify with others? Or does there exist a strategy for deciding the optimal level of one’s empathy or extent of one’s identity set, that makes it evolutionarily stable?
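
The article doesn’t say how (or whether) the authors answer this, but the invasion question itself is classic evolutionary game theory. As a toy illustration only (the prisoner’s-dilemma payoffs, the empathetic-means-cooperate mapping, and the replicator update are assumptions, not the paper’s setup), here’s how a small group of non-empathetic defectors can take over a population of unconditional cooperators:

```python
# Toy replicator-dynamics sketch of the "invasion" question:
# empathetic agents cooperate, non-empathetic agents defect, and each
# strategy's population share grows with its average payoff. The game
# and the dynamics are illustrative assumptions, not the paper's model.

# Prisoner's-dilemma payoffs: (my move, their move) -> my payoff
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def step(x: float) -> float:
    """x = fraction of cooperators; one round of random matching
    followed by a discrete replicator update."""
    f_coop = x * PAYOFF[("C", "C")] + (1 - x) * PAYOFF[("C", "D")]
    f_defect = x * PAYOFF[("D", "C")] + (1 - x) * PAYOFF[("D", "D")]
    f_avg = x * f_coop + (1 - x) * f_defect
    return x * f_coop / f_avg  # shares shift toward the fitter strategy

x = 0.95  # start with a small "invading" group of defectors
for generation in range(30):
    x = step(x)
print(f"cooperator share after 30 generations: {x:.3f}")  # defectors win
```

Whether some choice of elasticity makes empathy evolutionarily stable against this kind of invasion is exactly the open question the researchers are posing.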

Do we really want AI capable of learning ethics the human way? Our socio-ethical point of view has been forged in the fires of countless wars and an unbroken tradition of committing horrific atrocities. We broke a lot of eggs on our way to making the omelet that is human society.

And it’s fair to say we’ve got a lot of work left yet. Teaching AI our ethics and then training it to evolve like we do could be a recipe for automating disaster.

It could also lead to a greater philosophical understanding of human ethics and the ability to simulate civilization with artificial agents. Maybe the machines will deal with uncertainty better than humans historically have.

Either way, the research is fascinating and well worth the read. You can check it out here on arXiv.

