This article was published on March 7, 2020

The case for an AI that puts nature and ethics first, not humans


On July 20, 1969, the first human landed on the moon. Fifty years later, we are in desperate need of another "moonshot" to tackle some of the pressing and overwhelmingly big issues of our time: from the climate crisis to the decline of democracy to the upheavals in our labor markets and societies caused by the rise of exponential digital technology, especially Artificial Intelligence (AI).

For the past decade, we have put our faith in technology as the ultimate problem-solver, and any kind of innovation was tied to technological advances. But as Silicon Valley has lost some of its halo, and arguably its legitimacy, we have come to realize that the most critical factor in enabling a humane future is us humans, and specifically how we relate to one another and to the planet we inhabit. The real moonshot of our time is ecological, social, and emotional innovation.

Make no mistake: AI is here, and it is going to change everything. But are these changes positive? And with AI having such a big impact on the way we work, live, play, and even love, are we thinking big enough? How can AI be our companion in our quest to enable not just our future, but our humanity?

“The business models of the next 10,000 startups are easy to forecast: Take X and add AI,” Wired founder Kevin Kelly proclaimed in 2016. That may have proven true, but at the same time it is disappointing to see that most of the breakthrough AI applications, from pattern analysis based on massive amounts of data and reinforcement learning in the style of DeepMind’s AlphaGo to generative adversarial networks performing creative tasks, have been designed and employed primarily to enhance efficiency (for the enterprise) and/or convenience (for the consumer).

While those are valuable benefits, the concern is growing that we are surrendering to a paradigm of “forced reductionism” (to borrow a term from former MIT Media Lab director Joi Ito), shoehorning ourselves into a purely mechanistic, utilitarian model of technology. As AI becomes ever more powerful and invasive, it may inevitably change our world to align with these very design principles. The consequence might be a world full of “monochrome societies,” as Infineon CEO Dr. Reinhard Ploss puts it.

There are other worries: non-benign actors; unconscious and conscious bias informing algorithms and fomenting a new digital divide; manipulation and even oppression; the threat of a surveillance society; humans turning into super-optimized machines; and not least the prospect of a super-intelligence dominating humans or eventually rendering us obsolete.

Finally, there is a more philosophical problem that cuts to the heart of the matter: today’s AI is based on a binary system, in the tradition of Aristotle, Descartes, and Leibniz. AI researcher Twain Liu argues that “Binary reduces everything to meaningless 0s and 1s, when life and intelligence operates XY in tandem. It makes it more convenient, efficient, and cost-effective for machines to read and process quantitative data, but it does this at the expense of the nuances, richness, context, dimensions, and dynamics in our languages, cultures, values, and experiences.”

We can take some cues from nature, which is anything but binary. Quantum research, for example, has shown that particles can have entangled superposition states in which they are both 0 and 1 at once — much like the Chinese concept of Yin-Yang, which emphasizes the symbiotic dynamics of male and female in the universe and in us. Liu writes: “Nature doesn’t pigeonhole itself into binaries — not even with pigeons. So why do we do it in computing?”

There is another reason we should study nature when it comes to the future of AI: nature is superseding digital programming, as the tech historian George Dyson argues. He points out that there is no longer any algorithmic model capable of grasping the beautiful chaos manifest in Facebook’s dynamic graph. Facebook is a machine no other machine can comprehend, let alone human intelligence. He writes: “The successful social network is no longer a model of the social graph, it is the social graph.” And further: “What began as a mapping of human meaning now defines human meaning, and has begun to control, rather than simply catalog or index, human thought.”

He concludes: “Nature relies on analog coding and analog computing for intelligence and control. No programming, no code. To those seeking true intelligence, autonomy, and control among machines, the domain of analog computing, not digital computing, is the place to look.”

This indicates that any more sophisticated vision of AI must go beyond three current conceptual limitations: it must shift from binary to intersectional, from efficiency to effectiveness, and from the exploitation of nature to embeddedness in it.

While concepts of ethical, explainable, or responsible AI are laudable, they are not enough, for they all remain stuck within the confines of our desire to regulate a problem-solving AI. We must stop treating AI as the great problem-solver and overcome our engineering mindset. Rather, we ought to think of AI more holistically, considering not just its purpose and outcomes but the way it operates.

Drawing from the humanities and the arts, and steeped in our tradition of discourse and critical thinking, AI must be ethical — not just in the sense of extrinsic compliance, but in the sense of true caring. It must honor the truth, which means it must sometimes be content with solutions that are not the most impactful, fastest, or most cost-efficient.

If we reduce AI to being the great optimizer, it will optimize us to death. To tie AI to human dignity, we must treat it with dignity ourselves. To ensure we are not ending up with a “monochrome society” of soulless machines, we must instill soul into AI.

This, however, implies we move beyond the kind of anthropocentrism lurking behind common-denominator terms such as “human-centered AI” — borrowed from the world of design and now promoted by institutions such as the eponymous Stanford Institute for Human-Centered Artificial Intelligence — or “humane technology,” a term popularized by the Center for Humane Technology. Even the focus on “human wellbeing” espoused by the meticulous ethical AI standards of the IEEE (the global professional organization of engineers) appears to fall short of addressing the most stubborn cognitive bias underlying all of our efforts around AI: we are, understandably, biased towards humans.

Yet in a time of pending ecological disaster caused by our careless, selfish, and even willfully ignorant exploitation of planetary resources, it is becoming more and more evident that the most existential threat not just to our own wellbeing but that of the world around us (of which we are a small and fleeting part, in the grand scheme of things) is us. “Human-centered” AI focused on promoting human wellbeing and flourishing can therefore no longer be an undisputed goal. An ecologically conscious and ethical AI must transcend the anthropocentrism shaped by rationalist and neoliberal thinking.

An artist’s representation of ancestors joining a group in conversation at the AI workshop in Hawai‘i. | Image by Sergio Garzon. Courtesy of the Initiative for Indigenous Futures.

One possible alternative approach can be found in non-Western cultures. Japan’s animist Shinto culture, for example, believes that both animate and inanimate things have a spirit: from the dead to every animal, every flower, every particle of dust, every machine. After a century of worshipping human ingenuity and technology in increasingly secularized modern societies, animism invites us to return to a polytheistic world view.

Like animism, Indigenous communities worldwide assume all things are interrelated. “Indigenous epistemologies do not take abstraction or generalization as a natural good or higher order of intellectual engagement,” the Indigenous scholars Jason Edward Lewis, Noelani Arista, Archer Pechawis, and Suzanne Kite write in an article for MIT. Indigenous cultures offer rituals and protocols to respect and relate to “our non-human kin,” for “man is neither height nor center of creation.” The authors propose that “we, as a species, figure out how to treat these new non-human kin respectfully and reciprocally — and not as mere tools, or worse, slaves to their creators.”

This includes AI, which they ask us to accept into our “circle of kinship.”

Such Indigenous AI honors multiplicity over singularity, a non-linear over a linear concept of time (and progress), interiority over externalized knowledge, relationships over transactions, and quality of life as the health of people and land — of all animate or inanimate things.

Only this new kind of AI can overcome the dualism that has led to the exploitation of resources and a cynical winner-takes-all mentality. It enables us humans to foster innovation across different generations, cultures, and socio-economic strata, not just within our homogenous tribes. It allows us to collectively tackle the really big problems of our time such as the climate crisis or the growing rift in our societies and the need to relate to the “other,” including our non-human kin.

There is a word for this kind of AI: beautiful.

Beautiful implies what is essentially human and at the same time greater than us: aesthetics, ethics, and the interconnected ecology we inhabit. It describes a sensorial relationship to the world, one of harmony and attunement. It also means bio- and neuro-diversity: the concept of our relationships, organizations, and our work as gardens, not machines — a broad spectrum of ethnic, cultural, cognitive, and emotional identities that are fluid and not necessarily consistent.

Beautiful is what concerns us, what touches us and yet transcends us. Beauty is the end, not just the means. Beauty is quality. Beauty is the quality.

This article was originally published by Tim Leberecht, an author, entrepreneur, and the co-founder and co-CEO of The Business Romantic Society, a firm that helps organizations and individuals create transformative visions, stories, and experiences. Leberecht is also the co-founder and curator of the House of Beautiful Business, a global think tank and community with an annual gathering in Lisbon that brings together leaders and changemakers with the mission to humanize business in an age of machines.
