Scientists say social interaction is ‘the dark matter of AI’

I am, therefore I think

A pair of researchers from the University of Montreal recently published a preprint on creating “more intelligent artificial agents” by imitating the human brain. We’ve heard that one before, but this time’s a little different.

The big idea here is all about giving artificial intelligence agents more agency.

According to the researchers:

Despite the progress made in social neuroscience and in developmental psychology, only in the last decade, serious efforts have started focusing on the neural mechanisms of social interaction, which were seen as the “dark matter” of social neuroscience.

Basically, there’s something beyond algorithms and architecture that makes our brains tick. According to the researchers, that “dark matter” is social interaction. They argue that AI must be capable of “subjective awareness” in order to develop the connections required for advanced cognition.

Per the paper:

The study of consciousness in artificial intelligence is not a mere pursuit of metaphysical mystery; from an engineering perspective, without understanding subjective awareness, it might not be possible to build artificial agents that intelligently control and deploy their limited processing resources.

Making an AI as smart as a human isn’t a simple matter of building bigger supercomputers capable of running faster algorithms.

Current AI systems are nowhere near the cognitive abilities of a human. To bridge that gap, the researchers say agents will need three things:

  • Biological plausibility
  • Temporal dynamics
  • Social embodiment

The “biological plausibility” aspect involves creating an AI architecture that imitates the human brain’s. This means creating a subconscious layer that’s distinct from, yet connected to, a dynamic consciousness layer.

Because our subconscious is intrinsically tied to controlling our bodies, the scientists appear to propose building AI with a similar brain-body linkage.
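
The paper doesn’t ship code, but a minimal sketch of that layered idea might look something like this. Everything here (the class names, the fast reflexive mapping, the slow goal-based modulation) is our hypothetical illustration, not the researchers’ architecture:

```python
import numpy as np

class SubconsciousLayer:
    """Fast, reflexive sensorimotor loop mapping raw observations to motor
    commands. A hypothetical stand-in for the paper's unconscious layer."""
    def __init__(self, obs_dim: int, act_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.weights = rng.normal(scale=0.1, size=(act_dim, obs_dim))

    def react(self, observation: np.ndarray) -> np.ndarray:
        # Reflexive response: a single linear mapping, no deliberation.
        return np.tanh(self.weights @ observation)

class ConsciousLayer:
    """Slow, deliberative layer that monitors the reflexive output and
    nudges it. Again hypothetical; stands in for the 'dynamic
    consciousness layer' the article describes."""
    def __init__(self, act_dim: int):
        self.goal = np.zeros(act_dim)

    def modulate(self, reflex_action: np.ndarray) -> np.ndarray:
        # Bias reflexive actions toward the current goal: the deliberative
        # layer acts *through* the body-controlling reflexive layer.
        return 0.8 * reflex_action + 0.2 * self.goal

class Agent:
    def __init__(self, obs_dim: int, act_dim: int):
        self.subconscious = SubconsciousLayer(obs_dim, act_dim)
        self.conscious = ConsciousLayer(act_dim)

    def act(self, observation: np.ndarray) -> np.ndarray:
        return self.conscious.modulate(self.subconscious.react(observation))

agent = Agent(obs_dim=4, act_dim=2)
print(agent.act(np.array([0.5, -0.2, 0.1, 0.9])))
```

The point of the split is the distinct-but-connected structure: the deliberative layer never touches the body directly, only the reflexive layer that does.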

According to the researchers:

Specifically, the proposal is that the brain constructs not only a model of the physical body but also a coherent, rich, and descriptive model of attention.

The body schema contains layers of valuable information that help control and predict stable and dynamic properties of the body; in a similar fashion, the attention schema helps control and predict attention.

One cannot understand how the brain controls the body without understanding the body schema, and in a similar way one cannot understand how the brain controls its limited resources without understanding the attention schema.
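
To make the schema idea a bit more concrete, here’s a toy sketch in Python. It’s our illustration, not the paper’s model: the “schema” is just a running estimate of where attention tends to land, and the agent uses that estimate to discount expected stimuli so its limited resources go to surprising ones.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class AttentionSchema:
    """Toy internal model that predicts where attention will land,
    by analogy with a body schema predicting limb position."""
    def __init__(self, n_items: int, lr: float = 0.3):
        self.predicted = np.full(n_items, 1.0 / n_items)
        self.lr = lr

    def predict(self) -> np.ndarray:
        return self.predicted

    def update(self, actual: np.ndarray) -> None:
        # Move the internal model toward what attention actually did.
        self.predicted += self.lr * (actual - self.predicted)

rng = np.random.default_rng(0)
schema = AttentionSchema(n_items=5)
for step in range(20):
    salience = rng.normal(size=5)  # bottom-up pull of each stimulus
    # Control: discount stimuli the schema already expects us to attend
    # to, freeing limited resources for unexpected ones.
    attention = softmax(salience - schema.predict())
    schema.update(attention)
print("predicted attention distribution:", np.round(schema.predict(), 3))
```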

As for “temporal dynamics,” the researchers suggest that artificial agents need to exist in the world in much the same way humans do. Our minds don’t just interpret information; they process it in relation to our environment as it unfolds over time.

As the researchers put it:

In nature, complex systems are composed of simple components that self-organize in time, producing ultimately emergent behaviors that depend on the dynamical interactions between the components.

This makes understanding how time affects both an agent and its environment a necessary component of the proposed models.
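
The paper doesn’t spell out an example here, but a classic illustration of this kind of self-organization is the Kuramoto model, in which simple oscillators that each nudge their phase toward the others’ end up synchronizing, a behavior none of them exhibits alone. A minimal simulation (our illustration, not from the paper):

```python
import numpy as np

# Kuramoto model: each oscillator's phase is pulled toward the others'.
# Above a critical coupling strength, global synchronization emerges
# purely from the dynamical interactions between the components.
rng = np.random.default_rng(1)
n, coupling, dt = 50, 1.5, 0.05
freqs = rng.normal(1.0, 0.1, n)        # each oscillator's natural frequency
phases = rng.uniform(0, 2 * np.pi, n)  # random initial phases

def coherence(theta):
    # |mean of e^{i*theta}|: 0 = incoherent, 1 = fully synchronized.
    return np.abs(np.exp(1j * theta).mean())

for _ in range(400):
    pull = np.sin(phases[None, :] - phases[:, None]).mean(axis=1)
    phases += dt * (freqs + coupling * pull)

print(f"coherence after interaction: {coherence(phases):.2f}")  # approaches 1.0
```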

And that brings us to “social embodiment,” which essentially means giving the agent a literal body. The researchers claim the AI would need to be capable of social interaction with humans on a level playing field.

According to the paper:

For instance, in human-robot interaction, a gripper is not limited to its role in the manipulation of objects. Rather, it opens a broad array of movements that can enhance the communicative skills of the robot and, consequently, the quality of its possible interactions.
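
As a toy illustration of that point (ours, not the paper’s), the same low-level gripper interface can serve both manipulation and communication:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GripperCommand:
    aperture: float  # 0.0 = closed, 1.0 = fully open
    duration: float  # seconds

# Hypothetical plans: one manipulative, one purely communicative.
GRASP = [GripperCommand(1.0, 0.5), GripperCommand(0.1, 0.5)]      # open, then close on an object
WAVE = [GripperCommand(0.8, 0.2), GripperCommand(0.2, 0.2)] * 3   # rhythmic open/close as a greeting

def execute(plan):
    for cmd in plan:
        print(f"aperture -> {cmd.aperture:.1f} over {cmd.duration}s")

execute(WAVE)
```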

Ultimately, there’s no true road map toward human-level AI. The researchers are attempting to bring the worlds of cognitive and computer science together with engineering and robotics in a way we haven’t seen before.

But, arguably, this is just another attempt to squeeze a miracle out of deep learning technology. Short of a new calculus or class of algorithm, we might be as close to human-level AI agents as traditional reinforcement learning can take us.
