This article was published on September 1, 2022

Forget chess, DeepMind’s training its new AI to play football

This is cool, but what's the AI's rank in FIFA 22?


Researchers from DeepMind, the UK’s juggernaut AI lab, have forsaken the noble games of chess and Go for a more plebeian delight: football.

The Google sister company yesterday published a research paper and accompanying blog post detailing its new neural probabilistic motor primitives (NPMP) — a method by which artificial intelligence agents can learn to operate physical bodies.

Per the blog post:

An NPMP is a general-purpose motor control module that translates short-horizon motor intentions to low-level control signals, and it’s trained offline or via RL by imitating motion capture (MoCap) data, recorded with trackers on humans or animals performing motions of interest.

Up front: Essentially, the DeepMind team created an AI system that can learn how to do things inside a physics simulator by imitating motion-capture recordings of humans and animals performing those tasks.
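To make that a little more concrete, here's a rough sketch of the two-stage idea in PyTorch: a low-level decoder learns offline to reproduce mocap motion from a compact "motor intention," so that a higher-level policy later only has to emit short-horizon intentions instead of raw joint commands. Every name, network size, and toy tensor below is our own illustrative assumption, not DeepMind's actual code.

```python
# Hypothetical sketch of an NPMP-style controller: (1) a decoder is trained
# offline to reproduce mocap clips from a compact latent "motor intention",
# (2) a high-level policy can later reuse that decoder and only emit
# short-horizon intentions. All names and sizes here are illustrative.

import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, LATENT_DIM, HORIZON = 32, 8, 16, 5  # assumed toy sizes

class MotorDecoder(nn.Module):
    """Maps (proprioceptive observation, latent intention) -> low-level action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + LATENT_DIM, 128), nn.ReLU(),
            nn.Linear(128, ACT_DIM),
        )

    def forward(self, obs, intention):
        return self.net(torch.cat([obs, intention], dim=-1))

class IntentionEncoder(nn.Module):
    """Compresses a short window of future mocap states into a latent intention."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(HORIZON * OBS_DIM, 128), nn.ReLU(),
            nn.Linear(128, LATENT_DIM),
        )

    def forward(self, future_states):
        return self.net(future_states.flatten(start_dim=-2))

encoder, decoder = IntentionEncoder(), MotorDecoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

# Toy imitation step: random tensors stand in for mocap observations, the
# expert actions they imply, and the short-horizon future reference motion.
obs = torch.randn(64, OBS_DIM)
expert_action = torch.randn(64, ACT_DIM)
future_ref = torch.randn(64, HORIZON, OBS_DIM)

intention = encoder(future_ref)
loss = nn.functional.mse_loss(decoder(obs, intention), expert_action)
opt.zero_grad()
loss.backward()
opt.step()
print(f"imitation loss: {loss.item():.3f}")
```

The payoff of this split, per the blog post, is that the decoder constrains the agent to human-plausible movement, so the later football training only has to search over intentions rather than every possible joint wiggle.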

And, of course, if you’ve got a giant physics engine and an endless supply of curious robots, the only rational thing to do is to teach them how to dribble and shoot:

According to the team’s research paper:

We optimized teams of agents to play simulated football via reinforcement learning, constraining the solution space to that of plausible movements learned using human motion capture data.

Background: In order to train AI to operate and control robots in the world, researchers have to prepare the machines for reality. And, outside of simulations, anything can happen. Agents have to deal with gravity, unexpectedly slippery surfaces, and unplanned interference from other agents.

The point of the exercise isn’t to build a better footballer — Cristiano Ronaldo has nothing to fear from the robots, for now — but instead to help the AI and its developers figure out how to optimize the agents’ ability to predict outcomes.

As the AI starts its training, it’s barely able to move its physics-based humanoid avatar around the field. But, by rewarding an agent every time its team scores a goal, the model is able to get the figures up and running in around 50 hours. After several days of training, the AI begins to predict where the ball will go and how the other agents will react to its movement.
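For a sense of how sparse that training signal is, here's a minimal REINFORCE-style sketch: the policy is rewarded only on the rare steps where its team scores, and that is the only feedback it gets. The toy environment, the single-agent setup, and every name below are illustrative assumptions on our part, not DeepMind's multi-agent system.

```python
# Minimal REINFORCE sketch of learning from a sparse "goal scored" reward.
# The fake environment outcome (a 5% chance of scoring per step) is a stand-in
# for the physics simulation; nothing here is DeepMind's actual setup.

import torch
import torch.nn as nn

OBS_DIM, N_INTENTIONS = 16, 4  # assumed toy sizes

policy = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, N_INTENTIONS))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def rollout(episode_len=50):
    """Play one toy episode; reward is 1 only on steps where 'our team scores'."""
    log_probs, rewards = [], []
    for _ in range(episode_len):
        obs = torch.randn(OBS_DIM)                      # stand-in for game state
        dist = torch.distributions.Categorical(logits=policy(obs))
        intention = dist.sample()                       # discrete motor intention
        log_probs.append(dist.log_prob(intention))
        scored = torch.rand(()) < 0.05                  # pretend physics outcome
        rewards.append(1.0 if scored else 0.0)
    return torch.stack(log_probs), torch.tensor(rewards)

for episode in range(200):
    log_probs, rewards = rollout()
    ret = rewards.sum()                                 # undiscounted episode return
    loss = -(log_probs * ret).mean()                    # REINFORCE objective
    opt.zero_grad()
    loss.backward()
    opt.step()
    if episode % 50 == 0:
        print(f"episode {episode}: return {ret.item():.1f}")
```

In the real work, that goal reward is shared across a team of agents and the low-level mocap-trained controller from above is reused, which is why coordinated passing and positioning can emerge from such a simple signal over days of training.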

Per the paper:

The result is a team of coordinated humanoid football players that exhibit complex behavior at different scales, quantified by a range of analysis and statistics, including those used in real-world sport analytics. Our work constitutes a complete demonstration of learned integrated decision-making at multiple scales in a multiagent setting.

Quick take: This work is pretty rad. But we’re not so sure it represents a “complete demonstration” of anything. The model is obviously capable of operating an embodied agent. But, based on the apparently cherry-picked GIFs on the blog post, this work is still deeply in the simulation phase.

The bottom line here is that the AI isn’t “learning” how to play football. It’s brute-forcing movement within the boundaries of its simulation. That may seem like a minor quibble, but the results are quite evident:

[GIF of a simulated humanoid agent running. Credit: DeepMind]

The above AI agent looks absolutely terrified. I don’t know what it’s running away from, but I’m certain that it’s the scariest thing ever.

It moves like an alien wearing a human suit for the first time because, unlike humans, AI cannot learn by watching. Systems like the one DeepMind trained parse thousands of hours of video and, essentially, extract motion data about the subject they’re trying to “learn” from.

However, it’s almost certain that these models will become more robust as time goes on. We’ve seen what Boston Dynamics can do with machine learning algorithms and pre-programmed choreography.

It’ll be interesting to see how more adaptive models, such as the ones being developed by DeepMind, will fare once they move beyond the laboratory environment and into actual robotics applications.
