OpenAI’s new Debate Game teaches machines how to argue and lie in order to get what they want. And you can play it with your friends even if none of you are robots.
What it is: OpenAI, a non-profit co-founded by Elon Musk, is developing a new safety technique for deep learning that requires a system to debate itself for the benefit of a human or machine judge.
Why it matters: AI is often deployed to process data at scales and speeds that humans simply can’t keep up with. Often we use AI to help us make decisions where there isn’t enough information for us to make a confident choice on our own. Finding ways to safely work with AI that knows more than we do, but still needs our guidance, is a major concern for developers.
The human version of Debate Game was designed to explain how a judge with less information than two ‘debaters’ can still make informed decisions.
To set up a game, players choose a theme, such as cats vs. dogs, and then upload an image that the judge can’t see. The debate begins with the debaters flipping a coin to determine who will tell the truth and who will lie. They then take turns revealing small rectangular regions of the image, and during each turn the debater tries to convince the judge that they are the one telling the truth.
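The turn structure described above is simple enough to sketch in a few lines. This is purely an illustrative simulation of the human game's mechanics (coin flip, alternating reveals, a judge who sees only the transcript), not OpenAI's implementation; all names and the 28×28 image size are assumptions.

```python
import random

def play_debate(rounds=3, seed=0):
    """Simulate the turn structure of the human Debate Game.

    Hypothetical sketch: the coin flip assigns the honest role,
    then the two debaters alternate revealing small rectangles
    of a hidden image. The judge never sees the image itself,
    only the transcript of reveals and arguments.
    """
    rng = random.Random(seed)
    # Coin flip: one debater must tell the truth, the other lies.
    honest = rng.choice(["debater_1", "debater_2"])
    transcript = []
    for _ in range(rounds):
        for debater in ("debater_1", "debater_2"):
            # Each turn, reveal a small rectangle of the hidden image
            # (placeholder coordinates on an assumed 28x28 image).
            x, y = rng.randrange(0, 24), rng.randrange(0, 24)
            rect = (x, y, x + 4, y + 4)  # 4x4 patch
            transcript.append((debater, rect))
    # The judge rules based on the transcript alone.
    return honest, transcript

honest, transcript = play_debate()
```

After the final turn, a real judge would weigh the revealed patches and each debater's claims before naming a winner; here the transcript is just returned for inspection.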
OpenAI sees the need for better systems and, while this technique is currently more of a proof-of-concept than a solution, it’s a good start. Today’s machines are already performing tasks too dense for most people to completely understand; this technique gives us a way to remain in charge.