
How Facebook’s Yann LeCun is charting a path to human-level artificial intelligence

Facebook's chief AI scientist told TNW about his work at FAIR, the social network's research lab

When Yann LeCun founded the Facebook AI Research (FAIR) lab in 2013, artificial intelligence was entering a boom period that his research helped trigger.  

Facebook’s chief AI scientist had been among a group of computer scientists who retained faith in deep neural networks during an “AI winter” of reduced funding and interest in the field. In 2019, his efforts earned him a share of the Turing Award, together with his friends Yoshua Bengio and Geoffrey Hinton. 

Today, AI is an essential component of Facebook’s vast array of applications, touching everything from Messenger to content moderation.

“You take AI out of Facebook, and basically the services crumble,” LeCun tells TNW.

But fears are now emerging that another winter will soon arrive if AI can’t live up to its current hype, particularly around the promise of artificial general intelligence (AGI): the idea that a machine can perform any intellectual task a human can — and many that they can’t.

LeCun is not a fan of the term. He’s previously argued that “there is no such thing as AGI” because “human intelligence is nowhere near general.” However, he is keenly pursuing “human-level AI.” His chosen technique for reaching it is self-supervised learning.

In supervised learning, people painstakingly label data and then feed it to an algorithm to teach it what to look for when solving a problem. But in self-supervised learning — often confused with unsupervised learning — there’s no need for human annotation. Instead, the system generates signals from the data and uses them to train itself.
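The core idea can be sketched in a few lines. In this illustrative example (not FAIR's actual pipeline), a training signal is manufactured from raw text by hiding one word at a time and asking a model to predict it from the surrounding context — no human labels involved:

```python
# Minimal sketch of the self-supervised idea: derive training signals
# from raw, unlabeled data by hiding part of it and asking the model
# to predict the hidden part. (Illustrative only -- not FAIR's code.)

def masked_examples(tokens):
    """Turn one unlabeled sentence into many (input, target) pairs."""
    examples = []
    for i, target in enumerate(tokens):
        context = tokens[:i] + ["[MASK]"] + tokens[i + 1:]
        examples.append((context, target))
    return examples

sentence = "the cat sat on the mat".split()
pairs = masked_examples(sentence)

# Every pair is a free supervised example -- no annotation needed.
for context, target in pairs[:2]:
    print(" ".join(context), "->", target)
```

A six-word sentence yields six training pairs for free; scale that to billions of posts and the appeal of the technique becomes clear.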

It’s already become an integral part of Facebook’s work on hate speech detection, language translation, and forecasting the spread of COVID-19.

LeCun compares the method to how babies learn by interacting with their surroundings. If machines can better replicate that approach, he believes they could gain the common sense they need to reach human-level intelligence:

Once we have a working methodology for that, we’ll have a tool that enables a machine to learn enormous amounts of knowledge about how the world works from physical reality — just by observing the world. They’d be able to learn world models that are predictive, which is essential to intelligence.

FAIR origins

FAIR began life after Mark Zuckerberg and Facebook CTO Michael Schroepfer identified AI as critical to the company’s long-term future.

“They were 100% right about this,” says LeCun, now Facebook’s chief AI scientist.

They decided to create a new research lab from scratch — and handpicked LeCun to lead it.

“What attracted me was there was a lot of opportunity.  But also, I had somewhat of a carte blanche to organize the lab in the way I thought would be the most successful.”

LeCun immediately made open research the cornerstone of his plans. FAIR now publishes almost all of its work, and open sources the majority of its code, datasets, and tools — such as PyTorch, a toolkit for quickly creating new machine learning models.

The motivation behind this approach isn’t purely altruistic. It allows Facebook to influence what researchers work on, foster collaborations with academia and industry, and attract talent to FAIR.

“The currency for a scientist is his or her intellectual impact on the community,” says LeCun. “So if you want to hire the best scientists in the world, and you tell them you can come here but you can’t talk about what you do — they’re not going to join.”

But isn’t he worried that Facebook‘s rivals could steal the lab’s secrets?

“That’s fine. Why would it be bad? The value of a lot of the technology we produce is multiplied by a coefficient that is basically Facebook’s ability to deploy them in its services. It’s much more difficult — even if it’s open source — for another company to deploy it in ways that would compete directly with us.”

On the contrary, sharing the research helps Facebook improve its own products:

The main problem we need to solve is not whether we are a few months ahead of Google in a particular piece of technology — because it’s never more than a few months — it’s more that we don’t have the science or the technology that we need for the stuff that we want to build. So we need to help the community advance as much as possible to open research. The fact that other people use it is irrelevant.

AI advances and challenges

Much of the fundamental research behind the AI in Facebook products now takes place at FAIR. Among its most impactful creations are Memory Networks, which improve how machines talk to people by helping them retain enough data to answer general knowledge questions. An influential 2014 paper showed how the approach could answer questions about the plot of the Lord of the Rings.
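The retrieval idea at the heart of Memory Networks can be caricatured in a few lines. The real model scores memories against a question with learned embeddings; this toy sketch (not the published architecture) substitutes simple word overlap to show the store-then-retrieve pattern:

```python
# Toy sketch of the memory-network retrieval idea: store facts in an
# explicit memory, score each against a question, and answer from the
# best match. The real model uses learned embeddings for scoring; this
# illustration uses word overlap instead.

def answer(question, memories):
    """Return the stored fact sharing the most words with the question."""
    q_words = set(question.lower().rstrip("?").split())

    def overlap(fact):
        return len(q_words & set(fact.lower().rstrip(".").split()))

    return max(memories, key=overlap)

facts = [
    "Frodo took the ring.",
    "Sam followed Frodo.",
    "Gandalf fought the Balrog.",
]
print(answer("Who took the ring?", facts))
```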

“DeepMind was working on a very similar idea exactly simultaneously,” LeCun recalls. “We posted a paper on arXiv, and then three days later DeepMind posted their paper, because they didn’t want to be completely scooped.”

At that time, DeepMind was one of the driving forces behind London’s emergence as a global AI hotbed. FAIR had initially planned to join them by launching its first European lab in the city. But after studying the availability of talent, the company pivoted to Paris.

“It was a more difficult turf to enter, whereas continental Europe was completely open,” says LeCun. “There was essentially no ambitious fundamental research lab in AI or even in information technology in continental Europe really.”

The lab has since become one of FAIR’s largest AI research centers and the birthplace of many of its computer science breakthroughs. Its recent innovations include Facebook AI Similarity Search (FAISS), a tool for quickly finding videos, text, or images that are similar to each other. The system can be used to recommend Instagram posts or to detect extremist propaganda videos that have been tweaked and then reposted to evade removal.
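The principle behind FAISS — embed items as vectors, then treat “similar” as “nearby in that space” — can be sketched with NumPy. This is an illustrative toy, not the FAISS API; FAISS adds compressed indexes and GPU acceleration to make the same search work on billions of vectors:

```python
import numpy as np

# Toy nearest-neighbor search illustrating what FAISS does at scale:
# items are embedded as vectors, and "similar" means close in that
# space. (Sketch only; not the FAISS library's API.)

rng = np.random.default_rng(0)
database = rng.standard_normal((1000, 64))  # 1000 item embeddings, dim 64

def search(query, db, k=5):
    """Return indices of the k database vectors closest to query (L2)."""
    dists = np.linalg.norm(db - query, axis=1)
    return np.argsort(dists)[:k]

# A slightly perturbed copy of item 42 should still retrieve item 42,
# which is how a tweaked-and-reposted video can be matched to the
# original it was derived from.
query = database[42] + 0.01 * rng.standard_normal(64)
print(search(query, database))
```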

FAIR has also helped Facebook use AI to find hate speech on the platform. But these systems have been criticized for failing to detect abusive content in every language users post in.

Self-supervised learning is expanding the linguistic range of these tools, but LeCun believes the technique will only reach its full potential once it can reason like a human:

I think that once we’ve broken through that brick wall, we’ll make a significant advance in the capabilities of AI systems. But the thing is, there’s probably going to be a brick wall right behind it that we don’t know about right now — because it’s hidden from us. So we don’t know how many brick walls we have to go through to get to rat-level intelligence, cat-level intelligence, human-level intelligence.

We don’t know how long it’s going to take us to break through this first wall. It could be next year, it could be five years from now, it could be 10 years, it could be 20 years. There’s good hope it’s going to happen soon, but how do we know? And then we’ll most likely encounter other obstacles that we don’t realize exist.


Published August 14, 2020 — 19:24 UTC