
This article was published on May 1, 2019

Revenge of the nerds: Facebook is developing human-level AI to fight bullying


Image by: Anthony Quintano / Flickr

Facebook might be taking its bullying problem a lot more seriously than you think. Rather than hire more humans to flag troubling content, it plans to develop AI with human-level intelligence to do the job.

An official company blog post published today on the subject of content moderation laid out a roadmap for machine learning solutions to bullying on Facebook:

One potential answer is an approach that Facebook Chief AI Scientist, Yann LeCun, has been discussing for years: self-supervision. Instead of relying solely on data that’s been labeled for training purposes by humans — or even on weakly supervised data, such as images and videos with public hashtags — self-supervision lets us take advantage of entirely unlabeled data. The approach is inherently versatile, enabling self-supervised systems to use a small amount of labeled data to generalize to unseen tasks, and potentially bringing us closer to our goal of achieving AI with human-level intelligence.
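
In concrete terms, the recipe is: pretrain on raw, unlabeled posts by hiding parts of the input and training a model to reconstruct them, then fine-tune the learned representations on a small labeled set. Below is a minimal sketch of that two-phase idea, assuming PyTorch; the model, the masking helper, and the toy data are all hypothetical and purely illustrative, not Facebook's actual system.

```python
# A minimal sketch of the two-phase recipe described above, assuming PyTorch.
# ToyEncoder, mask_tokens, and the random toy data are hypothetical; this is
# not Facebook's production system.
import torch
import torch.nn as nn

VOCAB_SIZE, EMBED_DIM, MASK_ID = 1000, 64, 0

class ToyEncoder(nn.Module):
    """Tiny text encoder shared by pretraining and fine-tuning."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.rnn = nn.GRU(EMBED_DIM, EMBED_DIM, batch_first=True)

    def forward(self, tokens):  # (batch, seq) -> (batch, seq, dim)
        out, _ = self.rnn(self.embed(tokens))
        return out

def mask_tokens(tokens, rate=0.15):
    """Self-supervision: hide random tokens; the originals become the labels."""
    mask = torch.rand(tokens.shape) < rate
    return tokens.masked_fill(mask, MASK_ID), tokens, mask

# Phase 1: pretrain on unlabeled text. No human labels are needed; the
# training signal comes entirely from reconstructing the hidden tokens.
encoder, lm_head = ToyEncoder(), nn.Linear(EMBED_DIM, VOCAB_SIZE)
opt = torch.optim.Adam(list(encoder.parameters()) + list(lm_head.parameters()))
loss_fn = nn.CrossEntropyLoss()

unlabeled = torch.randint(1, VOCAB_SIZE, (32, 20))  # stand-in for raw posts
corrupted, targets, mask = mask_tokens(unlabeled)
opt.zero_grad()
loss = loss_fn(lm_head(encoder(corrupted))[mask], targets[mask])
loss.backward()
opt.step()

# Phase 2: fine-tune on a small labeled set (e.g. bullying / not bullying).
clf_head = nn.Linear(EMBED_DIM, 2)
opt2 = torch.optim.Adam(list(encoder.parameters()) + list(clf_head.parameters()))

labeled = torch.randint(1, VOCAB_SIZE, (8, 20))  # only 8 labeled examples
labels = torch.randint(0, 2, (8,))
opt2.zero_grad()
logits = clf_head(encoder(labeled).mean(dim=1))  # mean-pool over the sequence
loss = loss_fn(logits, labels)
loss.backward()
opt2.step()
```

The point of the sketch is the shape of the pipeline: the expensive representation learning in phase one never touches a human label, so the scarce labeled examples are only needed for the much smaller fine-tuning step at the end.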

Taken at face value, “bringing us closer to our goal of achieving AI with human-level intelligence” might seem like a calculated PR remark meant to drum up hype for LeCun’s latest deep learning breakthrough. But the nearly 2,000 words preceding that statement make it clear the company is shifting direction in how it researches and develops solutions.

LeCun is a world-renowned AI expert and a staunch advocate of deep learning. He feels it’s at least part of the path forward for artificial general intelligence (human-level AI) research.

Today’s news indicates the company is backing his vision. According to the blog post, Facebook has made improvements in its AI systems for detecting messages, images, video, and audio that violate its policies, but there’s still work to be done. The social network says the shift to self-supervised learning is necessary:

The majority of our systems today rely on supervised training. This can lead to a range of training challenges, such as a scarcity of training data in some cases, and long training times as we gather and label examples to build new classifiers from scratch. Since new instances of content violations evolve quickly, and events such as elections have become flashpoints for harmful content, we have a responsibility to speed the development of systems that can improve our ability to respond.
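
One practical consequence of that shift, continuing the hypothetical sketch above: once an encoder has been pretrained on unlabeled data, responding to a newly evolving violation type can be as cheap as training a small classification head on top of frozen representations, rather than gathering labels and building a classifier from scratch.

```python
# Continues the hypothetical sketch above (reuses torch, nn, encoder, loss_fn,
# EMBED_DIM, and VOCAB_SIZE from that snippet). Freezing the pretrained encoder
# means only the small head below has to be trained, which is the kind of
# speed-up the blog post is after; the data here is toy data, not a benchmark.
for p in encoder.parameters():
    p.requires_grad = False  # reuse the pretrained representations as-is

new_head = nn.Linear(EMBED_DIM, 2)  # the only trainable part
opt3 = torch.optim.Adam(new_head.parameters())

fresh_examples = torch.randint(1, VOCAB_SIZE, (16, 20))
fresh_labels = torch.randint(0, 2, (16,))
with torch.no_grad():  # features come from the frozen encoder
    features = encoder(fresh_examples).mean(dim=1)
opt3.zero_grad()
loss = loss_fn(new_head(features), fresh_labels)
loss.backward()
opt3.step()
```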

Facebook’s work in the field of self-supervised training has, at times, appeared to be a solution in search of a problem. But we think bullies are a great target for the world’s first human-level artificial intelligence — or at least the algorithms that the talented humans at Facebook will develop in pursuit of AGI.

Human-level intelligence may not even be possible with AI, or with deep learning, but we’ll never know unless researchers like LeCun and the other developers at Facebook continue to pull on these research threads. Neither Rome nor Facebook was built in a day.

You can read all about the company’s breakthroughs in natural language processing, computer vision, and more in the blog post here.
