This article was published on October 3, 2023

Social media has new moderation problems. This AI startup has a solution

Unitary just bagged fresh funding for its content moderation system


Social media is in the crosshairs of European regulators. In both the EU and UK, sweeping new laws now require platforms to identify and remove illegal content — or face severe penalties.

Unitary, a startup based in London, has proposed a solution: an AI-powered moderation tool for videos and images.

The system is designed to analyse multiple signals simultaneously. As a result, Unitary can understand both what the content is and the context in which it appears. Textual, aural, and visual cues are all incorporated into the interpretation.
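How the models work under the hood isn't public, but the underlying idea of weighing several signals at once is simple to sketch. The Python snippet below is a toy illustration under assumed names and weights, not Unitary's actual architecture: hypothetical harm scores from a text, an audio, and a visual model are fused into a single decision score.

    # Toy sketch (not Unitary's actual models): fuse per-modality harm
    # scores into one decision score. Weights are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class ModalityScores:
        text: float    # e.g. score from an on-screen-text classifier, 0-1
        audio: float   # e.g. score from a speech-toxicity model, 0-1
        visual: float  # e.g. score from a frame-level image classifier, 0-1

    def fuse(s: ModalityScores, w=(0.3, 0.2, 0.5)) -> float:
        """Weighted combination of the three per-modality scores."""
        return w[0] * s.text + w[1] * s.audio + w[2] * s.visual

    # A clip with benign visuals but abusive audio still registers,
    # because the fused score reflects every signal, not just the imagery.
    clip = ModalityScores(text=0.1, audio=0.9, visual=0.2)
    print(f"harm score: {fuse(clip):.2f}")  # prints: harm score: 0.31

In practice a system like this would learn the combination jointly rather than hand-weighting it; the point is only that context from all three channels feeds a single decision.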

According to Unitary CEO Sasha Haco, the tool comprehends content in much the same way that you and I do.

“We have built models with nuanced understanding, which are able to mirror the approach a human would take to comprehending content, and make complex decisions,” Haco told TNW.

It’s an approach that’s attracting growing interest from investors. Unitary today announced that it’s raised $15 million in a Series A funding round.

The new cash injection comes at an opportune moment.

New rules

Just weeks ago, the UK passed the Online Safety Bill (OSB), which requires social media firms to keep illegal material off their platforms. Those that fail to take effective action face fines of up to £18mn (€20.8mn) or 10% of their global annual revenue, whichever is greater.

In addition to the OSB, companies now have to comply with the EU’s Digital Services Act, another content moderation regulation. The law came into force last month.

The two rule books have expanded the market for Unitary. By using the machine learning system, companies can quickly scan and categorise online content.

Any risky videos and images can then be flagged for action. Unitary can also ensure that content reaches the right audience — which could satisfy OSB rules on child protection.

To identify prohibited material, Unitary’s customers send content to an API. The company’s multi-modal models then analyse the imagery. Finally, the system returns a classification to the client.

The customer determines the next move. They may choose to remove a piece of content, restrict the audience, or warn a user.

“Every platform has its own risk appetite — and depending on their needs, they may be prioritising false positives over false negatives, or vice versa,” Haco said. “We work closely with our customers to make sure we’re meeting their specific set of needs.”
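Concretely, a client integration along the lines described above might look like the following sketch. Everything here is an assumption for illustration: the endpoint URL, the request shape, and the harm_score response field are hypothetical, not Unitary's documented API. The threshold parameter stands in for the risk-appetite trade-off Haco describes.

    # Hypothetical moderation-API client: send content, receive a
    # classification, apply a platform-specific threshold. Endpoint and
    # field names are placeholders, not Unitary's real API.
    import requests

    API_URL = "https://api.example.com/v1/classify"  # placeholder endpoint

    def moderate(content_url: str, threshold: float = 0.8) -> str:
        """Return an action for one piece of content.

        A lower threshold catches more harm but produces more false
        positives; a higher one does the reverse. Each platform tunes
        this to its own risk appetite.
        """
        resp = requests.post(API_URL, json={"url": content_url}, timeout=30)
        resp.raise_for_status()
        score = resp.json()["harm_score"]  # assumed 0-1 response field

        if score >= threshold:
            return "remove"
        if score >= threshold / 2:
            return "restrict_audience"
        return "allow"

A platform prioritising recall over precision would simply lower the threshold, accepting more false positives in exchange for fewer misses.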

Those needs are set to grow.

The video boom

Video, which already makes up 80% of internet traffic, is predicted to drive a tenfold increase in the quantity of information online between 2020 and 2025. Human reviewers aren't equipped to cope with the scale, or the horrors, of moderating all that content.

“We definitely think that AI is going to be the future of content moderation,” Haco said. “Generative AI is going to fundamentally alter the landscape of trust and safety — in terms of the types of harm, the volume of content, and methods for successful moderation.”

As the content grows and the laws around it tighten, Unitary's client base is also expanding. As well as social media firms, the company currently works with ad verification platforms, influencer marketing companies, and dating apps.

Haco, who cofounded Unitary in 2019 alongside CTO James Thewlis, envisions her startup becoming a foundational service.

“Long term, we see ourselves as a horizontal content understanding layer across the internet. If Cloudflare is the middleware layer providing speed and security of traffic, Unitary is providing the safety and understanding of content.”
