
This article was published on April 8, 2021


Intel’s new AI helps you get just the right amount of hate speech in your game chat

I guess every problem looks like something a toggle can address when you're a microprocessor manufacturer

Story by Tristan Greene, Editor, Neural by TNW

Tristan covers human-centric artificial intelligence advances, politics, queer stuff, cannabis, and gaming. Pronouns: He/him

The Intel microprocessor company was founded in 1968. It’s bushwhacked a trail of technology and innovation in the decades since to become one of the leading manufacturers of computer chips worldwide.

But never mind all that. Because we live in a world where Kodak is a failed cryptocurrency company that’s now dealing drugs and everyone still thinks Elon Musk invented the tunnel.

Which means that here in this, the darkest timeline, we’re stuck with the version of Intel that uses AI to power “White nationalism” sliders and “N-word” toggles for video game chat.

Behold ‘Bleep,’ in all its stupid glory:

What you’re seeing is the UI for Bleep, an AI-powered software solution featuring a series of sliders and toggles that let you determine how much hate speech, in a given category, you want to hear when you’re chatting with people in multiplayer games.

If you’re wondering what or who this is for: join the crowd. This feels like the kind of solution you get when you apply the “there are no bad ideas” and “failure is not an option” philosophies in equal parts to a problem you have no business addressing in the first place.

To be clear, I’m saying: even if it worked perfectly, Bleep is just a Rube Goldberg machine that replaces your mute button. Censoring the potty words doesn’t help anyone when the surrounding context makes the speaker’s intention clear anyway.

Hate speech isn’t a magic invocation we must treat like “He Whose Name We Do Not Say.” It’s a problem that needs to be addressed at social levels far beyond anything Intel can solve with deep learning. And, furthermore, I don’t think it’s going to work.

I think Intel’s AI division is incredible and they do amazing work. But I don’t think you can solve or even address hate speech with natural language processing. In fact, I believe it’s sadly ironic that anyone would try.

AI speech recognition is biased against any accent that doesn’t sound distinctly white and American or British. Couple that fact with these tidbits: humans struggle to identify hate speech in real time, hate speech evolves at the speed of memes, and there is no acceptable level of hate speech that a company should permit (and thus tacitly endorse) through an interactive software interface.

The time Intel spent developing and training an AI to determine how much hate speech directed at various minority groups crossed the line between “none,” “some,” “most,” and “all,” or teaching it to detect the “N-word and all its variants,” could have been spent doing something more constructive.

Bleep, as a solution, is an insult to everyone’s intelligence.

And we’re never going to forget it. Perhaps Intel’s never heard of this thing called social media where, from now until the end of time, we’ll see images from the Bleep interface used to de-nuance the discourse on racial injustice.

So, thanks for that. At least the UI looks good. 
