Advances in artificial intelligence are often associated with threats to the human workforce, but judging by some new practices from the team at Wikipedia HQ, it doesn’t have to be that way.
In a report over at Wired, computer scientist Aaron Halfaker describes how Wikipedia recently began implementing an AI system he designed that uses machine learning to detect vandalism and bogus edits to articles – it can identify patterns common to vandalizing edits, like a tendency toward improper character spacing.
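To give a sense of how this kind of detection works, here is a toy sketch in Python – not Wikipedia's actual system, and with hand-picked weights standing in for what a trained model would learn – that scores an edit on simple text features like the spacing pattern mentioned above:

```python
import re

def edit_features(text):
    """Extract simple signals a vandalism classifier might weigh."""
    words = text.split()
    return {
        # Long runs of letters with no spaces, e.g. "impropercharacterspacing"
        "long_unspaced_run": any(len(w) > 25 for w in words),
        # Shouting: fraction of characters that are uppercase
        "caps_ratio": sum(c.isupper() for c in text) / max(len(text), 1),
        # Repeated characters like "loooool"
        "char_repetition": bool(re.search(r"(.)\1{4,}", text)),
    }

def vandalism_score(text):
    """Combine features into a 0-1 score (weights are illustrative only)."""
    f = edit_features(text)
    score = 0.0
    score += 0.5 * f["long_unspaced_run"]
    score += 0.3 * min(f["caps_ratio"] * 2, 1.0)
    score += 0.4 * f["char_repetition"]
    return min(score, 1.0)

print(vandalism_score("The city was founded in 1851."))         # low score
print(vandalism_score("THISARTICLEISTOTALLYWRONGlolllllllll"))  # high score
```

A real system would learn those weights from thousands of human-labeled edits rather than hard-coding them, and would draw on many more signals (editor history, which sections changed, and so on), but the core idea is the same: turn an edit into features and score how suspicious it looks.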
While on one hand that means less work for the volunteers who look out for nefarious changes, Wikipedia believes the change will also help bring in a swarm of new editors.
It’s all about lowering the barrier to entry. Because Wikipedia crowdsources its articles, it has to enforce strict rules on who can make changes to major documents in order to prevent vandalism. The other side of the coin is that those rules discourage many folks with good intentions and solid information.
The hope is that by having smarter AI detection of bogus articles and edits, more manpower will be devoted to legitimate content. When the policing machines are smart enough, Wikipedia could then relax its rules for newcomers a bit.
Still, some will worry that the machines may one day get smart enough to replace a significant portion of human editors altogether.