Better artificial intelligence is often associated with the detriment of a human workforce, but judging by some new practices by the team at Wikipedia HQ, it doesn’t have to be that way.
In a report over at Wired, computer scientist Aaron Halfaker describes how Wikipedia recently began implementing an AI system he designed to detect vandalism and bogus edits on articles using machine learning – it can identify common patterns in vandalistic edits, like a tendency for improper character spacing.
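As a toy illustration only (not Wikipedia's actual system, which relies on a far richer machine-learned model), an edit-screening tool might flag the kind of spacing irregularities mentioned above with simple handcrafted features; the feature names and thresholds here are hypothetical:

```python
import re

def spacing_features(text):
    """Extract simple character-spacing features from an edit's text.
    These features are invented for illustration; a real classifier
    would learn weights over many more signals."""
    words = text.split()
    # Count unusually long runs of characters with no spaces
    long_runs = sum(1 for w in words if len(w) > 20)
    # Count stretched-out repeated characters, e.g. "loooool"
    repeats = len(re.findall(r'(.)\1{3,}', text))
    return {"long_runs": long_runs, "repeated_chars": repeats}

def looks_vandalous(text):
    """Crude rule-of-thumb check; thresholds are arbitrary."""
    f = spacing_features(text)
    return f["long_runs"] >= 1 or f["repeated_chars"] >= 1

print(looks_vandalous("This is a normal, well-spaced sentence."))
print(looks_vandalous("thisisonegiantrunofcharacterswithnospacing loooooool"))
```

A real system would feed features like these, alongside many others, into a trained classifier rather than hard-coded thresholds.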
While on one hand that means less work for volunteers who look out for nefarious changes, Wikipedia believes the change will help bring in a swarm of new editors.
It’s all about removing the barrier to entry. Because Wikipedia crowdsources its articles, it has to enforce strict rules on who can make changes to major documents in order to prevent articles from being vandalized. The other side of the coin is that this discourages many people with good intentions and solid information.
The hope is that by having smarter AI detection of bogus articles and edits, more manpower will be devoted to legitimate content. When the policing machines are smart enough, Wikipedia could then relax its rules for newcomers a bit.
Still, some will worry that the machines could one day get smart enough to replace a significant portion of human editors.