This article was published on October 24, 2017

US government is clueless about AI and shouldn’t be allowed to regulate it


Lobbyists for Google and Amazon today appeared in Washington to caution lawmakers against legislation, taxation, or regulation that could hamper the development of AI in America.

The concern over AI has been largely fueled by speculation. When Elon Musk warned that AI would be the most likely cause of World War III, he was predicting a dark future, not describing the present. The reality of AI right now isn’t quite so dramatic, but it’s no less worrisome.

Facebook and Google have been called to task over algorithms in the wake of election tampering and concerns about fake news. The algorithm – a set of instructions for a computer that tells it how to interpret data and make decisions – has taken a beating in the media lately.

It’s possible we take for granted the billions of times AI has served our interests without fail, focusing instead on the bias we’ve allowed to creep into our machine-learning code. The answer to these problems doesn’t lie in restricting the growth of AI, which is what regulation, at this point, would likely do.

Regulation could destroy America’s chances in the AI race – a sprint in which it doesn’t have a head start, thanks to China’s all-in policy. If the Trump administration sees fit to place restrictions on AI development that hamper Silicon Valley’s ability to compete with Beijing, it’ll lose more than just market share. It could lose military superiority over countries like China and Russia.

We can’t allow the robots to run amok without clear guidance either, which is why common sense has to prevail. Worrying about the singularity is a bit like obsessing over the end of days; society is better served by concerning itself with directing progress.

Instead of regulating something that your average politician – and person – doesn’t have the experience or education to understand, we should allow companies to operate within the limits of the law.

The nature of machine learning means that the problems of bias in code may take a while to work out. Developers are aware of the problems, but we’re still in the early days of the technology, and it takes time to fix problems we’ve never, as a society, dealt with before.

That doesn’t mean the government should regulate the development of AI with potentially stifling rules; instead, it could simply opt out.

The AI Now Institute, whose membership includes researchers from both Microsoft and Google (in roles separate from their work at those companies), said in a blog post:

Core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education (e.g “high stakes” domains) should no longer use ‘black box’ AI and algorithmic systems.

US government agencies can choose not to use black-box AI – algorithms that make decisions we can’t explain – and instead encourage the development of more robust systems that meet specific ethical criteria.

America’s answer to the AI problem isn’t regulation; it’s ethics. The nation already has a blueprint in the Department of Transportation’s approach to driverless car technology.

The US can’t afford to risk throwing the baby out with the bathwater when it comes to concerns over algorithms.

 
