
Google CEO Sundar Pichai yesterday published his company's new rules governing the development of AI. Over the course of seven principles he lays out a broad (and useless) policy leaving more wiggle room than a pair of clown pants.
If you tell the story of Google's involvement in building AI for the US military backwards, it makes perfect sense. In that telling, the tale begins with the Mountain View company creating a policy for developing AI, then using those principles to guide its actions.
Unfortunately, the reality is that the company has been developing AI for as long as it's been around. It's hard to gloss over the fact that only now, after the company's ethics have been called into question over a military contract, is the CEO concerned about having these guidelines.
Of course, this isn't to suggest that Google has been developing AI technology with no oversight. In fact, it's clear that Google engineers, researchers, and scientists are among the world's finest, and many of those employees are of the highest ethical character. But at the company level, it feels like the lawyers are running the show.
No, my point is that Pichai's blog post is nothing more than a thinly veiled trifle aimed at technology journalists and other pundits, in hopes we'll fawn over declarative statements like "Google won't make weapons." Unfortunately, there's no substance to any of it.
It starts with the first principle of Google's new AI policy: be socially beneficial. This section pays lip service to developing AI that benefits society, but doesn't discuss what that means or how the company will accomplish such an abstract principle.
Oddly, the final sentence under principle one is "And we will continue to thoughtfully evaluate when to make our technologies available on a non-commercial basis." That's just word salad, with no more depth than saying "Google is a business that will keep doing business stuff."
Instead of "be socially beneficial," I would have much preferred to see something more like "refuse to develop AI for any entity that doesn't have a clear set of ethical guidelines for its use."
Unfortunately, as leaked emails show, Google's higher-ups were more concerned with government certifications than ethical considerations when they entered into a contract with the US government, an entity with no formal ethical guidelines on the use of AI.
On the surface, each of the seven principles laid out by Pichai is a general bullet point that reads like a cover-your-own-ass statement. And each corresponds to a very legitimate concern that the company seems to be avoiding. After the aforementioned first principle, it just gets more vapid:
- "Avoid creating or reinforcing unfair bias." This, instead of a commitment to developing methods to fight bias.
- "Be built and tested for safety." Pichai says, "We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm." It's interesting that Pichai's people don't seem to think there's any risk of unintended consequences in teaching the military how to develop image-processing AI for drones.
- "Be accountable to people." Rather than "develop AI with transparency," which would be great, this just says Google will ultimately hold a human responsible for creating its AI.
- "Incorporate privacy design principles." Apple just unveiled technology designed to keep big data companies from gathering your data. Google just said it cares about privacy. Actions speak louder than words.
- "Uphold high standards of scientific excellence." Google's research happens inside an internal scientific echo chamber. Principles 4, 5, and 6 should be replaced with "be transparent."
- "Be made available for uses that accord with these principles." In this same document, Pichai points out that Google makes a large amount of its work in AI available as open-source code. It's easy to say you'll only develop AI with the best of intentions and use it only for good, as long as you take no responsibility for how it's used once your company's geniuses finish inventing it.
Pichai's post on Google's AI principles serves little purpose other than to, perhaps, eventually end up as a hyperlinked reference in a future apology.
If Google wants to fix its recently tarnished reputation, it should take the issue of developing AI seriously enough to come up with a realistic set of principles to guide future development, one that addresses the ethical concerns head-on. Its current attempt is nothing more than seven shades of gray area, and that doesn't help anyone.