This article was published on June 8, 2018

Google's principles for developing AI aren't good enough



Google CEO Sundar Pichai yesterday published his company's new rules governing the development of AI. Over the course of seven principles he lays out a broad (and useless) policy leaving more wiggle room than a pair of clown pants.

If you tell the story of Google's involvement in building AI for the US military backwards, it makes perfect sense. In such a case, the tale would begin with the Mountain View company creating a policy for developing AI, and then it would use those principles to guide its actions.

Unfortunately, the reality is that the company has been developing AI for as long as it's been around. It's hard to gloss over the fact that only now, after the company's ethics have been called into question over a military contract, is the CEO concerned about having these guidelines.

Of course, this isn't to suggest that it's a company that's been developing AI technology with no oversight. In fact, it's clear that Google engineers, researchers, and scientists are among the world's finest, and many of those employees are of the highest ethical character. But at the company level, it feels like the lawyers are running the show.

No, my point is to suggest that Pichai's blog post is nothing more than a thinly veiled trifle aimed at technology journalists and other pundits in hopes we'll fawn over declarative statements like "Google won't make weapons." Unfortunately, there's no substance to any of it.

It starts with the first principle of Google's new AI policy: be socially beneficial. This section pays lip service to developing AI that benefits society, but doesn't discuss what that means or how the company will accomplish such an abstract principle.

Oddly, the final sentence under principle one is "And we will continue to thoughtfully evaluate when to make our technologies available on a non-commercial basis." That's just word salad with no more depth than saying "Google is a business that will keep doing business stuff."

Instead of "be socially beneficial," I would have much preferred to see something more like "refuse to develop AI for any entity that doesn't have a clear set of ethical guidelines for its use."

Unfortunately, as leaked emails show, Google's higher-ups were more concerned with government certifications than ethical considerations when they entered into a contract with the US government – an entity with no formal ethical guidelines on the use of AI.

In appearance, each of the seven principles laid out by Pichai is a general bullet point that reads like a cover-your-own-ass statement. And each corresponds to a very legitimate concern that the company seems to be avoiding discussing. After the aforementioned first principle, it only gets more vapid:

  2. "Avoid creating or reinforcing unfair bias." This, instead of a commitment to developing methods to fight bias.
  3. "Be built and tested for safety." Pichai says "We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm." It's interesting that Pichai's people don't seem to think there's any risk of unintended consequences in teaching the military how to develop image-processing AI for drones.
  4. "Be accountable to people." Rather than "develop AI with transparency," which would be great, this just says Google will ultimately hold a human responsible for creating its AI.
  5. "Incorporate privacy design principles." Apple just unveiled technology designed to keep big-data companies from gathering your data. Google just said it cares about privacy. Actions speak louder than words.
  6. "Uphold high standards of scientific excellence." Google's research happens inside an internal scientific echo chamber. Numbers 4, 5, and 6 should be replaced with "be transparent."
  7. "Be made available for uses that accord with these principles." In this same document, Pichai points out that Google makes a large amount of its work in AI available as open-source code. It's easy to say you'll only develop AI with the best of intentions and use it only for good, as long as you take no responsibility for how it's used once your company's geniuses finish inventing it.

Pichai's post on Google's AI principles serves little purpose other than, perhaps, to eventually end up as a hyperlinked reference in a future apology.

If Google wants to fix its recently tarnished reputation, it should take the issue of developing AI seriously enough to come up with a realistic set of principles to guide future development – one that addresses the ethical concerns head-on. Its current attempt is nothing more than seven shades of gray area, and that doesn't help anyone.
