Alphabet and Google CEO Sundar Pichai wrote an op-ed in the Financial Times today outlining the need for regulating AI. Pichai highlighted that deepfakes and “repressive uses of facial recognition” are of great concern at the moment.
He added that companies like Google can't just build technology and leave its use unregulated:
Companies such as ours cannot just build promising new technology and let market forces decide how it will be used. It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone.
In 2018, Google published its AI principles outlining its moral values in the field. These guidelines were also formed after the company faced severe backlash for chasing a contract to develop AI tools for the US military.
As my colleague Tristan Greene noted at the time, these principles said little about transparency or about taking responsibility for the models and algorithms the company develops.
While Pichai’s calls for government regulation of AI are warranted, it’s worth noting that government agencies take a long time to formulate rules and regulations — and so may be too late to tackle certain issues in the rapidly growing AI space. For example, deepfake technology burst into the limelight last year, and to this day authorities and companies are still scrambling to regulate it.
As many have pointed out in the past, it would be a good time to form an independent watchdog of experts in the field to keep a close watch over advancements and call out malpractice by companies.
While signaling an intent to regulate AI is a good start, Pichai and other heavyweights in the tech industry need to come up with more concrete suggestions.
You can read Pichai’s op-ed here (paywall).