In her 2019 book The Age of Surveillance Capitalism, Shoshana Zuboff recalls the response to the launch of Google Glass in 2012. She describes public horror, as well as loud protestations from privacy advocates deeply concerned that the product’s undetectable recording of people and places threatened to eliminate “a person’s reasonable expectation of privacy and/or anonymity.”
Zuboff describes the product:
Google Glass combined computation, communication, photography, GPS tracking, data retrieval, and audio and video recording capabilities in a wearable format patterned on eyeglasses. The data it gathered — location, audio, video, photos, and other personal information — moved from the device to Google’s servers.
At the time, campaigners warned of a potential chilling effect on the population if Google Glass were ever paired with new facial recognition technology, and in 2013 a congressional privacy caucus asked then-Google CEO Larry Page for assurances about privacy safeguards for the product.
Eventually, after visceral public rejection, Google parked Glass in 2015 with a short blog post announcing that it would be working on future versions. And although a follow-up consumer Glass never materialized, the product didn’t ride off into the sunset as some had predicted. Instead, Google took the opportunity to regroup and redirect, unwilling to turn its back on the chance to harvest valuable swathes of what Zuboff terms “behavioral surplus data,” or to cede the wearables turf to a rival.
As its next move, Google publicly announced the Glass Enterprise Edition in 2017, in what Zuboff calls a “tactical retreat into the workplace.” The workplace is the gold standard of environments in which invasive technologies are habituated and normalized: there, wearable technologies can be authentically useful tools (rather than luxury items), and are therefore treated with less scrutiny than the same technologies in public spaces. As Zuboff quips: “Glass at work was most certainly the backdoor to Glass on our streets”, adding:
The lesson of Glass is that when one route to a supply source [of behavioral data] encounters obstacles, others are constructed to take up the slack and drive expansion.
This kind of expansionism should certainly be on our minds right now as we survey the ways in which governments and the tech industry have responded to the COVID-19 pandemic. Most notably, we should ask whether the current situation — one in which the public are prepared to forgo deep scrutiny in the hope of a solution — presents a real opportunity for tech companies to habituate, at scale, surveillance technologies that have previously been met with widespread repugnance.
Syndromic surveillance
Over the last few days and weeks, the media have reported offers from tech companies looking to help governments stymie the spread of the coronavirus. Suggestions vary in content, but many, if not most, could reasonably be classified as efforts to track and/or monitor the population in order to understand how the virus moves — a practice known as “syndromic surveillance.”
On Monday, Facebook’s Data for Good team announced new tools for tracking how well we’re all social distancing by using our location data. Facebook were following hot on the heels of Google, who promised to do something very similar just last week. According to reports, the readouts from Google’s data stash will reveal phenomenal levels of detail, including “changes in visits to people’s homes, as determined by signals such as where users spend their time during the day and at night.”
This granular data is intended to inform government policy decisions and, ultimately, to influence public behavior in order to curtail the spread of the virus. The end purpose is, of course, an extremely noble one: saving human lives, a cause that would seem to legitimize almost any method. Nevertheless, we should not let our sheer desperation to stop this abominable disease blind us to some of the burgeoning concerns surrounding tech’s enthusiastic rollout of unprecedented intrusion.
Control concerns
It’s almost reflexive now to look to China when discussing the excessive deployment of technological surveillance tools. Not unexpectedly, the Chinese government has turned the COVID-19 outbreak into an opportunity to flex their surveillance tech muscles, while baking ever more controls into the daily lives of citizens.
Authorities have been monitoring smartphones, using facial recognition technology to detect elevated temperatures in a crowd or identify those not wearing face masks, and obliging the public to continually check and self-report their medical condition for tracking purposes. The Guardian further reported:
Getting into one’s apartment compound or workplace requires scanning a QR code, writing down one’s name and ID number, temperature and recent travel history. Telecom operators track people’s movements while social media platforms like WeChat and Weibo have hotlines for people to report others who may be sick. Some cities are offering people rewards for informing on sick neighbors.
But this is what we’ve come to expect from China. Perhaps more surprising is that similar pervasive tracking techniques have been adopted in so many other COVID-19 hotspots around the globe. This silent yet penetrative policing is still unfamiliar to the public in most areas stricken by the coronavirus.
The New York Times reported that in Lombardy, Italy, local authorities are using mobile phone location data to determine whether citizens are obeying the lockdown, and in Israel, Prime Minister Benjamin Netanyahu has authorized surveillance technology normally reserved for terrorists to be used on the broader population.
In countries like the UK and the US, the announcement of each new tracking technology has been accompanied by an avalanche of privacy assurances. Yet, we’ve already seen a number of worrying instances where the vigilant monitoring of the pandemic has tipped over into boundary-crossing privacy lapses — like this tweet from New York’s Mayor Bill de Blasio.
And in Mexico, when public health officials notified Uber about a passenger infected with the virus, the company suspended the accounts of two drivers who had given him rides, then tracked down and suspended the accounts of a further 200 passengers who had also ridden with those drivers (NY Times).
The pandemic has unleashed a fresh government enthusiasm for using tech to monitor, identify, and neutralize threats. And although this behavior might seem like a natural response to a crisis, authorities should be alive to the dehumanizing aspects of surveillance, as well as the point at which they start to view the rest of us as mere scientific subjects, rather than active participants in societal efforts.
A false choice?
Of course, there are those who would willingly relinquish personal privacy in order to save lives. They believe that an end to this suffering justifies any action taken by governments and tech companies, even if it involves a rummage in our personal data cupboards. But what isn’t clear is the extent to which we can trust this as a straight transaction. After all, these are largely unproven technologies.
In the New York Times, Natasha Singer and Choe Sang-Hun write:
The fast pace of the pandemic…is prompting governments to put in place a patchwork of digital surveillance measures in the name of their own interests, with little international coordination on how appropriate or effective they are.
And writing for NBC News’ THINK, Albert Fox Cahn and John Veiszlemlein similarly point out that the effectiveness of tech in tracking pandemic outbreaks is “decidedly unclear”. They recount previous efforts, like Google Flu Trends, that were abandoned as failures.
In short, we could be giving up our most personal data for the sake of a largely ineffective mapping experiment.
Yuval Noah Harari argues that the choice between health and privacy is, in fact, a false one. He emphasizes the critical role of trust in achieving compliance and co-operation, and says that public faith is not built through the deployment of authoritarian surveillance technologies, but by encouraging the populace to use personal tech to evaluate their own health in a way that informs responsible personal choices.
Harari writes:
When people are told the scientific facts, and when people trust public authorities to tell them these facts, citizens can do the right thing even without a Big Brother watching over their shoulders. A self-motivated and well-informed population is usually far more powerful and effective than a policed, ignorant population.
He ends with a caution that we could be signing away personal freedoms, thinking it is the only choice.
The new (ab)normal
So, to return to our original question: has this dreadful pandemic provided legitimacy to an aggressive, pervasive surveillance that will carry on into the future? Are we witnessing the beginning of a new normal?
Nearly two decades after the 9/11 attacks, law enforcement agencies still have access to the high-powered surveillance systems instituted in response to imminent terror threats. Indeed, as Harari asserts, it is the nature of emergencies that the short-term measures they give rise to become fixtures of life, on the premise that the next disaster is always lurking. He adds that “immature and even dangerous technologies are pressed into service, because the risks of doing nothing are bigger.”
When we eventually emerge from this difficult time, there is every chance that our collective tolerance for deep surveillance will be higher, and that the barriers that previously prevented intrusive technologies from taking hold will be lower. If we doubt this, we need only note that some tech companies are already openly talking about the pandemic as an expansion opportunity.
Perhaps, if our skins grow thicker and privacy becomes a sort of quaint, 20th-century concern, we could worry less and enjoy greater security and convenience in a post-pandemic era?
If this seems appealing, then it’s worth remembering that the benefits of constant and penetrating surveillance, like disease tracking or crime detection, are offset in a range of different and troubling ways.
By allowing a permanent tech surveillance land grab, we simultaneously permit and embed a loss of anonymity, as well as a new onslaught of commercial and governmental profiling, cognitive exploitation, behavioral manipulation, and data-driven discrimination. To let this mission creep go unchallenged would be to assent to a new status quo in which we willingly play complacent lab rats for our information masters.
So, as we urgently will an end to this global devastation, let’s be attentive when it comes to the aftermath and clean-up, lest we immediately exchange one temporary nightmare scenario for another, more lasting one.
This article was originally published on The Startup by Fiona J McEvoy. She’s a tech ethics researcher and the founder of YouTheData.com.