In the wake of the Cambridge Analytica scandal, all eyes are on Facebook and its data privacy practices. From policy-makers to financial markets, the reaction has been surprise and disapproval. Yet collecting and monetizing user data has long been the modus operandi of most of today’s technology companies.
Email providers, cellular networks, workplace collaboration apps, social media, credit monitoring services: nearly all are powered by the ‘data contract’ we as users entered into with a click, before we could appreciate the worth of the information we were giving away.
Today’s heightened focus on data privacy is a perfect opportunity to renegotiate these out-of-balance ‘contracts’ so that they reflect the unprecedented value companies derive from accessing our data and the risks they create when it is compromised. These risks are now becoming tangible. From election interference to the exposed location data of US military personnel and the personal records of millions of Equifax victims, technology users now understand that they have no recourse or control over how their data is used.
Engineered for speed and data access, not safety or privacy
Shifting the balance of the entire ecosystem warrants examining how technology is built today. It’s an open secret that most technology is engineered for speed and data access, not privacy and security. Mark Zuckerberg’s famous mantra, “move fast and break things,” is not unique to Facebook. Pushing a minimum viable product out to users as fast as possible to gain market share is the underlying ethos of the agile internet economy.
As a result, we have mind-boggling advancements in functionality and AI, along with a seemingly inexhaustible trove of security vulnerabilities. The recent Spectre and Meltdown flaws, which put nearly every computer in the world at risk, are the latest examples of this trade-off between engineering for speed and engineering for security.
If we built bridges, cars, and skyscrapers with the same attention to safety we apply to software, then the daily commute in San Francisco would require dodging falling buildings and crashing cars, jumping from BART trains, and swimming across the bay. Speed prevails at the expense of safety because the consequences of insecure software are not immediately and physically tangible. Combining unprotected data systems with service providers’ voracious appetite to maximize profits on the back of this very same data makes the task of safeguarding information nearly impossible.
Some, like Tim Cook, argue that this lack of privacy rises to the level of a human rights issue. And yet even Apple is in hot pursuit of new ways to learn from user data. Today, nearly every company prioritizes mining user-generated data over protecting it.
Flexible vigilance required
After Facebook’s CEO testified in front of Congress, policy-makers will likely keep calling for more regulation, arguing that their constituents’ consent to swap privacy for free services was not informed. In response, Zuckerberg will keep outlining the steps the company is taking to limit Facebook’s sharing of our data. The hearings were right to focus on protecting users who are unable to negotiate a fair deal in our free-market digital economy. However, legislators also need to ensure that we stay competitive with economies like China’s, which thrive on Orwellian surveillance and a state-sponsored disregard for privacy.
To further complicate the policy debate, domestic calls for stronger data protection are taking place against the backdrop of proposals to mandate backdoors into the encrypted communications systems designed to protect that very data. These conflicting demands are destined to undermine each other. When regulators attempt to steer innovation, they risk distorting market forces and creating inefficiencies.
Will markets care about privacy?
Notably, Facebook’s preemptive steps to self-regulate resemble corporate reactions to major data protection incidents of the past twenty years. Take Google’s move to hire thousands of security professionals after the massive Aurora attack, or Bill Gates’s decision to halt software development at Microsoft to focus on security and engage independent researchers to test every Windows product.
While all of these are good first steps, none fundamentally addresses the issue at hand: people and organizations should have a choice to either use a free service and give up their privacy, or pay for assurance that their data is protected and not accessed by anyone, including the service providers themselves. Sufficiently significant penalties will ensure that companies offer these alternatives. When users have a choice, markets can decide whether individuals and enterprises are willing to pay for the promise of privacy and security. And where there is demand, entrepreneurs will supply.
Regulators are closely watching whether the industry as a whole is doing enough to self-correct. The GDPR, an imminent overhaul of EU data protection law, has already raised the bar for processing personal information, mainly due to its material penalties for non-compliance: up to four percent of annual global revenue. For a company like Google, the fine could exceed $4 billion. Following the Cambridge Analytica scandal, the bar will certainly be raised here in the US as companies move to avoid steep fines and lost market opportunities.
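To make the scale of those penalties concrete, the four-percent ceiling can be sketched with back-of-the-envelope arithmetic. The revenue figure below is an assumption drawn from Alphabet’s reported 2017 revenue of roughly $110.9 billion, not a number taken from this article, and the function name is illustrative only.

```python
# Back-of-the-envelope sketch of the GDPR's revenue-based fine ceiling:
# up to 4% of a company's annual global revenue for the most serious
# violations (the regulation also sets a EUR 20 million floor, omitted
# here for simplicity).

GDPR_MAX_FINE_RATE = 0.04

def max_gdpr_fine(annual_revenue_usd: float) -> float:
    """Return the revenue-based ceiling on a GDPR fine, in USD."""
    return annual_revenue_usd * GDPR_MAX_FINE_RATE

# Assumption: Alphabet's 2017 revenue, roughly $110.9 billion.
alphabet_2017_revenue = 110.9e9
print(f"${max_gdpr_fine(alphabet_2017_revenue) / 1e9:.1f} billion")  # → $4.4 billion
```

At that scale, the fine alone approaches the quarterly profit of most large technology companies, which is why the penalty regime, rather than the compliance rules themselves, is what has raised the bar.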
But unless we address the very nature of how most businesses operate, we will soon find ourselves in the same place, surprised by yet another incident. Since this is a feature of today’s technology rather than a bug, now is not the time to vilify Facebook alone. It’s time to demand, and to build, a different, cleaner technology to protect our information.