
This article was published on June 28, 2019

Is Facebook really the corporate monster everyone wants it to be?

As the world's largest social network, it can be difficult to separate Facebook's issues from the community.


Image by: Oscar Delgado / Dribbble

The ethics of Facebook, as a corporation, have come into question lately. Though criticism of the platform stretches back to its inception, the incident that inspired the most recent wave of ire was the Cambridge Analytica data scandal of early 2018, in which it was revealed that a political consulting firm had harvested data on millions of people without their knowledge or consent.

Since then, the #deletefacebook hashtag has trended in multiple waves and millions of people have left the platform, especially those between the ages of 12 and 34. Criticisms have ranged from serious lines of inquiry about the ethics of the company overall to mindless regurgitation of memes claiming Mark Zuckerberg is secretly a robot.

In any case, it seems the general population is happy to accept the idea that Facebook is an unethical corporate monster that needs to be stopped.

This is, at best, an oversimplification, and at worst, the latest example of recreational outrage seeking targets for the sake of seeking targets.

The lack of historical precedent: the dilemma of ambiguity

To start, there are some highly valid criticisms of the way Facebook has conducted its operations. But these need to be grounded in the right context: criticism can be highly productive when it's focused on fixing an existing problem, yet much of the criticism leveled at Facebook is focused on vilifying the company outright. Those accusations are hard to substantiate, especially because Facebook is the first company of its kind, and the rules of conduct therefore have to remain flexible.

If Facebook were the first of a long line of social media companies that set a precedent for how to operate ethically, and Facebook deviated from those norms, there would be grounds for a complaint. As an obvious example, we can tell that a company like Enron operated unethically not only because it clearly broke the law, but because it deviated from revenue reporting standards that have been established by publicly traded companies for decades.

Many lines of criticism against Facebook originate from the ambiguity of expectations regarding how it should be operating. For example, is Facebook a social media platform or a data company? If it’s a data company, to what standards of data privacy and security should it be held? This is a relatively new type of company, so neither laws nor norms have been established to dictate how it should act; by definition, it’s operating in an ethical gray area because “black” and “white” haven’t clearly been established.

The same ambiguity is present when you think of Facebook as operating in the middle ground between publisher and platform; should it be held liable for the content shared using its app? If so, we’d have to consider it a publisher and hold it accountable for all content shared on the site. If it’s just a platform, then it shouldn’t be exercising selective censorship of people and materials. Things get complicated fast.

Facebook is stuck in the complex position of trying to please everyone, no matter how they might categorize or interpret the company. With no clear, firm standards in place, it’s impossible to fully vilify or glorify the company. We’re still figuring things out.

The ethics of unforeseen consequences

To what degree should a company be held liable for the unforeseen consequences of its actions? This is a major point of contention among legal scholars, whether the actor is an individual or a corporation, and it's worth considering in Facebook's case.

For example, take the Cambridge Analytica scandal. Facebook didn't sell user data to Cambridge Analytica (CA), and CA didn't hack Facebook by exploiting lax security standards. Instead, CA made use of a third-party app called This Is Your Digital Life, which required users to grant permission to use their data for academic purposes. This was problematic for two main reasons: first, the app misled users about how their data would be used, and second, because of Facebook's design, CA also gained access to information about those users' extended networks.

This, hypothetically, could have been prevented had Facebook been more vigilant about policing the types of apps available on its platform. But that open nature is part of what has made the platform so enjoyable to use. Facebook didn't operate nefariously in this incident; instead, undesirable consequences came as an indirect result of Facebook's infrastructure, and with no previous case studies to set a precedent for this kind of manipulation, it's hard to hold the company directly responsible for the outcome.

Ignoring the amends

Partially in response to all the accusations and scandals arising from its practices, Mark Zuckerberg has publicly stated his desire to do better. The company has doubled down on the importance of privacy in its users' communications. It has updated and revised its privacy standards. It has started to take a more active role in censoring the content on its platform. It has even unveiled a total redesign to show users the steps the platform is taking to become more trustworthy.

Whether you see this as a response rooted in sincere regret or as shameless, desperate pandering, the end result is the same: Facebook is trying to do better, and at the end of the day, isn't that the best-case scenario? After all, there's no undoing what's already been done. Instead of judging people (and companies) by what they've done before, we need to pay attention to how they respond to the mistakes of the past. Here, Facebook seems to be doing well.

The optional nature of Facebook

Credit: Anthony Quintano / Flickr

Facebook has also been described as too big or too powerful, or as a monopoly that needs to be broken up for the good of consumers. More accurately, Facebook could be called a monopsony, but even so, it doesn't carry the ethical weight of a traditional powerful corporation, because users have a choice in how they use Facebook (and whether they use Facebook) at every level. Facebook isn't a fundamental human need, like access to clean water or (to an extent) electricity. Even if it were, there are dozens of social media platforms that provide something very similar.

And if you do use Facebook, you have control over how your data is used, how it's seen, and which third-party apps have access to that information. In exchange for using the platform for free, you're providing Facebook with some data that it can then use as it sees fit. This is transparently outlined in Facebook's data privacy policy, which any user can read at any time.

Facebook is obviously operating in contentious and ambiguous ethical territory, and some of the decisions it has made in the past decade fall somewhere on the spectrum between shortsighted and dumb. But to describe Facebook as a giant, evil corporate monster is overly simplistic, and it neglects the nuances of law and ethics as they apply to tech companies. This is complex territory, and resorting to oversimplifications isn't going to make things any clearer.
