
This article was published on June 16, 2020

What does President Trump’s ‘crackdown’ on Twitter do?

Not much. But he's not the only one advocating change


Image credit: CleanPNG (edited)

President Donald Trump has long accused social media companies, particularly Twitter, of silencing conservative voices, but he was particularly incensed when the platform, for the first time, flagged one of his tweets late last month. To a post that promoted untrue claims about mail-in ballot fraud, Twitter added a small exclamation mark and warning, “Get the facts about mail-in ballots,” that linked to a page fact-checking the president’s statements. The president responded, two days later, with an executive order targeting Section 230, the piece of federal code social media companies more or less live by when it comes to moderating content on their sites.

The law allows Twitter to regulate, flag, and remove content posted by anyone, even the president.

Many legal experts criticized the president’s move as a violation of the First Amendment and as unenforceable, because an executive order cannot alter a law enacted by Congress. It was also pretty toothless: the order could only suggest that the Federal Trade Commission and the Federal Communications Commission “should” go after tech companies for bias, since the president has no authority to order those independent bodies to do so.

Yet while the legality of, and the motivation behind, the executive order are problematic, lawmakers and academics across the political spectrum say the statute has issues that need to be addressed.

What is Section 230?

In 1996 two congressmen—Ron Wyden, a Democrat from Oregon, and Chris Cox, a Republican from California—wrote Section 230 of the Communications Decency Act. Congress dreamed up the CDA as an early, some say “panicked,” attempt to regulate pornography on the internet and, particularly, to keep children from stumbling across pornographic images online. But Wyden and Cox wanted to make sure any regulation wouldn’t stifle the internet’s growth and innovation.

In a recent discussion hosted by the Aspen Institute, Wyden, now a U.S. senator, described Section 230 as a “sword and a shield.” The law shields websites from liability for the content users post, so that if someone writes a defamatory Yelp review or posts something untrue on Twitter, it’s the user who could be sued, not the platform. And it gives the platforms a “sword” by allowing them broad leeway to take down anything they deem offensive.

Section 230 has been heralded as “the ‘Magna Carta’ of the internet” and the “most important law protecting internet speech.” Proponents of the law say it has allowed the internet to flourish, making sites that rely on user-generated content, like Facebook, Yelp, Reddit, and YouTube, possible. If these sites were liable for users’ posts, platforms would be overly cautious, refusing to host any content that could get them sued. Instead, the internet has allowed many communities to thrive and has amplified the voices of LGBTQ+ advocates, BIPOC movements, women, and those who otherwise wouldn’t have the resources to project their message to millions of others.

“Without 230, it’s a certainty that not a single #metoo post would have been allowed on moderated sites,” said Wyden during the Aspen Institute call. And he argues Section 230 helps diversify the internet by helping smaller platforms that can’t afford to build elaborate filtering systems to moderate content.

But Section 230 has also been the center of intense scrutiny from both liberals and conservatives, who complain that tech companies use the law to, on one hand, suppress users’ free speech and, on the other, absolve themselves of any responsibility for the content on their sites. These critics have identified two main problems with Section 230.

Problem one: Tech companies set their own rules for acceptable speech 

While Section 230 shields platforms from liability for their users’ posts, social media sites are not bastions of unfettered free speech. Companies like Twitter and Facebook actively meddle in content: elevating certain posts, burying others, and flagging or removing content or users altogether.

Daphne Keller, director of the Stanford Cyber Policy Center’s Program on Platform Regulation, writes that social media platforms have an “unprecedented technological capacity to regulate individual expression. Facebook and other large internet companies can monitor every word users share and instantly delete anything they don’t like. No communications medium in human history has ever worked this way.”

President Trump’s tiff with Twitter centers on his belief that the platform discriminates against conservative voices, particularly his own. Critics of the president, however, complain Twitter has wrongly held back on flagging or even removing Trump’s more untruthful and incendiary posts.

Twitter responded to the executive order with a tweet calling it “a reactionary and politicized approach to a landmark law.” The response also went on to laud Section 230, saying it “protects American innovation and freedom of expression,” and that “attempts to unilaterally erode it threaten the future of online speech and Internet freedoms.”

Facebook has taken a different approach. CEO Mark Zuckerberg announced that Facebook and Instagram would not label the president’s inflammatory posts about protests in Minneapolis, spurring employees to stage a virtual walkout. In a note that he later made public on Facebook, Zuckerberg wrote, “[W]e will continue to stand for giving everyone a voice and erring on the side of free expression in these difficult decisions.” He also promised that the company will review its policies on voter suppression and on content that threatens state violence.

In an email to The Markup, Facebook spokesperson Andy Stone said the company believes altering or repealing Section 230 would be a mistake. “By exposing companies to potential liability for everything that billions of people around the world say, this would penalize companies that choose to allow controversial speech and encourage platforms to censor anything that might offend anyone,” he wrote.

Prager University, a conservative nonprofit, and a group of LGBTQ+ content creators have separately sued Google/YouTube for bias after the platform labeled their videos “restricted,” a designation that makes the content harder to find and is meant to help parents keep their kids away from extreme content.

Increasingly, scholars worry that meddling has less to do with making platforms safer or freer and more to do with elevating the companies’ commercial interests. In 1996, when the law was passed, the internet was largely chat rooms run by small startups. Now, tech companies like Google and Facebook have become information gatekeepers that have huge control over what information users see and how it’s organized.

Stone said Facebook actually changed its feed in 2018 to promote content from friends and family over posts from businesses and news sites.

“I don’t think we know enough about the details of commercially driven ranking and promotion or demotion of content,” said Marietje Schaake, policy director for Stanford’s Cyber Policy Center, during a panel discussion about Trump’s executive order. “A lot of the decisions companies make every day are not very clear or accountable.”

Problem two: Guns, stalkers, and child predators

Section 230 is meant to protect discourse, but online it can be hard to distinguish where speech ends and action begins. Tech companies have used Section 230’s immunities to shield themselves from liability for discriminatory advertising, illegal activity, and harassment taking place on their platforms.

When housing advocates sued Facebook for violating the Fair Housing Act by allegedly allowing housing, credit, and employment advertisers to target or exclude certain racial groups, Facebook used Section 230 as a defense. The case settled out of court, and Facebook agreed to change its system.

Backpage, a classifieds website that frequently hosted advertisements featuring child prostitution, used Section 230 to protect itself from liability for years before federal law enforcement shuttered the site in 2018. Facebook and its subsidiary Instagram tried a similar defense when a Texas lawyer sued the companies for allegedly allowing pimps to operate on their platforms and lure children into prostitution. (That case is ongoing.)

Armslist.com, which describes itself as a “firearms marketplace,” allows people to sell guns without a license and to sell to buyers who might not pass mandatory background checks. In one case, Radcliffe Haughton, who was the subject of a restraining order forbidding him to own a firearm, purchased a gun on Armslist and used it to kill his estranged wife and two of her coworkers, according to court papers filed in the subsequent lawsuit against Armslist. The Wisconsin Supreme Court ruled that, because of Section 230’s protections, Armslist was not liable.

“Invoking Section 230 to immunize from liability enterprises that have nothing to do with moderating online speech, such as marketplaces that connect sellers of deadly weapons with prohibited buyers for a cut of the profits, is unjustifiable,” wrote legal scholars Danielle Citron and Mary Anne Franks for a forthcoming article.

Similar issues come up with sites like Grindr, where features like geolocation can facilitate harassment offline. According to court papers filed in a lawsuit against Grindr, a user named Oscar Juan Carlos Gutierrez allegedly created a fake account to impersonate his ex-boyfriend, Matthew Herrick. Gutierrez allegedly sent more than a thousand men to Herrick’s apartment and office, all of whom thought Herrick had invited them for sex. Herrick alleged that he repeatedly petitioned Grindr to remove the account but the company did nothing. But when Herrick sued, the case was thrown out based on Section 230 protections.

There’s also worry among legal scholars that women and people of color are the ones who suffer most when online harassment and stalking aren’t properly regulated. Franks and Citron have argued that when tech companies fail to control behavior like cyber mobs and nonconsensual pornography on their platforms, it interferes with equal access to online spaces. “There are no civil rights without cyber civil rights,” they wrote in a Harvard Law Review blog post.

Is there a solution?

Proponents of changing or replacing Section 230 have struggled with the same problem regulators had back in 1996: how to dampen the worst of social media while protecting the rest.

In 2018 Congress passed two laws, the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) and the Stop Enabling Sex Traffickers Act (SESTA), which amend Section 230 and make websites that enable sex trafficking liable in both civil and criminal cases. After the laws passed, many platforms did start to moderate certain content more actively. Reddit banned multiple forums, including r/escorts and r/SugarDaddies, and Craigslist shut down its personals listings, but so far there have been no high-profile prosecutions. (Craigslist founder Craig Newmark is a funder of The Markup.)

And many sex workers say the law has done the opposite of what it was intended to do. Now, sex workers can no longer use the internet to safely find and vet clients, or to share information among themselves about clients who could be dangerous, according to a study conducted by the advocacy group Hacking//Hustling.

In March, a bipartisan pair of senators, Lindsey Graham and Richard Blumenthal, introduced the EARN IT Act, aimed at preventing online child exploitation. The bill would compel companies to comply with certain “best practices” or risk losing their Section 230 protections. But critics say the bill violates users’ privacy and free speech. The Electronic Frontier Foundation, a nonprofit that advocates for privacy online, characterizes these “best practices” as First Amendment violations.

Other approaches have focused on content moderation, rather than on preventing specific crimes. In 2019 Republican senator Josh Hawley proposed an amendment to the Communications Decency Act that would require content moderation decisions to be politically neutral if companies wanted Section 230 protections. But critics say the proposal is overly vague. “It assumes there is such a thing as ‘political neutrality’ and that the FTC can define and enforce what that is,” tweeted Stanford’s Keller.

Democratic congressman Adam Schiff and senator Mark Warner have both suggested Section 230 immunities should be limited, particularly when it comes to hosting deep fakes or misinformation, but neither lawmaker has proposed legislation yet.

Changing or repealing Section 230 is tricky because it protects some of the best and worst parts of the internet. As legal scholar Eric Goldman wrote, when we focus on the nastiest tendencies of the internet, we’re more likely to want to change Section 230. But if we focus on the internet’s good contributions to society, we’re less apt to change the law.

This article was originally published on The Markup by Sara Harrison, and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.

