Section 230: We Really Should Talk About It – OpEd

By Dean Baker

Virtually every progressive — and many people who are not at all progressive — are bothered by the ability of the rich to buy elections with their vast fortunes. Somehow, most of these people are not as bothered by the ability of the rich to control the media, which almost certainly allows them to have even more influence on political attitudes and elections. And even fewer seem bothered by the control of the rich over the massive social media platforms that are the main source of information for a growing share of the population.

No one expects serious thinking from our great thinkers (the people who fill the pages of outlets like the New York Review of Books, the Atlantic, and the New York Times), but the rest of us really do need to give these issues serious thought. In particular, we need to focus on the problem that someone like our crazed, chainsaw-wielding co-president Elon Musk can do whatever he likes with a massive social media platform.

To be clear, I was troubled by the extraordinary power held by owners of the huge social media platforms even before Elon Musk bought Twitter. Even if Twitter was a more open space before Musk swooped in, it was still problematic that such a massive platform was under the control of whatever rich person or people owned it. The same was true of Facebook. The problem obviously became much worse once Twitter was bought by someone committed to using it to advance his far-right agenda.

Some have argued that we need government control of these platforms. I have never been in that camp, for reasons that should be obvious today. We don’t want Donald Trump running Twitter and Facebook.

Instead, I would like to see these platforms downsized. Reforming Section 230 could provide a way to get there.

The issue at stake is the provision of Section 230 of the Communications Decency Act that protects social media platforms from liability for third-party content. This means that, unlike print or broadcast media, the huge platforms cannot be sued for defamatory material posted by individuals, groups, or corporations.

This is true even for paid advertising. That means Elon Musk can profit directly from selling ads that spread defamatory material while bearing no financial risk for the damage it causes to others. It’s worth noting that even when he is not paid directly, he benefits financially if the defamatory material increases his audience, since a larger audience makes advertising on his platform more valuable.

Other media do face serious consequences for spreading defamatory material. The Dominion lawsuit against Fox over spreading lies about the 2020 election was largely over third-party content. Fox argued that its paid employees were not the ones lying about Dominion, but rather the guests featured on its shows. Nonetheless, Fox had to cough up $787.5 million to settle the case.

Similarly, the famous New York Times v. Sullivan case, which established the higher “actual malice” standard for defamation of public figures, was based not on anything the New York Times wrote itself. Rather, the lawsuit stemmed from an ad taken out by a civil rights group.

If either of these issues arose with Facebook or X, there would not even be the beginnings of a case because of Section 230. People can endlessly push lies about Dominion or any other corporation connected with running elections on X, and Elon Musk faces zero potential liability. The same story applies to defamatory ads about politicians. There is no obvious logic to this asymmetry.

Many people will say that the victims of defamation can still sue whoever actually developed the content. There are two problems with this argument. First, the person who developed the content may not have much money.

Every lawyer knows that when they bring a suit, they want to go after the deep pockets. They sue the insurance company, not the drunk driver who is about to file for bankruptcy. If Elon Musk profited from the material, he should bear liability.

The other problem is that it is not always easy to identify the person behind a post on social media. One of the major right-wing posters on X goes under the name “Catturd.” (In a prior round on this issue, one critic proudly told me that he knew who Catturd was.) Since social media platforms allow people or organizations to hide their identity, it may not even be possible to identify the person who developed the content.

It also should be clear that it matters hugely that the defamatory material is amplified by a social media platform. One person standing on a street corner yelling about how a restaurant gave his whole family food poisoning is not likely to do much damage to the restaurant’s business. Millions of people reading the story about food poisoning on Facebook will. There is a reason the law holds media outlets responsible for spreading defamatory material.

Defamation Rules for the Internet

Rules on defamation had to be modified for broadcast media, since there are obvious ways in which it is different from print media. The same holds true with the Internet. It is not realistic to expect social media platforms to monitor everything that is posted for defamatory material as it is posted. However, they can respond to takedown notices from people claiming defamation.

There is already a model for this sort of takedown practice. The Digital Millennium Copyright Act (DMCA) requires Internet sites to promptly remove material that infringes a copyright in order to protect themselves from liability. The DMCA has been law for more than a quarter century.

While there are problems with the DMCA, most obviously the problem of over-removal where sites take down material that is not actually infringing in order to reduce risk, it has not shut down the Internet. There can be similar problems with allegations of defamation, but it is likely to be less of an issue.

Copyright law provides for statutory damages. This means that a website can be forced to pay thousands, or even tens of thousands, of dollars in damages when the actual damages from the infringement are just a few dollars or even a few cents. Defamation law is not remotely as favorable to plaintiffs, especially when the plaintiff is a public figure, which makes the standard of proof considerably higher.

Using Section 230 as an Equalizer

The reason we get huge social media platforms like Facebook, X, and TikTok is there are enormous network effects. People want to be on these huge platforms because everyone else is on them. If you hope to reach a large group of people with your postings, then you have a strong incentive to be on one of these platforms.

The network effect is an intrinsic feature of the technology. However, it creates the problem of who controls a huge platform. Who decides what material is banned, and, probably more importantly, who decides which material gets promoted to millions or hundreds of millions of users? This is why these platforms are so problematic for people who believe in democracy.

A restructuring of Section 230 provides one way in which we can look to offset the network effect. I have proposed that we repeal Section 230 protection against liability for defamation only for sites that carry advertising or sell personal information. That would mean all the huge platforms that dominate social media now would lose their protection. However, smaller sites that rely on either donations or subscriptions would still enjoy the protection Section 230 now provides.

To be clear, I know that many smaller sites do have some advertising. This change would mean that if they wanted to continue to have Section 230 protection, they would have to drop the advertising.

This could lead to some going out of business. That would be unfortunate, but as a practical matter, few policies that actually have an impact in the world are free of negative effects. If that were a basis for nixing policies, we would not be able to accomplish much.

Would this change lead to a mass exodus from Twitter and Facebook? I don’t have an answer for that. I have been told very confidently by people who know the Internet much better than me that this change would either mean nothing to the huge sites (they would just hire more lawyers) or that it would force them to adopt a subscription model where people had to pay to use their sites.

For my money, I would be happy to see the experiment. As I have outlined above, I can see no reason why social media sites should enjoy a greater protection against defamation lawsuits than print or broadcast media.

And people really can be harmed by lies spread on social media. If some racist posts on Facebook that a restaurant owned by Asian Americans gave his family food poisoning, or, even worse, buys an ad saying so, that restaurant may lose a large amount of business.

And any number of people have been absurdly labeled pedophiles by right-wingers who don’t like their politics. If a remotely believable claim were ever pushed, the person wrongly accused of being a pedophile could see their life ruined. And Elon Musk would face zero liability.

I don’t see any justice in that situation. For that reason, I can’t see the harm in taking away Section 230 protection for these giant platforms.

My hope is that this change will result in the platforms being seriously downsized so we don’t have to be so worried about the enormous power held by the people who control them. But if it doesn’t have that effect, I still think the change in defamation law will be in the right direction. We would then just need to find some other way to keep Elon Musk from running the country.

  • This article first appeared on Dean Baker’s Beat the Press blog.

Dean Baker

Dean Baker is the co-director of the Center for Economic and Policy Research (CEPR). He is the author of Plunder and Blunder: The Rise and Fall of the Bubble Economy.
