A Firm Debate: Section 230

The SAFE TECH Act seeks to amend Section 230: Jayaram Law Debates the Issue

The opinions presented in this article do not necessarily reflect those of the law firm, the moderator, or the individuals taking each position.

Section 230 of the Communications Decency Act of 1996 (https://www.law.cornell.edu/uscode/text/47/230) provides sweeping immunity to platform providers from liability for third-party content hosted on their sites. When the Communications Decency Act (the “CDA”) passed, internet platforms were aghast until free-speech advocates secured Section 230, which broadly limits platforms’ liability and thereby allows them to host myriad opinions without fear of liability. You might have noticed advertisements from Facebook and other platforms presenting their views on the latest bill seeking to curtail Section 230. What follows is a brief debate of the SAFE TECH Act (https://www.congress.gov/bill/117th-congress/senate-bill/299), introduced in the Senate by Senators Warner (D-VA), Hirono (D-HI), and Klobuchar (D-MN).

Aaron Ogunro: In favor of the SAFE TECH Act, which seeks to repeal some of the immunity enjoyed by platform providers and allow plaintiffs to pierce Section 230.

Danielle F.: Opposed to the SAFE TECH Act, arguing that curtailing Section 230’s immunity would force platforms to censor lawful speech.

Wendy Heilbut: Moderator

Wendy Heilbut: Could you each give us a little background on your position and what the Act seeks to change?

Aaron Ogunro: Online and social media platforms are great resources that incentivize free speech. However, these platforms are also in the best position to police the illicit content they host, and the SAFE TECH Act is a step in the right direction toward holding them accountable.

Danielle F.: The internet we know and rely on today would not exist without Section 230 of the CDA. The statute effectively protects online intermediaries, particularly social media platforms, from being held responsible for what individual users say and do. Regardless of lawmakers’ intent, the SAFE TECH Act would strip this essential immunity from a broad swath of content and force providers to censor lawful speech.

WH: Aaron, what does the SAFE TECH Act aim to do? How will it affect the platforms?

AO: The SAFE TECH Act would limit Section 230 immunity in three main areas:

First, Section 230 immunity would not apply to ads or other paid content. In other words, platforms would be on the hook if the paid content they host is defamatory, false, or misleading.

Second, claims under certain laws would be carved out of Section 230 immunity, such as civil rights laws, cyberstalking and harassment laws, antitrust laws, and international human rights law.

Lastly, the SAFE TECH Act would make sure that Section 230 does not bar injunctive relief when misuse of the platform is likely to cause irreparable harm.

WH: Interesting, and it is helpful to understand the limitations and parameters. But Aaron, how does the SAFE TECH Act make platforms the arbiters of what is “safe” and what is “misleading”?

AO: For clarity’s sake, the SAFE TECH Act as currently drafted does not mention defamatory, false, or misleading content, but that is a logical next step if platforms are responsible for the ads and content they host. What we do know is that if platforms are forced to determine what is false or misleading, First Amendment issues may arise. There is some respite for platforms when it comes to injunctions, however: a platform will not be subject to liability if it complies with an order granting injunctive relief.

WH: Good to know, Aaron, but are we comfortable putting more pressure on platforms to make the call on who can and cannot “speak” on their platforms? By subjecting the platforms themselves to more liability, aren’t we saying they should keep certain speech off their platforms to protect themselves from liability?

DF: One hundred percent!

WH: Okay Danielle, but if we allow platforms to continue to receive full immunity under Section 230, who will make sure proper safeguards are in place on the platforms?

DF: Platforms already have algorithms and content-removal practices in place to safeguard against known illegal and illicit activity. One alternative to repealing or reforming Section 230 would be a set of standards, practices, or guidelines agreed upon by the major players in the space to hold each other accountable for certain activity. What we don’t want to happen, and what is a real possibility with the SAFE TECH Act, is for the government to step in and “chill” certain speech, meaning platforms will overcompensate to make sure they are in compliance, or, arguably worse, to compel platforms to keep content or users off their platforms entirely, effectively reinstating some internet version of the Fairness Doctrine.

AO: Wendy, it is not really a matter of “who” will set up the safeguards on the platforms; it is more a matter of “what.” And even as to the “what,” the proposed law says that you are free to host whatever content you want, but you will now be held responsible if that content crosses a certain threshold. I do think your point is valid, though, regarding platforms limiting the type of speech they host. You may even see platforms limiting third-party content entirely, but these are speculative predictions that depend on the enforcement mechanism, which remains unclear.

DF: Aaron is right that the proposed legislation says you are free to host whatever content you want but will now be held responsible if that content crosses a certain threshold. The problem is that any regulation requiring companies to police content, by its terms, requires the platform to review that content. Such a regulation is not content-neutral on its face and therefore must pass strict scrutiny*; otherwise it violates the First Amendment. Is the government really in a better position than the owners of the platforms themselves to decide what content we as users put into the ether?

* For non-lawyers: strict scrutiny is the most demanding standard courts use to review a regulation that limits personal freedoms; to survive it, the regulation must serve a compelling government interest and be narrowly tailored to that interest.

AO: Without a doubt, the First Amendment implications will be substantial. Still, private platforms can generally judge what content to allow on their platforms because, as private entities rather than public forums, they are not subject to the First Amendment. Recently, however, courts have suggested that platforms such as Twitter can function as public forums, and their ability to limit content may therefore raise First Amendment issues down the road.

DF: Absolutely! These companies are not subject to the First Amendment! The First Amendment is implicated when a government body, through a regulation like the SAFE TECH Act, tries to impose restrictions on speech within private entities. Forcing platforms to judge what content will be allowed, in other words requiring private entities to regulate certain types of speech that the government has deemed “misleading,” “unsubstantiated,” “false,” or “inappropriate,” is government censorship in its truest form.

WH: Thank you both.