The Misinformation of Capitol Hill: Section 230 and the Weaponization of Social Media

Social media has experienced unprecedented growth in popularity and usage since its inception. This is owed in large part to Section 230 of the Communications Decency Act. Unlike their print counterparts, internet publishers enjoy an increased level of freedom and immunity under Section 230 for the content they publish. It is Section 230 that gives social media companies, large and small, the ability to manage or host third-party content without fear of lawsuit. Section 230 ensures that these companies, as intermediaries, will not be liable for filtering decisions, allowing them to establish their own standards and delete or modify content they consider obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable—regardless of constitutional protections. But Section 230 also protects them if they decide not to filter such content. Recent events, however, suggest that Section 230’s shield, until now all-encompassing, may soon become less so.

Section 230 has long allowed social media platforms to publish and host content—no matter whether controversial, unpopular or even outright false—with little concern for liability. This breadth of immunity has earned Section 230 the ire of legislators, and amendments curtailing it have been proposed but never adopted. However, the increasing presence and impact of social media-based misinformation campaigns—as well as recent events seemingly exacerbated or even enabled by such campaigns—may have finally created the legislative momentum previous efforts have lacked.

While misinformation campaigns have long been used to influence public opinion, the internet, and social media in particular, have amplified their ability to quickly galvanize large groups and unite them behind a singular cause. The misinformation that fueled the Capitol riots—perhaps the most notorious and flagitious use of social media to date—found fertile ground on powerful social media platforms. For weeks, misinformation spread unfiltered and unconstrained across various platforms, reaching a fever pitch on January 6.

In the wake of the attacks, calls for accountability quickly became calls for reform. The intermediaries whose platforms served as incubators for misinformation came under fire: many believed they permitted dangerous misinformation to spread and incite the riots even though they were uniquely positioned to stifle or control its spread. By not acting, they maintained Section 230 immunity and avoided liability, but their inaction exposed the shortcomings of Section 230’s broad protections.

Democrats have started the push with the introduction of the Safeguarding Against Fraud, Exploitation, Threats, Extremism and Consumer Harms (SAFE TECH) Act. The SAFE TECH Act would carve out exceptions to Section 230’s immunity and allow internet intermediaries to be held accountable for harmful or criminal behavior facilitated by their platforms, particularly cyberstalking, targeted harassment and discrimination. And in cases where an intermediary or platform has accepted payment for hosting or publishing content, Section 230 would be inapplicable altogether, fully exposing the intermediary to liability for the content’s harmful consequences.

Republicans have taken aim at Section 230 as well, after several social media platforms actively moderated their content and banned a number of conservative firebrands considered complicit in the buildup and aftermath of the riots. Just as Section 230 shields intermediaries and publishers from liability for their inaction, it also frees them from any obligation to remain neutral, politically or otherwise, in the views they host.

With 2021 ushering in a new administration and Section 230 firmly within both parties’ crosshairs, change seems inevitable. Though rooted in opposing points of view, the push to amend Section 230 has become a point of bipartisan agreement. The rise of misinformation campaigns suggests there may be good reason to revamp and modernize the statute: when Congress enacted Section 230 in 1996, the internet was in its infancy and moderation needs were quite different. In the face of significant change, compliance and risk management programs will have to be revised and updated to account for any increased exposure to liability. As the legal landscape develops, internet-based companies will face an ongoing obligation to revise their publication and content moderation standards and either invest further in their monitoring efforts or cease hosting user content altogether. In any event, companies should stay diligent and prepare for significant change in the near future.

