In this time of social distancing, working from home and school closures, people and businesses are relying on the internet more than ever to engage with friends, family, clients, consumers and the public at large. Social media and content-sharing websites are providing individual communities and the entire nation with 24/7 accessibility to forums for public discourse, communication and the dissemination of news.
Against this backdrop, the Department of Justice (DOJ) has released a report addressing the publication of illicit material on online platforms, which are increasingly accessible to children and criminals alike. In an attempt to ensure that the internet remains an “open and safe” space for society, the DOJ has put forth recommendations to lawmakers that would limit the broad immunity provided to online platforms under Section 230 of the Communications Decency Act of 1996.
Section 230 protects online platforms from civil liability both (a) for content posted on their sites by individual users and (b) for the sites’ removal of certain content. In a press release announcing the recommended reforms, the DOJ stated that technological advances and judicial statutory interpretation since the statute’s enactment have “left online platforms unaccountable for a variety of harms flowing from content on their platforms and with virtually unfettered discretion to censor third-party content with little transparency or accountability.”
The report identifies four general categories of reform to Section 230:
- Incentivizing Online Platforms to Address Illicit Content While Preserving Immunity for Defamation. The DOJ recommends that a platform cannot invoke Section 230 immunity if it “purposefully facilitates or solicits” illegal or illicit material published on its site by a third-party user, such as terrorism, drug trafficking, child exploitation or cyberstalking. Additionally, immunity would not apply to a platform that (a) had “actual knowledge or notice” that the content violated federal criminal law or (b) was provided with a court judgment otherwise deeming the content unlawful.
- Federal Government Capabilities to Address Unlawful Content. The DOJ also recommends that Section 230 immunity does not apply to civil enforcement actions brought by the federal government to protect citizens from harmful and illicit conduct.
- Promotion of Competition. The DOJ proposes clarification that Section 230 does not apply to federal antitrust claims in order to prevent large dominant platforms from invoking immunity where liability is based on harm to competition, not on third-party content.
- Promoting Free and Open Discourse and Encouraging Greater Transparency Between Platforms and Users. The DOJ suggests clarifying specific text in the statute. In particular, it proposes limiting immunity for platforms that remove content based on the statute’s “otherwise objectionable” language. Instead of lending such broad discretion to platforms, the Department proposes replacing “otherwise objectionable” with “unlawful” and “promotes terrorism.” It argues that this change would limit a platform’s ability to remove content arbitrarily and primarily reserve immunity for platforms focused on reducing content harmful to children. Additionally, the Department proposes adding a statutory definition of “good faith,” which would provide immunity to platforms that remove content “in accordance with plain and particular terms of service and accompanied by a reasonable explanation, unless such notice would impede law enforcement or risk imminent harm to others.” This change is meant to encourage platforms to be more transparent and accountable to their users regarding posting policies and restrictions on access to certain content.
In the proposal’s announcement, Attorney General William P. Barr said, “[T]hese reforms will ensure that Section 230 immunity incentivizes online platforms to be responsible actors … to make certain they are appropriately addressing illegal and exploitive content while continuing to preserve a vibrant, open, and competitive internet.”
These recommendations come just a month after President Trump signed an executive order to limit the legal protections provided to social media companies under Section 230. CBS reported that a DOJ official said that the Department’s review of Section 230 which led to the proposal took place over the last 10 months and that the proposed recommendations include those mentioned in the president’s executive order.
However, critics of the proposal, such as Aaron Mackey, a staff attorney at the Electronic Frontier Foundation, are calling it dangerous and a weapon the government could use to retaliate against online services it dislikes. He says the proposal will “allow public officials or private individuals to bury platforms in litigation simply because they do not like how those platforms offer their services.” Additionally, he says that revoking a platform’s discretion to remove harmful material (including spam, malware and other offensive content), even if it isn’t illegal, could make users’ experiences worse and less safe.
The Internet Association, a trade association which represents leading global internet companies on matters of public policy, also rebuked the proposal. “The threat of litigation for every content moderation decision would hamper IA member companies’ ability to set and enforce community guidelines and quickly respond to new challenges in order to make their services safe, enjoyable places for Americans,” said Jon Berroya, the association’s interim president.
Ultimately, the fate of the Department’s proposal rests entirely in the hands of Congress, where some members, both conservative and liberal, have criticized Section 230’s current applicability and usefulness. Whether the proposed reforms to limit protections for online platforms will be implemented remains to be seen.