The 2020 election will be unprecedented in many respects. More people will be voting by mail, and there will likely be more democratic participation online than ever before. With in-person conventions and debates restricted, internet platforms and communication services will become even more influential forums. But even before the pandemic pushed these operations online, some lawmakers were already seeking to curb misinformation in online political discourse, specifically in the context of deepfakes. Deepfakes are manipulated photo, video, or audio clips generated by computers, often with the assistance of artificial intelligence algorithms. They can make someone appear to say or do something that never actually happened, or create realistic images of people who do not exist.
In 2019, California and Texas both passed legislation prohibiting the use of deepfakes in elections. Texas enacted the first law criminalizing deepfakes, Senate Bill 751, which took effect in September. The legislation bans publishing and distributing deepfake videos within 30 days of an election “with intent to injure a candidate or influence the result of an election.” California’s A.B. 730, signed into law by Governor Newsom on October 3, provides civil remedies and prohibits the creation and distribution of “materially deceptive” audio or visual media of a candidate within 60 days of an election.
Both laws target the dissemination of media intended to deceive the public ahead of an election. The California legislation defines the prohibited media as manipulated content that would falsely appear authentic to a reasonable person and that would cause that person to have a “fundamentally different understanding or impression of the expressive content” than the unaltered original. It also provides an exception for deepfakes posted along with a disclosure to the public, with the disclosure criteria varying by type of media. By contrast, the Texas law provides no disclosure exception and holds publishers liable for violations. This could in theory expose social media platforms to liability for deepfakes posted on their sites, although such liability would seem to be preempted by § 230 as it is currently worded.
While the laws in Texas and California have been lauded as important efforts to address the proliferation of fake and manipulated news in U.S. elections, some legal experts have questioned their constitutionality, arguing that efforts to ban deepfakes cross into protected First Amendment territory. The U.S. Supreme Court has previously protected knowingly false speech, striking down a law that criminalized false claims of military honors in United States v. Alvarez. Underscoring these free speech concerns, both the California News Publishers Association and the American Civil Liberties Union opposed California’s legislation.
Some have voiced a different concern about the laws’ enforceability, noting that many deepfakes are created overseas, where their makers are difficult to identify and prosecute. Others argue that the laws will be ineffective given the speed with which deepfake technology improves. Technology experts are divided over whether detection capabilities will ever catch up with creation tools.
Despite these concerns, both laws remain on the books heading into the 2020 election. And the Texas law has already come up in a high-profile case: Houston Mayor Sylvester Turner called for the district attorney to investigate opponent Tony Buzbee’s campaign over a television ad showing edited photos of Turner and alleged fake texts sent by the mayor.
Although both laws are specific to deepfakes in politics, they will provide important lessons for legislators considering how to deal with technology and public misinformation moving forward. Whether traditional deterrence methods can effectively regulate fast-moving technological developments is a question with implications extending far beyond the next election cycle.