The 2020 election will be unprecedented in many respects. More people will be voting by mail, and there will likely be more democratic participation online than ever before. Internet platforms and communication services will become more influential forums as people are restricted from in-person conventions and debates. But even before the pandemic pushed these operations online, some lawmakers were already seeking to monitor misinformation in online political discourse, specifically in the context of deepfakes. Deepfakes are manipulated photo, video, or audio clips generated by computers, often with the assistance of artificial intelligence algorithms. They can make someone appear to say or do something that never actually happened, or create realistic images of people who do not exist.
In another case of the law trying to keep pace with evolving technology, legislators are introducing bills to punish those who create false images that purport to be real. Targeting the rise of computer-generated imagery that has become increasingly accessible to the public, California Assemblyman Marc Berman introduced a bill on February 14, 2019, that would criminalize making or distributing a “deepfake.”
Recent developments in deep learning have enabled almost anyone to superimpose facial features—including an entirely different face—onto a preexisting video with relatively minimal effort. Until very recently, editing facial features in a video was incredibly difficult. Even movie studios with access to professional video editing tools struggled with the task as recently as 2017, when actor Henry Cavill—portraying everyone’s favorite son of Krypton—sported a mustache he was contractually unable to remove during reshoots, leading to a widely criticized post-production digital shave. Because convincingly manipulating video was so difficult, the public has generally trusted video’s authenticity while viewing still photos more skeptically. With recent developments in artificial intelligence, that assumption must now change.