In another case of the law trying to keep pace with evolving technology, legislators are introducing bills to punish those who create false images that purport to be real. Targeting the rise of computer-generated imagery that has become increasingly accessible to the public, on February 14, 2019, California Assemblyman Marc Berman introduced a bill to create criminal liability for making or distributing a “deepfake.” Deepfakes are computer-generated media, often audiovisual recordings, that appear real but are fabricated, frequently with the help of artificial-intelligence-enhanced algorithms.
Developments in deep learning algorithms are fueling the creation of photorealistic images from simple instructions. For example, NVIDIA’s research team is developing software that processes millions of images of real-world landscapes and then lets a user paint a fake landscape: the user fills in broad swatches of color, and the software renders them as realistic imagery.
But these computer-imagery algorithms are not limited to landscape photography; they also work with human faces. The people shown below do not exist. The images were created by a computer that was taught what a human face looks like by studying 70,000 pictures of real people.
Drawing on that database of actual images, the algorithm was then asked to generate realistic-looking faces. While in this instance the computer was asked to create images of new, nonexistent people, similar technology exists to create images of the same person in different contexts.
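For readers curious about the mechanics, the adversarial training idea behind such image generators (a “generative adversarial network,” or GAN) can be sketched in miniature. The toy Python program below is a hypothetical illustration only: it substitutes one-dimensional numbers for face images and pits a tiny “generator” against a “discriminator” until the generated samples statistically resemble the real ones. Every name and parameter here is an illustrative assumption, not the actual software discussed above.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Numerically safe logistic function used by the discriminator.
    x = max(-60.0, min(60.0, x))
    return 1.0 / (1.0 + math.exp(-x))

# Generator: maps random noise z to a sample via g(z) = a*z + b.
a, b = 1.0, 0.0
# Discriminator: scores samples via d(x) = sigmoid(w*x + c).
w, c = 0.0, 0.0
lr, batch = 0.05, 64

for step in range(2000):
    # "Real" data stands in for the database of real photographs:
    # samples drawn from a normal distribution with mean 4.
    x_real = [random.gauss(4.0, 1.0) for _ in range(batch)]
    z = [random.gauss(0.0, 1.0) for _ in range(batch)]
    x_fake = [a * zi + b for zi in z]

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    d_real = [sigmoid(w * x + c) for x in x_real]
    d_fake = [sigmoid(w * x + c) for x in x_fake]
    grad_w = (sum((dr - 1) * xr for dr, xr in zip(d_real, x_real))
              + sum(df * xf for df, xf in zip(d_fake, x_fake))) / batch
    grad_c = (sum(dr - 1 for dr in d_real) + sum(d_fake)) / batch
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: adjust a, b so the discriminator scores fakes as real.
    d_fake = [sigmoid(w * x + c) for x in x_fake]
    grad_a = sum((df - 1) * w * zi for df, zi in zip(d_fake, z)) / batch
    grad_b = sum((df - 1) * w for df in d_fake) / batch
    a -= lr * grad_a
    b -= lr * grad_b

# After training, generated samples should cluster near the real mean of 4.
fakes = [a * random.gauss(0.0, 1.0) + b for _ in range(10000)]
fake_mean = sum(fakes) / len(fakes)
print(f"generated mean: {fake_mean:.2f} (real mean is 4.0)")
```

Real deepfake systems apply the same adversarial principle to millions of image pixels with deep neural networks rather than a two-parameter line, which is what makes the results photorealistic.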
One year ago, BuzzFeed demonstrated that the technology had already reached the point where we can no longer judge the authenticity of a video by its realism. In a stunning wake-up call, a purported public service announcement by Barack Obama on the dangers posed by this technology was revealed to be entirely computer generated.
The dangers of realistic but fake videos are obvious to anyone who has seen them. Video has traditionally carried enhanced credibility because it was difficult, if not impossible, to fake. That is rapidly changing. The Barack Obama video required significant human intervention to create, but much has changed in the past year. If realistic videos of world leaders saying anything you want can be generated on a whim, the political consequences are dangerous: such videos are readily weaponized in propaganda and disinformation campaigns. For this reason, technology to detect deepfakes is being developed in parallel. The Department of Defense, for example, is building tools to forensically analyze media and determine whether a video is genuine.
But sometimes detection is not enough, and efforts have been made to criminalize content created to defame a person. California Assembly Bill 602 proposes two new crimes: (i) creating a deceptive recording that is likely to deceive, defame, slander, or embarrass the subject of the recording, and (ii) willfully distributing such a recording. A deceptive recording is defined as an audio or video recording created or altered in such a manner that it falsely appears, to a reasonable observer, to be an authentic recording of a person’s actual speech or actions. (Satire and parody would be excluded.)
The key elements of the proposed bill thus turn on two requirements: the recording must be realistic enough to create an illusion of authenticity, and it must be likely to deceive, defame, slander, or embarrass its subject.
The bill was modeled on statutes that criminalize creating and distributing recordings of a subject’s intimate parts for the purpose of embarrassing the subject. The new crime would nonetheless be novel: it punishes the creation and distribution of recordings of acts that never actually happened but that appear so realistic a reasonable viewer would believe they did (again, provided the recording is embarrassing, defamatory, or slanderous to its subject).
The law appears narrowly crafted to address situations where a person may face embarrassment. The landscape picture, the images of fake people, and the public service announcement, for instance, would not seem to be prohibited by the proposed bill: the landscape image involves no person, none of the faces in the collage belong to real people, and the public service announcement is not embarrassing. Yet it takes little effort to imagine situations in which images of a false landscape, or of false people in certain contexts, could fuel disinformation.
This bill is still under consideration by the California Legislature, and its exact fate is uncertain. What is clear is that the public, and the companies whose business models depend on that public’s trust, must be increasingly vigilant about deepfakes and skeptical of videos even when they seem authentic.