What You Need to Know If You’re Using AI-Generated Voices for Your Company

Global music superstar Taylor Swift began her music career in Nashville, so we thought it fitting that on July 1, with the end of the Eras Tour in sight, the Ensuring Likeness Voice and Image Security (ELVIS) Act went into effect in Tennessee. This marks the latest front in the effort to navigate the interplay between the capability of generative AI and the Right of Publicity for music and voice artists alike.

While the Right of Publicity is most commonly associated with a person’s Name, Image and Likeness (NIL) rights in the context of NCAA athletics, one of the primary concerns for musicians and music companies alike is an artist’s voice being cloned. Because there is no federal right of publicity, courts must rely on a patchwork of state laws to assess what liability, if any, can be incurred by using AI to recreate a human voice.

AI technology’s ability to accurately clone a singer’s voice was memorably demonstrated in April 2023, when the song “Heart On My Sleeve” was anonymously posted on YouTube and later shared across other social media platforms, including TikTok, where it garnered over 8.5 million views in a single weekend. The original video has since been removed, but the accuracy and speed with which the AI-generated results spread astounded many. Until that time, much of the conversation surrounding AI and music had focused on copyright. The song’s viral success was a wake-up call for both the music and legal industries regarding AI-generated likeness in the form of voice cloning.

Tennessee responded quickly to the song’s popularity with the ELVIS Act, which allows an individual to protect their voice against AI-enabled infringement as a bona fide intellectual property right. Although approximately 39 states have enacted or plan to propose NIL legislation, only Tennessee and California have included specific protections for the voices of musicians, and Tennessee alone provides protections specifically safeguarding against AI. As of January 2024, the federal government is working on creating nationwide voice and likeness protections under the No Artificial Intelligence Fake Replicas and Unauthorized Duplications Act (No AI FRAUD Act).

Voice cloning and the accompanying right of publicity concerns extend beyond musicians and music rights organizations. The ELVIS Act bars the use of AI to mimic a person’s voice without their consent, which extends legal protections to other celebrities as well as voice actors, podcasters and content creators. Many audio-focused tech startups, along with companies looking to integrate audio into their advertising, may soon have to evaluate the degree to which their content infringes on an artist’s rights, and whether an infringed-upon artist could in fact have a viable claim. If a music company looking to capitalize on AI’s music-making ability uses a musician’s voice in a training dataset, for example, the company should be aware of and consider the risk of plaintiffs bringing right of publicity claims. Although a plaintiff’s rights would vary by state, the impending federal legislation would likely set the baseline standard for grounds for legal action.

If, for example, a musical artist brought a false light or defamation claim based on use of an improperly cloned voice, there would be reasonable grounds to support the claim: 1) that an altered musical vocalization featuring a digital replica of the plaintiff conveys a falsehood (i.e., that the plaintiff actually did or said what is depicted in the recording) that is intended to be believed by an audience, and 2) that use of such technology amounts to a reckless disregard for the truth if not placed in the proper context (e.g., with proper disclaimers that the original recording has been altered or features a digital replica). Considering how U.S. legislation is evolving to support a musician’s cause of action, companies and individuals that may use AI-cloned voices should take heed and consider additional safeguards, such as explicit disclaimers regarding digital replicas, that may be necessary when using recorded music in AI dataset training.

Although the developments regarding generative AI and the Right of Publicity are quickly evolving, the underpinning legal questions are not new. Whether it be deepfakes and memes, copyright and photography, or the more recent questions regarding NIL rights, any company whose products, services or promotional efforts rely on content derived—directly or indirectly—from the work of others needs to have robust risk mitigation strategies in place.
