A U.S. District Court in Illinois dismissed a case by the Chicago-based law firm MillerKing LLC against the so-called “robot lawyer” DoNotPay, Inc. (DNP). It found that MillerKing did not have standing to bring false advertising, false association and other claims against DNP because it did not sustain concrete injuries due to DNP’s conduct. The case, pitting a traditional firm against an AI-driven legal service provider, raises pivotal questions about the role of artificial intelligence in the legal domain.
Articles Posted in Artificial Intelligence
The Impact of AI Foundation Models on Competition, Consumers and Regulation: A View from the UK’s CMA
The Competition and Markets Authority (CMA), the UK’s competition regulator, announced this month that it plans to publish an update in March 2024 to its initial report on AI foundation models (published in September 2023). The update will be the result of the CMA launching a “significant programme of engagement” in the UK, the United States and elsewhere to seek views on the initial report and its proposed competition and consumer protection principles.
OpenAI Joins Other Generative AI Companies in Offering Indemnity for Users Against (Some) Third-Party Infringement Claims
For users of generative AI programs, a growing concern has been with potential liability resulting from infringement claims by copyright owners whose materials were used to train the AI. At its annual DevDay conference in early November, OpenAI became the latest major company to address this by offering to indemnify certain users of its ChatGPT chatbot.
Key Takeaways from the UK’s AI Summit: The Bletchley Declaration
The United Kingdom hosted an Artificial Intelligence (AI) Safety Summit on November 1–2 at Bletchley Park with the purpose of bringing together those leading the AI charge, including international governments, AI companies, civil society groups and research experts, to consider the risks of AI and to discuss AI risk mitigation through internationally coordinated action.
New Lawsuit Challenges AI Scraping of Song Lyrics
In a move that underscores the escalating tension between the music industry and artificial intelligence (AI), many of the world’s largest music publishers have filed a joint lawsuit against AI startup Anthropic over song lyrics. The suit alleges that Anthropic’s chatbot, Claude, scrapes lyrics from the publishers’ catalogs without permission and thereby infringes on copyrighted material. It serves as yet another example of generative AI companies facing increasing pressure over their use of intellectual property to develop groundbreaking generative AI technology.
5 Important Takeaways from the 2023 #shifthappens Conference
In speaking at this past week’s #shifthappens Conference, I had the pleasure of discussing both the potential and pitfalls posed by generative AI with fellow panelists David Pryor Jr., Alex Tuzhilin, Julia Glidden and Gerry Petrella. Our wide-ranging discussion covered how regulators can address the privacy, security and transparency concerns that underlie this transformative technology. Though no one would deny the inherent complexity of many of these challenges, our session, as well as many other discussions during the conference, suggested some key takeaways:
“Copyright Implications of Generative AI” and the 2023 AIPLA Annual Meeting
On October 20, at 9:15 a.m., colleague and frequent contributor Sam Eichner will present on “Copyright Implications of Generative AI” during the Copyright and Trademark track at the 2023 AIPLA Annual Meeting.
The event will host over 1,000 IP practitioners and leaders and cover a wide range of IP-related topics, including the ethical implications of AI in research and development, trademarks and the First Amendment, Standard Essential Patents licensing, the PTAB, Section 101, and transformative fair use.
For more information, please see the event page.
Stand-Alone AI-Generated Content Is Not Copyrightable
On August 18, 2023, the U.S. District Court for the District of Columbia denied Dr. Stephen Thaler’s motion for summary judgment and granted the U.S. Copyright Office’s cross-motion, dismissing Thaler’s complaint. The facts of Thaler’s struggle to overcome the Copyright Office’s Human Authorship Requirement and register copyright in an AI-generated work are recounted here.
News of Note for the Internet-Minded (8/30/23) – Ransomware, Quantum Attacks and a New LLM
In this week’s News of Note, ransomware attacks break records and wipe data for a majority of a cloud provider’s customers, while one RaaS case delivers useful details about cybercriminal techniques and tactics. Also, the development of algorithms to protect against quantum computers continues, facial recognition software nabs an elderly criminal, and more.
Who (If Anyone) Owns AI-Generated Content?
Whose content is it anyway? This is one of the questions that many hope will be answered by a federal court in Thaler v. Perlmutter. In June 2022, computer scientist Dr. Stephen Thaler sued the U.S. Copyright Office to redress the denial of his application to register copyright in his AI system’s visual output under the Office’s “Human Authorship Requirement.” A few months later, the Office again enforced this requirement, reversing its decision to register Kristina Kashtanova’s illustrated comic book, Zarya of the Dawn, after it became clear that AI was used to generate those images. While Thaler now asks a U.S. federal court to determine whether an AI system can author copyrightable work, and to effectively overrule the Office’s “Human Authorship Requirement,” it remains to be seen whether the court will tackle those broad issues, or instead narrowly focus on whether the Office had reasonable grounds to deny Thaler’s application.