A federal judge has blocked a Montana law banning the popular video sharing app TikTok, finding “little doubt” that it was “more interested in targeting China’s ostensible role in TikTok than with protecting Montana consumers.” The ruling, likely to be celebrated by consumers and free speech advocates alike, comes at a time when federal and state governments are grappling with how to regulate social media companies.
A U.S. District Court in Illinois dismissed a case brought by the Chicago-based law firm MillerKing LLC against the so-called “robot lawyer” DoNotPay, Inc. (DNP). It found that MillerKing did not have standing to bring false advertising, false association and other claims against DNP because it did not sustain concrete injuries due to DNP’s conduct. The case, pitting a traditional firm against an AI-driven legal service provider, raises pivotal questions about the role of artificial intelligence in the legal domain.
The Competition and Markets Authority (CMA), the UK’s competition regulator, announced this month that it plans to publish an update in March 2024 to its initial report on AI foundation models (published in September 2023). The update will draw on the CMA’s “significant programme of engagement” in the UK, the United States and elsewhere, launched to seek views on the initial report and its proposed competition and consumer protection principles.
For users of generative AI programs, a growing concern is potential liability resulting from infringement claims by copyright owners whose materials were used to train the AI. At its annual DevDay conference in early November, OpenAI became the latest major company to address this concern by offering to indemnify certain users of its ChatGPT chatbot.
The United Kingdom hosted an Artificial Intelligence (AI) Safety Summit on November 1 – 2 at Bletchley Park. The summit brought together those leading the AI charge, including international governments, AI companies, civil society groups and research experts, to consider the risks of AI and to discuss mitigating those risks through internationally coordinated action.
On November 1, 2023, the U.S. Supreme Court engaged in a thought-provoking deliberation concerning the intersection of the First Amendment to the U.S. Constitution and U.S. trademark law in Vidal v. Elster, Supr. Ct. Case No. 22-704. The case considers whether the refusal of the U.S. Patent and Trademark Office (USPTO) to register the phrase TRUMP TOO SMALL as a trademark violates free speech rights under the First Amendment. The case highlights the ongoing debate surrounding political expression and government regulation.
On October 18, 2023, the U.S. Court of Appeals for the Federal Circuit reversed and remanded a U.S. Trademark Trial and Appeal Board (TTAB) decision in Great Concepts, LLC v. Chutter, Inc., in which the Board canceled Great Concepts’ trademark registration based on a fraudulent Section 15 declaration of incontestability. In doing so, the Federal Circuit held that the TTAB lacks authority to cancel a registration based on a fraudulent incontestability declaration, closing the door to cancellation claims premised on fraudulent statements on which the USPTO did not rely either to issue or maintain a registration.
In a move that underscores the escalating tension between the music industry and artificial intelligence (AI), many of the world’s largest music publishers have filed a joint lawsuit against AI startup Anthropic over song lyrics. The suit alleges that Anthropic’s chatbot, Claude, scrapes lyrics from the publishers’ catalogs without permission and thereby infringes on copyrighted material. It serves as yet another example of generative AI companies facing increasing pressure over their use of intellectual property to develop groundbreaking generative AI technology.
In 2021, the Department of Homeland Security started a process of adopting regulations for mobile driver’s licenses. The Transportation Security Administration (TSA) has since begun allowing mobile driver’s licenses as identification at airports, and several states jumped on the bandwagon, offering mobile driver’s licenses through state-sponsored apps or via Apple and Google Wallet. Now, the TSA has proposed new regulations that would waive REAL ID requirements for state-issued mobile driver’s licenses, but privacy advocates including the American Civil Liberties Union (ACLU) and the Electronic Frontier Foundation (EFF) warn this move may put consumers’ personal information at risk.
Speaking at this past week’s #shifthappens Conference, I had the pleasure of discussing both the potential and pitfalls posed by generative AI with fellow panelists David Pryor Jr., Alex Tuzhilin, Julia Glidden and Gerry Petrella. Our wide-ranging discussion covered how regulators can address the privacy, security and transparency concerns that underlie this transformative technology. Though no one would deny the inherent complexity of many of these challenges, our session—as well as many other discussions during the conference—suggested some key takeaways: