In this week’s News of Note, ransomware attacks break records and wipe data for a majority of a cloud provider’s customers, while one RaaS case delivers useful details about cybercriminal techniques and tactics. Also, the development of algorithms to protect against quantum computers continues, facial recognition software nabs an elderly criminal, and more.
Whose content is it anyway? This is one of the questions that many hope will be answered by a federal court in Thaler v. Perlmutter. In June 2022, computer scientist Dr. Stephen Thaler sued the U.S. Copyright Office to redress the denial of his application to register copyright in his AI system’s visual output under the Office’s “Human Authorship Requirement.” A few months later, the Office again enforced this requirement, reversing its decision to register Kristina Kashtanova’s illustrated comic book, Zarya of the Dawn, after it became clear that AI was used to generate those images. While Thaler now asks a U.S. federal court to determine whether an AI system can author copyrightable work, and to effectively overrule the Office’s “Human Authorship Requirement,” it remains to be seen whether the court will tackle those broad issues, or instead narrowly focus on whether the Office had reasonable grounds to deny Thaler’s application.
On June 13, 2023, the American Society of Composers, Authors and Publishers (ASCAP) announced a series of initiatives to guide and protect creators as artificial intelligence (AI) continues to develop and impact the music industry. ASCAP has a strong history of supporting artists, technological innovation and music royalties. Following suit, ASCAP’s AI initiatives consist of a series of events and principles that seek to promote AI education, innovation and implementation in the music industry, and to ensure that artists are justly compensated. The initiatives include the 2023 ASCAP Lab/NYC Media Lab Music and AI Challenge, the ASCAP Experience, the ASCAP AI Symposium, and ASCAP’s AI Principles and Advocacy.
The European Union (EU) has made steady progress in shaping its proposed AI law, known as the “AI Act.” With the European Parliament approving its preferred version, the AI Act has now entered the final stage of the legislative process (a three-way negotiation, known as “trilogue”). The aim is to agree to a final version of the law by the end of 2023. The EU’s objective is to ensure that AI developed and utilized within Europe aligns with the region’s values and rights, including ensuring human oversight, safety, privacy, transparency, non-discrimination, and social and environmental well-being.
On May 31, 2023, the U.S. Copyright Office (USCO) held the final session in its Spring 2023 series of AI listening sessions. The session was organized into two panels and addressed the copyright implications of AI-generated content (AIGC) in music and sound recordings. The panelists included a range of stakeholders in the music industry, such as founders of AIGC music companies, songwriters, professors, and counsel to music and streaming companies.
The use of generative AI tools, like ChatGPT, is becoming increasingly popular in the workplace. Generative AI tools include artificial intelligence chatbots powered by “large language models” (LLMs), which learn from (and share) vast amounts of accumulated text and interactions, often drawn from broad snapshots of the internet. These tools can interact with users in a conversational and iterative way, with a human-like personality, and perform a wide range of tasks, such as generating text, analyzing and solving problems, translating languages, summarizing complex content or even generating code for software applications. For example, in a matter of seconds they can provide a draft marketing campaign, generate corresponding website code, or write customer-facing emails.
On Tuesday, May 2, 2023, the U.S. Copyright Office (USCO) held the second of four sessions on the copyright implications of generative artificial intelligence (GAI), titled “Artificial Intelligence and Copyright – Visual Arts.”
The session focused on GAI issues relevant to visual works and featured two panels of stakeholders who brought a range of perspectives to the discussion. The panelists included representatives from GAI platform companies, graphic design software companies, think tanks, policy organizations and law firms, as well as artists concerned about the impact of GAI.
Counterfactuals, the buzzy term being touted in the latest AI news, carry some impressive promises for the tech world. But the general idea is nothing new. Counterfactual thinking describes the human drive to conjure up alternative outcomes based on different choices that might be made along the way; essentially, it’s a way of looking at cause and effect. Counterfactual philosophy dates back to Aristotle, has been debated by historians over the centuries, and was even taken up by Winston Churchill in his 1931 essay, “If Lee Had Not Won the Battle of Gettysburg.” In more recent years, scientists have found that it’s possible to translate such counterfactual theories into complex math equations and plug them into AI models. These programs aim to use causal reasoning to pore over mountains of data and form predictions (and explain their logic) about things like drug performance, disease assessment, financial simulations and more.
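For readers curious how such a counterfactual query actually runs, the short Python sketch below walks through the standard abduction-action-prediction recipe on a toy structural causal model. It is purely illustrative and not drawn from any system mentioned above; the mechanism, variable names and numbers are assumptions made up for the example.

```python
# Illustrative counterfactual query on a toy structural causal model (SCM).
# Assumed mechanism (invented for this example): response = 2 * dose + noise,
# where "noise" stands in for everything specific to the individual case.

def response(dose: float, noise: float) -> float:
    """Hypothetical causal mechanism linking a drug dose to a measured response."""
    return 2.0 * dose + noise

# Factual observation: a dose of 1.0 produced a response of 3.5.
observed_dose, observed_response = 1.0, 3.5

# 1. Abduction: infer the latent noise consistent with what was observed.
noise = observed_response - 2.0 * observed_dose      # -> 1.5

# 2. Action: pose the counterfactual choice ("what if the dose had been 2.0?").
counterfactual_dose = 2.0

# 3. Prediction: replay the same mechanism with the same noise term.
counterfactual_response = response(counterfactual_dose, noise)

print(f"Had the dose been {counterfactual_dose}, the predicted response is {counterfactual_response}")
```

The same three-step pattern applies when the mechanisms are learned from data rather than written by hand, which is what the causal-reasoning AI programs described above aim to do at scale.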
New and emerging technologies have always carried a host of potential risks to accompany their oft-blinding potential. Just as dependably, those risks have often been ignored, glossed over or simply missed as public enthusiasm waxes and companies race to bring a product to market first and most effectively. Automobiles promised to get people (and products) from one place to another at life-changing speeds, but they also posed a danger to life and limb while imposing a new burden on existing infrastructure. Even as technology leaps have transitioned from appliances and aircraft to computers, connectivity and large language models (LLMs), new and untested technologies continue to outpace the government’s and the public’s ability to moderate them. But while one can debate what constitutes an acceptable gap between the practical and the ideal when it comes to regulating, mandating and evaluating the pros and cons of new technology, societies tend to generate their own methods of informing the public and attempting to rein in the more harmful aspects of the latest thing.
As the emergence of generative AI brings new market opportunities to China, leading China-based tech giants have released, or plan to release, their own self-developed generative AI services. On April 11, 2023, China’s main cybersecurity and data privacy regulator, the Cyberspace Administration of China (CAC), issued its draft Administrative Measures on Generative Artificial Intelligence Service for public comment. (The public comment period will end on May 10, 2023.)
In “China Issues Proposed Regulations on Generative AI,” colleagues Jenny (Jia) Sheng, Chunbin Xu and Wenjun Cai break down the proposed rules, which apply to all generative AI services open to users in mainland China and are focused on cybersecurity and data privacy risks.