Posted

On June 13, 2023, the American Society of Composers, Authors and Publishers (ASCAP) announced a series of initiatives to guide and protect creators as artificial intelligence (AI) continues to develop and impact the music industry. ASCAP has a strong history of supporting artists, technological innovation and music royalties. Following suit, ASCAP’s AI initiatives consist of a series of events and principles that seek to promote AI education, innovation and implementation in the music industry, and to ensure that artists are justly compensated. The initiatives include the 2023 ASCAP Lab/NYC Media Lab Music and AI Challenge, the ASCAP Experience, the ASCAP AI Symposium, and ASCAP’s AI Principles and Advocacy.

Continue Reading →

Posted

On Tuesday, May 2, 2023, the U.S. Copyright Office (USCO) held the second of four sessions on the copyright implications of generative artificial intelligence (GAI), titled “Artificial Intelligence and Copyright – Visual Arts.”

The session focused on GAI issues relevant to visual works, and featured two panels with various stakeholders that brought a range of perspectives to the discussion. These panelists included representatives from GAI platform companies, graphic design software companies, think tanks, policy organizations, and law firms, as well as artists concerned by the impact of GAI.

Continue Reading →

Posted

New and emerging technologies have always carried a host of potential risks to accompany their oft-blinding potential. Just as dependably, those risks have often been ignored, glossed over or just missed as public enthusiasm waxes and companies race to bring a product to market first and most effectively. Automobiles promised to get people (and products) from one place to another at life-changing speeds, but also posed a danger to life and limb while imposing a new burden on existing infrastructure. Even as technology leaps have transitioned from appliances and aircraft to computers, connectivity and large language models (LLMs), new and untested technologies continue to outpace the government and the public’s ability to moderate them. But while one can debate what constitutes an acceptable gap between the practical and ideal when it comes to regulating, mandating and evaluating the pros and cons of new technology, societies tend to generate their own methods of informing the public and attempting to rein in the more harmful aspects of the latest thing.

Continue Reading →

Posted

As the emergence of generative AI brings new market opportunities to China, leading China-based tech giants have released or plan to release their own self-developed generative AI services. On April 11, 2023, China’s main cybersecurity and data privacy regulator, the Cyberspace Administration of China (CAC), issued a draft of its Administrative Measures on Generative Artificial Intelligence Services for public comment. (The public comment period will end on May 10, 2023.)

In “China Issues Proposed Regulations on Generative AI,” colleagues Jenny (Jia) Sheng, Chunbin Xu and Wenjun Cai break down the proposed rules, which apply to all generative AI services open to users in mainland China and are focused on cybersecurity and data privacy risks.

Posted

On April 6, 2023, the U.S. Court of Appeals for the Federal Circuit affirmed Judge Gilstrap’s ruling in SAS Institute, Inc. v. World Programming Limited, which effectively denied copyright protection to SAS Institute’s data analysis software. The decision is likely to have lasting implications for developers that seek to protect software through copyright law.

Continue Reading →

Posted

In today’s News of Note, generative AI continues to draw criticism and even a ban, but that doesn’t stop developers from pushing forward with everything from music prediction and mind-reading to talking with crabs. Plus, we look at quantum computing in health care, a new report on the impact of deep-sea rare earths mining, and much more.

Continue Reading →

Posted

Over on Pillsbury’s SourcingSpeak blog, our colleagues provide an in-depth exploration of the many concerns and considerations in play for organizations seeking to integrate AI systems into their own operations.

Continue Reading →

Posted

On February 21, 2023, the Copyright Office eclipsed its prior decisions in the area of AI authorship when it partially cancelled Kristina Kashtanova’s registration for a comic book titled Zarya of the Dawn. In doing so, the Office found that the AI program Kashtanova used—Midjourney—was primarily responsible for the visual output that the Office chose to exclude from Kashtanova’s registration. (Midjourney is an AI program that creates images from textual descriptions, much like OpenAI’s DALL-E.) The decision not only highlights tension between the human authorship requirements of copyright law and the means of expression that authors can use, but it also raises the question: Can AI-generated works ever be protected under U.S. copyright law?

Continue Reading →

Posted

On February 8, 2023, the U.S. Department of the Treasury released a report citing its “findings on the current state of cloud adoption in the sector, including potential benefits and challenges associated with increased adoption.” Treasury acknowledged that cloud adoption is an “important component” of a financial institution’s overall technology and business strategy, but also warned the industry about the harm a technical breakdown or cyberattack could have on the public given financial institutions’ reliance on a few large cloud service providers. Treasury also noted that “[t]his report does not impose any new requirements or standards applicable to regulated financial institutions and is not intended to endorse or discourage the use of any specific provider or cloud services more generally.”

Continue Reading →

Posted

Last week, Twitch banned an AI-generated production based on Seinfeld called “Nothing, Forever” from the platform for 14 days after a character named Larry Feinberg—a Jerry Seinfeld clone—made transphobic statements during his standup routine. The show’s creators blamed OpenAI’s Curie model, an older and more rudimentary model, for generating the offensive remarks; more specifically, Curie’s baked-in algorithmic bias caused it to generate the hateful comments. While jarring, the incident is by no means surprising to anyone familiar with the issue of algorithmic bias in the development and use of AI systems.

Continue Reading →