Articles Posted in Artificial Intelligence

Posted

On Tuesday, May 2, 2023, the U.S. Copyright Office (USCO) held the second of four sessions on the copyright implications of generative artificial intelligence (GAI), titled “Artificial Intelligence and Copyright – Visual Arts.”

The session focused on GAI issues relevant to visual works and featured two panels of stakeholders who brought a range of perspectives to the discussion. Panelists included representatives from GAI platform companies, graphic design software companies, think tanks, policy organizations, and law firms, as well as artists concerned about the impact of GAI.

Continue Reading →

Posted

Counterfactuals, the buzzy term being touted in the latest AI news, carry some impressive promises for the tech world. But the general idea is nothing new. Counterfactual thinking describes the human drive to conjure up alternative outcomes based on different choices that might have been made along the way; essentially, it is a way of looking at cause and effect. Counterfactual philosophy dates back to Aristotle, has been debated by historians ever since, and was even the subject of Winston Churchill’s 1931 essay, “If Lee Had Not Won the Battle of Gettysburg.” In more recent years, scientists have found that it is possible to translate such counterfactual theories into mathematical models and build them into AI systems. These programs aim to use causal reasoning to pore over mountains of data and form predictions (while explaining their logic) on things like drug performance, disease assessment, financial simulations and more.
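To make the idea a bit more concrete, here is a minimal sketch of the standard abduction-action-prediction recipe for computing a counterfactual in a toy structural causal model. The drug-dose scenario, variable names and coefficients below are illustrative assumptions, not drawn from any of the systems mentioned above.

```python
def outcome(dose: float, noise: float) -> float:
    """Toy structural equation: recovery score as a linear function of drug dose.
    The 2.0 coefficient is an assumed, made-up mechanism."""
    return 2.0 * dose + noise

# Observed (factual) data point: a hypothetical patient received dose 1.0 and scored 5.0.
observed_dose, observed_outcome = 1.0, 5.0

# 1. Abduction: recover the patient-specific noise term consistent with the observation.
noise = observed_outcome - 2.0 * observed_dose  # noise = 3.0

# 2. Action: intervene on the model, setting the dose to a counterfactual value.
counterfactual_dose = 2.0

# 3. Prediction: re-run the mechanism with the inferred noise and the new dose.
counterfactual_outcome = outcome(counterfactual_dose, noise)

print(f"Factual outcome at dose {observed_dose}: {observed_outcome}")
print(f"Counterfactual outcome at dose {counterfactual_dose}: {counterfactual_outcome}")
```

The three-step structure is what lets such models both answer "what would have happened if" questions and explain the reasoning behind the answer.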

Continue Reading →

Posted

New and emerging technologies have always carried a host of potential risks to accompany their oft-blinding potential. Just as dependably, those risks have often been ignored, glossed over or simply missed as public enthusiasm waxes and companies race to bring a product to market first and most effectively. Automobiles promised to get people (and products) from one place to another at life-changing speeds, but they also posed a danger to life and limb while imposing a new burden on existing infrastructure. Even as technology leaps have moved from appliances and aircraft to computers, connectivity and large language models (LLMs), new and untested technologies continue to outpace the government’s and the public’s ability to moderate them. But while one can debate what constitutes an acceptable gap between the practical and the ideal when it comes to regulating, mandating and evaluating the pros and cons of new technology, societies tend to generate their own methods of informing the public and attempting to rein in the more harmful aspects of the latest thing.

Continue Reading →

Posted

As the emergence of generative AI brings new market opportunities to China, leading China-based tech giants have released or plan to release their own self-developed generative AI services. On April 11, 2023, China’s main cybersecurity and data privacy regulator, the Cyberspace Administration of China (CAC), issued its draft Administrative Measures on Generative Artificial Intelligence Service for public comment. (The public comment period will end on May 10, 2023.)

In “China Issues Proposed Regulations on Generative AI,” colleagues Jenny (Jia) Sheng, Chunbin Xu and Wenjun Cai break down the proposed rules, which apply to all generative AI services open to users in mainland China and are focused on cybersecurity and data privacy risks.

Posted

Artificial intelligence is rapidly evolving, and large language models (LLMs) like ChatGPT are among the more exciting examples. Their generative capabilities have implications for our patent system, some of which are underappreciated and nonintuitive.

Under U.S. patent law, an inventor may not obtain a patent if the claimed invention would have been obvious to an artisan of ordinary skill, in view of the prior art. (See 35 U.S.C. § 103.)

Continue Reading →

Posted


In today’s News of Note, generative AI continues to draw criticism and even a ban, but that doesn’t stop developers from pushing forward with everything from music prediction and mind-reading to talking with crabs. Plus, we look at quantum computing in health care, a new report on the impact of deep-sea rare earths mining, and so much more.

Continue Reading →

Posted

Over on Pillsbury’s SourcingSpeak blog, our colleagues provide an in-depth exploration of the many concerns and considerations in play for organizations seeking to integrate AI systems into their own operations.

Continue Reading →

Posted

On February 21, 2023, the Copyright Office eclipsed its prior decisions in the area of AI authorship when it partially cancelled Kristina Kashtanova’s registration for a comic book titled Zarya of the Dawn. In doing so, the Office found that the AI program Kashtanova used—Midjourney—was primarily responsible for the visual output that the Office chose to exclude from Kashtanova’s registration. (Midjourney is an AI program that creates images from textual descriptions, much like OpenAI’s DALL-E.) The decision not only highlights tension between the human authorship requirements of copyright law and the means of expression that authors can use, but it also raises the question: Can AI-generated works ever be protected under U.S. copyright law?

Continue Reading →

Posted

Today in News of Note, an innovation may pave the way for real-world quantum computing, plus we look at the latest generative AI debuts, how the world is addressing safety concerns with AI in the military, robotic chefs and much more.

Continue Reading →

Posted

Last week, Twitch banned an AI-generated production based on Seinfeld called “Nothing, Forever” from the platform for 14 days after a character named Larry Feinberg, a Jerry Seinfeld clone, made transphobic statements during his standup routine. The show’s creators blamed OpenAI’s Curie model, an older and more rudimentary model, for generating the offensive remarks; more specifically, Curie’s baked-in algorithmic bias caused it to produce the hateful comments. While jarring, the incident is by no means surprising to anyone familiar with the issue of algorithmic bias in the development and use of AI systems.

Continue Reading →