
Generative AI Faces Familiar Challenges from “Baked In” Algorithmic Bias

Last week, Twitch handed a 14-day ban to “Nothing, Forever,” an AI-generated production based on Seinfeld, after a character named Larry Feinberg, a Jerry Seinfeld clone, made transphobic statements during his standup routine. The show’s creators blamed the offensive remarks on OpenAI’s Curie model, an older, more rudimentary language model, pointing to the algorithmic bias baked into it. While jarring, the incident is by no means surprising to anyone familiar with algorithmic bias in the development and use of AI systems.

Algorithmic bias occurs when an AI system is trained on data that encodes human biases, whether through what the data includes or what it leaves out, causing the system to deliver skewed results that can perpetuate harmful ideas and behaviors such as racial prejudice and gender and age discrimination. Beyond the training data itself, the researchers and engineers who build the software can also introduce bias if the team is homogeneous rather than diverse. According to Steve Nouri, Head of Data Science and AI at the Australian Computer Society, writing for Forbes, “This can create a lack of empathy for the people who face problems of discrimination, leading to an unconscious introduction of bias in these algorithmic-savvy AI systems.”
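
To make the mechanism concrete, here is a minimal, hypothetical sketch in Python. The data and the “model” are entirely fabricated for illustration; the point is simply that a system trained on skewed historical outcomes will reproduce that skew as if it were a rule.

```python
# Illustrative only: a toy "hiring" model trained on fabricated,
# deliberately skewed historical data.
from collections import Counter

# Historical outcomes in which group A was mostly hired and group B
# mostly rejected, for reasons unrelated to qualifications. This is
# the bias "baked into" the training data.
training_data = (
    [("A", "hired")] * 80 + [("A", "rejected")] * 20
    + [("B", "hired")] * 20 + [("B", "rejected")] * 80
)

def train(data):
    """'Learn' the most common outcome per group -- a stand-in for the
    statistical patterns a real model would extract from the same data."""
    outcomes = {}
    for group, label in data:
        outcomes.setdefault(group, Counter())[label] += 1
    return {group: counts.most_common(1)[0][0]
            for group, counts in outcomes.items()}

model = train(training_data)
print(model)  # {'A': 'hired', 'B': 'rejected'} -- the skew is now a "rule"
```

No one programmed a rule to reject group B; the model inferred it from the data, which is why curating training data matters as much as auditing the code.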

It’s an issue we’ve discussed in general and in detail, whether in relation to the black box conundrum, biased datasets in facial recognition software, social media moderation, or the developing compliance minefield created by increased state and federal scrutiny of businesses relying upon AI. But it’s also an issue that every company relying on, or planning to rely on, AI toolsets should continue to question, scrutinize, and adjust for, in both policy and performance.

After all, even as the latest batch of “tools you can use,” from ChatGPT’s encyclopedic assistant to DALL-E’s text-to-image generator, has provoked enthusiasm (and concern), virtually none of the accompanying dangers have truly been grappled with, let alone “tamed,” and the need to deliver unbiased results has become an imperative for social justice advocates. As is often the case with rapidly developing technologies, usage can far outpace the creation of safeguards to ensure new tools are deployed responsibly.

Nothing, Forever’s transphobic standup routine may seem far removed from the concerns of most companies using AI, but it should serve as the latest reminder that AI safety, always a challenge, is a responsibility for every company deploying the technology. Always, forever.


RELATED ARTICLES

From Chatter to ChatGPT: A Once Simple Tool Has Become an Industry Mainstay

The Black Box Conundrum: Go Weak or Stay Strong?

“Dirty by Nature” Data Sets: Facial Recognition Technology Raises Concerns