Articles Posted in Artificial Intelligence

It might be a little meta to have a blog post about a blog post, but there’s no way around it when the FTC publishes a post to its blog warning companies that use AI to “[h]old yourself accountable—or be ready for the FTC to do it for you.” When last we wrote about facial recognition AI, we discussed how the courts are being used to push for AI accountability and how Twitter has taken the initiative to understand the impacts of its machine learning algorithms through its Responsible ML program. Now we have the FTC weighing in with recommendations on how companies can use AI in a truthful, fair and equitable manner—along with a not-so-subtle reminder that the FTC has tools at its disposal to combat unfair or biased AI and is willing to step in and do so should companies fail to take responsibility.
With Facial Recognition, Responsible Users Are the Key to Effective AI
As part of our ongoing coverage of the use and potential abuse of facial recognition AI, we bring you news out of Michigan, where the University of Michigan’s Law School, the American Civil Liberties Union (ACLU) and the ACLU of Michigan have filed a lawsuit against the Detroit Police Department (DPD), the DPD Police Chief, and a DPD investigator on behalf of Robert Williams—a Michigan resident who was wrongfully arrested based on “shoddy” police work that relied upon facial recognition technology to identify a shoplifter.
Everything in Moderation: Artificial Intelligence and Social Media Content Review
Interactive online platforms have become an integral part of our daily lives. While user-generated content, free from traditional editorial constraints, has spurred vibrant online communications, improved business processes and expanded access to information, it has also raised complex questions regarding how to moderate harmful online content. As the volume of user-generated content continues to grow, it has become increasingly difficult for internet and social media companies to keep pace with the moderation demands of the information posted on their platforms. Content moderation measures supported by artificial intelligence (AI) have emerged as important tools to address this challenge.
Driving Questions, Human Error and Growth Industries for Machine Learning
One of the biggest obstacles self-driving cars have to get around is the one between our ears. Even as these vehicles are hitting the streets in pilot projects, three out of four Americans aren’t comfortable with the idea of their widespread use.
“Dirty by Nature” Data Sets: Facial Recognition Technology Raises Concerns
The sweeping use of facial recognition software across public and private sectors has raised alarm bells in communities of color, for good reason. The data that feed the software, the photographic technology in the software, the application of the software—all these factors work together against darker-skinned people.
About Face: Algorithm Bias and Damage Control
How Can Customers Address AI Bias in Contracts with AI Providers?
We’ve previously touched on some of the issues caused by AI bias. We’ve described how facial recognition technology may result in discriminatory outcomes, and more recently, we’ve addressed a parade of “algorithmic horror shows” such as flash stock market crashes, failed photographic technology, and egregious law enforcement errors. As the use of AI technology burgeons, so, too, do the risks. In this post, we explore ways to allocate the risks of AI bias in contracts between the developers/licensors of AI products and the customers purchasing those systems. Drafting a contract that incentivizes the AI provider to implement non-biased techniques may be one way to limit legal liability for AI bias.
Retooling AI: Algorithm Bias and the Struggle to Do No Harm
Say what you want about the digital ad you received today for the shoes you bought yesterday, but research shows that algorithms are a powerful tool in online retail and marketing. By some estimates, 80 percent of Netflix viewing hours and 33 percent of Amazon purchases are prompted by automated recommendations based on the consumer’s viewing or buying history.
But algorithms may be even more powerful where they’re less visible—which is to say, everywhere else. Between 2015 and 2019, the use of artificial intelligence technology by businesses grew by more than 270 percent, and that growth certainly isn’t limited to the private sector.
Facial Recognition, Racial Recognition and the Clear and Present Issues with AI Bias
As we’ve discussed in this space previously, the effect of AI bias, especially in connection with facial recognition, is a growing problem. In the most recent example, users discovered that Twitter’s automatic photo-cropping algorithm seemed to consistently crop out black faces and center white ones. It began when a user noticed that, when he used a virtual background, Zoom kept cropping out his black co-worker’s head. When he tweeted about this phenomenon, he then noticed that Twitter automatically cropped a side-by-side photo of himself and his co-worker such that the co-worker was out of the frame and his own (white) face was centered. After he posted, other users began performing their own tests, generally finding the same results.
Digitalized Discrimination: COVID-19 and the Impact of Bias in Artificial Intelligence
As the world grapples with the impacts of the COVID-19 pandemic, we have become increasingly reliant on artificial intelligence (AI) technology. Experts have used AI to test potential treatments, diagnose individuals, and analyze other public health impacts. Even before the pandemic, businesses were increasingly turning to AI to improve efficiency and overall profit. Between 2015 and 2019, the adoption of AI technology by businesses grew more than 270 percent.