Regulators at the state and federal levels are increasing their scrutiny of businesses’ use of artificial intelligence (AI). For example, representatives from the Office of the Comptroller of the Currency and the New York State Department of Financial Services recently discussed the need to develop additional AI guidance.
It might be a little meta to have a blog post about a blog post, but there’s no way around it when the FTC publishes a post to its blog warning companies that use AI to “[h]old yourself accountable—or be ready for the FTC to do it for you.” When last we wrote about facial recognition AI, we discussed how the courts are being used to push for AI accountability and how Twitter has taken the initiative to understand the impacts of its machine learning algorithms through its Responsible ML program. Now we have the FTC weighing in with recommendations on how companies can use AI in a truthful, fair and equitable manner—along with a not-so-subtle reminder that the FTC has tools at its disposal to combat unfair or biased AI and is willing to step in and do so should companies fail to take responsibility.
As part of our ongoing coverage of the use and potential abuse of facial recognition AI, we bring you news out of Michigan, where the University of Michigan’s Law School, the American Civil Liberties Union (ACLU) and the ACLU of Michigan have filed a lawsuit against the Detroit Police Department (DPD), the DPD Police Chief, and a DPD investigator on behalf of Robert Williams—a Michigan resident who was wrongfully arrested based on “shoddy” police work that relied upon facial recognition technology to identify a shoplifter.
Interactive online platforms have become an integral part of our daily lives. While user-generated content, free from traditional editorial constraints, has spurred vibrant online communications, improved business processes and expanded access to information, it has also raised complex questions regarding how to moderate harmful online content. As the volume of user-generated content continues to grow, it has become increasingly difficult for internet and social media companies to keep pace with the moderation needs of the information posted on their platforms. Content moderation measures supported by artificial intelligence (AI) have emerged as important tools to address this challenge.
While we’ve devoted ample time to discussing areas of potential concern regarding the application of algorithms—and algorithmic bias in particular—it’s also a good time to remember that algorithmic technology is poised to make our lives better, often in ways we’ll never know about.
The sweeping use of facial recognition software across public and private sectors has raised alarm bells in communities of color, for good reason. The data that feed the software, the photographic technology in the software, the application of the software—all these factors work together against darker-skinned people.
As research continues to prove that AI is not an impartial arbiter of who’s who (or who’s what), various mechanisms are being devised to mitigate the collateral damage from facial recognition software.
We’ve previously touched on some of the issues caused by AI bias. We’ve described how facial recognition technology may result in discriminatory outcomes, and more recently, we’ve addressed a parade of “algorithmic horror shows” such as flash stock market crashes, failed photographic technology, and egregious law enforcement errors. As the use of AI technology burgeons, so, too, do the risks. In this post, we explore ways to allocate the risks caused by AI bias in contracts between developers/licensors of the products and the customers purchasing the AI systems. Drafting a contract that incentivizes the AI provider to implement non-biased techniques may be a means to limit legal liability for AI bias.
Say what you want about the digital ad you received today for the shoes you bought yesterday, but research shows that algorithms are a powerful tool in online retail and marketing. By some estimates, 80 percent of Netflix viewing hours and 33 percent of Amazon purchases are prompted by automated recommendations based on the consumer’s viewing or buying history.
But algorithms may be even more powerful where they’re less visible—which is to say, everywhere else. Between 2015 and 2019, the use of artificial intelligence technology by businesses grew by more than 270 percent, and that growth certainly isn’t limited to the private sector.