The sweeping use of facial recognition software across public and private sectors has raised alarm bells in communities of color, for good reason. The data that feed the software, the photographic technology in the software, the application of the software—all these factors work together against darker-skinned people.
As research continues to prove that AI is not an impartial arbiter of who’s who (or who’s what), various mechanisms are being devised to mitigate the collateral damage from facial recognition software.
We’ve previously touched on some of the issues caused by AI bias. We’ve described how facial recognition technology may result in discriminatory outcomes, and more recently, we’ve addressed a parade of “algorithmic horror shows” such as flash stock market crashes, failed photographic technology, and egregious law enforcement errors. As the use of AI technology burgeons, so, too, do the risks. In this post, we explore ways to allocate the risks caused by AI bias in contracts between the developers/licensors of AI products and the customers purchasing those systems. Drafting a contract that incentivizes the AI provider to implement bias-mitigation techniques may be a means of limiting legal liability for AI bias.
Say what you want about the digital ad you received today for the shoes you bought yesterday, but research shows that algorithms are a powerful tool in online retail and marketing. By some estimates, 80 percent of Netflix viewing hours and 33 percent of Amazon purchases are prompted by automated recommendations based on the consumer’s viewing or buying history.
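To make the mechanism concrete, here is a bare-bones sketch of item-based collaborative filtering, one common way such history-driven recommendations are generated. The titles and ratings below are hypothetical, and production recommenders at companies like Netflix or Amazon are vastly more sophisticated; this is an illustration of the general idea, not any company’s actual system.

```python
# A minimal sketch of item-based collaborative filtering, the rough idea
# behind "viewers who watched X also watched Y" recommendations.
# All titles and ratings here are hypothetical.
from math import sqrt

# Each user's viewing history: {user: {title: rating}}
history = {
    "alice": {"Drama A": 5, "Comedy B": 3, "Thriller C": 4},
    "bob":   {"Drama A": 4, "Thriller C": 5, "Documentary D": 2},
    "carol": {"Comedy B": 4, "Documentary D": 5},
}

def cosine_similarity(a, b):
    """Cosine similarity between two {user: rating} vectors."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[u] * b[u] for u in shared)
    norm_a = sqrt(sum(r * r for r in a.values()))
    norm_b = sqrt(sum(r * r for r in b.values()))
    return dot / (norm_a * norm_b)

# Invert the history into per-title rating vectors: {title: {user: rating}}
items = {}
for user, ratings in history.items():
    for title, rating in ratings.items():
        items.setdefault(title, {})[user] = rating

def recommend(user, top_n=2):
    """Score unseen titles by their similarity to titles the user rated."""
    seen = history[user]
    scores = {}
    for title, vector in items.items():
        if title in seen:
            continue
        scores[title] = sum(
            cosine_similarity(vector, items[t]) * r for t, r in seen.items()
        )
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("alice"))  # e.g. ['Documentary D']
```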
But algorithms may be even more powerful where they’re less visible—which is to say, everywhere else. Between 2015 and 2019, the use of artificial intelligence technology by businesses grew by more than 270 percent, and that growth certainly isn’t limited to the private sector.
As we’ve discussed in this space previously, the effect of AI bias, especially in connection with facial recognition, is a growing problem. In the most recent example, users discovered that the Twitter algorithm that automatically crops photos seemed to consistently crop out black faces and center white ones. It began when a user noticed that, whenever his black co-worker used a virtual background, Zoom kept cropping out the co-worker’s head. When he tweeted about this phenomenon, he then noticed that Twitter had automatically cropped a side-by-side photo of himself and his co-worker such that the co-worker was out of the frame and his own (white) face was centered. After he posted, other users ran their own tests and generally found the same results.
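Twitter’s cropper reportedly relied on a machine-learned saliency model, which predicts which parts of an image a viewer will look at and crops around them. The general technique can be sketched with off-the-shelf tools; the example below uses OpenCV’s classical spectral-residual saliency detector and hypothetical file names. It illustrates the approach, not Twitter’s actual algorithm.

```python
# A minimal sketch of saliency-based auto-cropping: estimate which pixels
# are "interesting," then center a fixed-size crop on the most salient
# point. Whatever the model scores as less salient falls out of the frame.
# Requires opencv-contrib-python; file names are hypothetical.
import cv2
import numpy as np

image = cv2.imread("side_by_side.jpg")

# Estimate a saliency map: brighter values mark more "interesting" pixels.
saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
ok, saliency_map = saliency.computeSaliency(image)
assert ok, "saliency computation failed"

# Center the crop on the single most salient point.
y, x = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
crop_h, crop_w = 300, 300
h, w = image.shape[:2]
top = min(max(y - crop_h // 2, 0), max(h - crop_h, 0))
left = min(max(x - crop_w // 2, 0), max(w - crop_w, 0))
cropped = image[top : top + crop_h, left : left + crop_w]

cv2.imwrite("cropped.jpg", cropped)
```

If the saliency model systematically scores lighter-skinned faces as more salient, a crop like this will reproduce exactly the behavior users reported.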
As the world grapples with the impacts of the COVID-19 pandemic, we have become increasingly reliant on artificial intelligence (AI) technology. Experts have used AI to test potential treatments, diagnose individuals, and analyze other public health impacts. Even before the pandemic, businesses were increasingly turning to AI to improve efficiency and overall profit.
The 2020 election will be unprecedented in many respects. More people will be voting by mail, and there will likely be more democratic participation online than ever before. Internet platforms and communication services will become more influential forums as people are restricted from in-person conventions and debates. But even before the pandemic pushed these operations online, some lawmakers were already seeking to monitor misinformation in political discourse on the internet, specifically in the context of deepfakes. Deepfakes are manipulated photo, video, or audio clips generated by computers, often with the assistance of artificial intelligence algorithms. They can make someone appear to say or do something that never actually happened, or create realistic images of people who do not exist.
Copyright bots, otherwise known as content recognition software, are automated programs that analyze audio and video clips uploaded to a platform, then compare those clips against a database of content provided by copyright owners to identify matches. The copyright owners can then review the identified matches to assess whether they are actually copies, whether they are authorized, and whether any action is warranted. Some programs utilizing copyright bots offer their own enforcement procedures to customers, while others are partnered with law firms that will act on behalf of the copyright owners to enforce their rights, including by sending demand letters and filing lawsuits.
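As a rough illustration, the matching step can be sketched with a simple perceptual hash: near-identical frames or images produce hashes that differ in only a few bits, so a small Hamming distance flags a likely copy. Real content recognition systems, such as YouTube’s Content ID, use far more robust audio and video fingerprints; the file names and match threshold below are hypothetical.

```python
# A minimal sketch of how a copyright bot might match uploads against a
# reference database, using a simple perceptual "average hash."
# File names and the threshold are hypothetical.
from PIL import Image

def average_hash(path, size=8):
    """Downscale to size x size grayscale, then encode each pixel as one
    bit: 1 if brighter than the mean, 0 otherwise."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming_distance(h1, h2):
    """Number of differing bits; small distances suggest near-duplicates."""
    return bin(h1 ^ h2).count("1")

# Hashes of reference frames supplied by copyright owners.
reference_db = {
    "studio_film_frame.png": average_hash("studio_film_frame.png"),
}

# Flag an upload if any reference hash is within the match threshold.
upload_hash = average_hash("user_upload.png")
for name, ref_hash in reference_db.items():
    if hamming_distance(upload_hash, ref_hash) <= 5:
        print(f"Possible match with {name}; queue for owner review")
```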
Much like humans, bots come in all shapes and sizes. In social media networks, these bots can like what you post and even increase your followers. Companies use bots for all types of things—from booking a ride to giving makeup tutorials. Some bots can even solve your legal problems. Besides saving time and money, bots have the potential to reduce errors and increase a business’s customer base. But what happens when bots spy on users and share personal information? Or when they make racial slurs and offensive comments?
When it comes to photos destined for the web, I’d rather be behind the camera than in front of it. However, a recent trip to Tokyo reminded me that photos of me, and specifically of my face, are often being captured and processed by systems increasingly embedded in modern life.