The sweeping use of facial recognition software across public and private sectors has raised alarm bells in communities of color, for good reason. The data that feed the software, the photographic technology in the software, the application of the software—all these factors work together against darker-skinned people.
As research continues to prove that AI is not an impartial arbiter of who’s who (or who’s what), various mechanisms are being devised to mitigate the collateral damage from facial recognition software.
We’ve written frequently about distributed ledger technology (DLT) and blockchain—on the interesting variations of the technology, its ability to bolster other technologies, and its potential applications to everything from team giveaways to trading platforms (be they for cryptocurrency or energy commodities). In “Blockchain-Based Tokenization of Commercial Real Estate,” colleagues Josh D. Morton and Matt Olhausen examine the real-world application of tokenization—the process of representing a fractional ownership interest in an asset with a blockchain-based digital token—in commercial real estate.
We’ve previously touched on some of the issues caused by AI bias. We’ve described how facial recognition technology may result in discriminatory outcomes, and more recently, we’ve addressed a parade of “algorithmic horror shows” such as flash stock market crashes, failed photographic technology, and egregious law enforcement errors. As the use of AI technology burgeons, so, too, do the risks. In this post, we explore ways to allocate the risks caused by AI bias in contracts between the developers/licensors of AI products and the customers purchasing those systems. Drafting a contract that incentivizes the AI provider to implement non-biased techniques may be a means of limiting legal liability for AI bias.
Say what you want about the digital ad you received today for the shoes you bought yesterday, but research shows that algorithms are a powerful tool in online retail and marketing. By some estimates, 80 percent of Netflix viewing hours and 33 percent of Amazon purchases are prompted by automated recommendations based on the consumer’s viewing or buying history.
But algorithms may be even more powerful where they’re less visible—which is to say, everywhere else. Between 2015 and 2019, the use of artificial intelligence technology by businesses grew by more than 270 percent, and that growth certainly isn’t limited to the private sector.
“One who invites another to his home or office takes a risk that the visitor may not be what he seems, and that the visitor may repeat all he hears and observes when he leaves. But he does not and should not be required to take the risk that what is heard and seen will be transmitted by photograph or recording, or in our modern world, in full living color and hi-fi to the public at large or to any segment of it that the visitor may select.” When Ninth Circuit Judge Shirley M. Hufstedler wrote these words in 1971 about surreptitious recordings made by newsmen, she probably had no idea that a global pandemic would give new meaning to her words.
You’re in the midst of doomscrolling when you decide to take a mental health break and post a photo to your socials from a happier (pre-pandemic) time. As you search through your photos, you find a great one of yourself that a friend-of-a-friend took. You’re about to post the photo when you remember a post that you read on this very blog about the potential copyright consequences of using a photo taken by someone else. You aren’t a celebrity—yet—but you decide that it’s best to use a photo that you took yourself. A couple of minutes later you post a throwback selfie in which you are smiling as you proudly show off your very first tattoo. It took you days to decide on the design and hours for the tattoo artist to bring it to life. Even today you still get compliments on it, and some people have even recognized you solely based on the fact that you have a very big and very prominent tattoo of Pegasus riding a dragon while eating rainbow sherbet and shooting lasers from a cat. Your post starts racking up likes from your friends (and followers)—when all of a sudden you get a DM from the tattoo artist informing you that she never authorized you to display her copyrighted work on social media and demanding that you take the photo down. Unfortunately, now you’ll be spending the rest of your evening trying to figure out how any rights your tattoo artist has in works permanently inked upon your body may affect your own rights to use (and license) your own likeness.
As we’ve discussed in this space previously, the effect of AI bias, especially in connection with facial recognition, is a growing problem. In the most recent example, users discovered that the Twitter algorithm that automatically crops photos seemed to consistently crop out black faces and center white ones. It began when a user noticed that, when he used a virtual background, Zoom kept cropping out his black co-worker’s head. When he tweeted about this phenomenon, he then noticed that Twitter automatically cropped a side-by-side photo of himself and his co-worker such that the co-worker was pushed out of the frame and his own (white) face was centered. After he posted, other users began performing their own tests, generally finding the same results.
At a recent social gathering, your friends took a number of photos and circulated them to the group. You see that one shot by a friend is a particularly great photo of you, so you repost it to your social media account to share with the world. It would generally be safe to assume that nothing will come of this, much less a copyright infringement lawsuit against you by the friend who took the shot. For celebrities, this is not always the case. In the past few years, photographers and paparazzi have filed many copyright infringement lawsuits against celebrities who reposted photos of themselves that they found on the internet.