As the use of biometric information such as fingerprints, iris scans, facial scans, and voice prints becomes increasingly common, so, too, has the number of lawsuits brought for the unauthorized use of private information and for the violation of privacy laws—including class action lawsuits. In “The Duty to Defend a Privacy Claim Arises from Even Limited Publication of Biometric Identifiers,” our colleague Sandra Kaczmarczyk examines an important recent Illinois Supreme Court decision that is “likely to be at the forefront of future coverage litigation as other state courts grapple with the coverage afforded by business insurance policies for privacy claims.”
As part of our ongoing coverage of the use and potential abuse of facial recognition AI, we bring you news out of Michigan, where the University of Michigan’s Law School, the American Civil Liberties Union (ACLU), and the ACLU of Michigan have filed a lawsuit against the Detroit Police Department (DPD), the DPD Police Chief, and a DPD investigator on behalf of Robert Williams—a Michigan resident who was wrongfully arrested based on “shoddy” police work that relied upon facial recognition technology to identify a shoplifter.
The sweeping use of facial recognition software across public and private sectors has raised alarm bells in communities of color, for good reason. The data that feed the software, the photographic technology in the software, the application of the software—all these factors work together against darker-skinned people.
As research continues to prove that AI is not an impartial arbiter of who’s who (or who’s what), various mechanisms are being devised to mitigate the collateral damage from facial recognition software.
As we’ve discussed in this space previously, the effect of AI bias, especially in connection with facial recognition, is a growing problem. In the most recent example, users discovered that the Twitter algorithm that automatically crops photos seemed to consistently crop out black faces and center white ones. It began when a user noticed that, when using a virtual background, Zoom kept cropping out his black co-worker’s head. When he tweeted about this phenomenon, he then noticed that Twitter had automatically cropped his side-by-side photo of himself and his co-worker such that the co-worker was out of the frame and his own (white) face was centered. After he posted, other users began performing their own tests, generally finding the same results.
A sponsored post popped up on my Instagram last week that captured both my obsession with statement jewelry and my periodic check on developments in facial recognition technology: “Artist Designs Metal Jewelry to Block Facial Recognition Software from Tracking You”. Statement jewelry? Check. An indication of how stressed out people are by facial recognition technology? I think so. While the jewelry is an experimental project, it’s not a far stretch to imagine the design actually being sold and purchased.
When it comes to photos destined for the web, I’d rather be behind the camera than in front of it. However, on a recent trip to Tokyo I was reminded that photos of me, and specifically my face, are often being captured and processed by systems that are increasingly being embedded in our modern life.