With Facial Recognition, Responsible Users Are the Key to Effective AI

As part of our ongoing coverage of the use and potential abuse of facial recognition AI, we bring you news out of Michigan, where the University of Michigan Law School, the American Civil Liberties Union (ACLU), and the ACLU of Michigan have filed a lawsuit against the Detroit Police Department (DPD), the DPD Police Chief, and a DPD investigator on behalf of Robert Williams, a Michigan resident who was wrongfully arrested based on “shoddy” police work that relied on facial recognition technology to identify a shoplifter.

The allegations against DPD are deeply troubling. It is alleged that DPD fed a poor-quality screenshot from a surveillance video into a facial recognition program to generate a potential lead. The program “matched” the screenshot with an outdated driver’s license photo of Williams, which was then included in a six-person photo array and shown to a security contractor, who picked Williams’ photo from the array. Importantly, Williams’ complaint states that the security contractor was not present at the store on the day of the crime and based the identification solely on a review of the poor-quality surveillance video, a fact that was omitted from DPD’s warrant request. Furthermore, while DPD cited its use of a facial recognition program in its warrant request, it did not provide copies of the surveillance video, the probe image, or the six-person photo array. In essence, everything was left to the facial recognition software, without any independent cross-checking.
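To make the failure mode concrete, below is a minimal sketch of how facial recognition lead generation commonly works: face images are mapped to embedding vectors and compared by similarity. The embedding model, the 128-dimension size, and the 0.6 threshold are all illustrative assumptions, not details from the Williams case or any particular product. The key point is that the software only ranks candidates by similarity; nothing in the math verifies identity, which is why independent corroboration of any “match” is essential.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity in [-1, 1] between two face embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_match(probe, gallery, threshold=0.6):
    """Return the best-scoring gallery identity, or None below the threshold.

    Even a score above the threshold is only an investigative lead; it is
    not evidence of identity and requires independent corroboration.
    """
    best_id, best_score = None, -1.0
    for identity, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    if best_score < threshold:
        return None, best_score  # no candidate is reliable enough to report
    return best_id, best_score

# Demo with random vectors standing in for real embeddings: some gallery entry
# always scores highest, but here the best score falls below the threshold, so
# no lead is returned. A system run without such a guard (or fed a degraded,
# low-quality probe image) would simply surface the top candidate regardless.
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
probe = rng.normal(size=128)
print(top_match(probe, gallery))
```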

The magnitude of DPD’s misuse of facial recognition software was further highlighted by a public admission from the DPD Police Chief that, “If we were just to use the technology by itself, to identify someone, I would say 96 percent of the time it would misidentify.” Even if such facial recognition software were used properly, concerns would remain that it was trained on an inherently biased dataset, a problem that Williams’ complaint also identifies and offers as a basis for requesting an injunction from the Court “prohibiting Defendants from using facial recognition technology as an investigative technique so long as it misidentifies individuals at materially different rates depending on race, ethnicity, or skin tone.”
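The complaint’s phrase “materially different rates” can be made concrete with a simple measurement: on a labeled test set, compare how often the system reports a match for the wrong person within each demographic subgroup. The sketch below is a hedged illustration of that calculation; the record fields and group labels are hypothetical placeholders, not data from the lawsuit or any published audit.

```python
from collections import defaultdict

def false_match_rate_by_group(records):
    """Compute the false-match rate per subgroup.

    Each record is a dict like:
        {"group": "group_a", "predicted_match": True, "true_match": False}
    where "predicted_match" is what the system reported and "true_match"
    is the ground truth. Materially different rates across groups would
    indicate the kind of disparity the complaint describes.
    """
    errors = defaultdict(int)   # wrong-person matches per group
    totals = defaultdict(int)   # all reported matches per group
    for r in records:
        if r["predicted_match"]:
            totals[r["group"]] += 1
            if not r["true_match"]:
                errors[r["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Tiny illustrative dataset: half of group_a's reported matches are wrong,
# while none of group_b's are.
records = [
    {"group": "group_a", "predicted_match": True, "true_match": False},
    {"group": "group_a", "predicted_match": True, "true_match": True},
    {"group": "group_b", "predicted_match": True, "true_match": True},
    {"group": "group_b", "predicted_match": True, "true_match": True},
]
print(false_match_rate_by_group(records))  # {'group_a': 0.5, 'group_b': 0.0}
```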

As with all technology, facial recognition software will improve significantly over time, but it will never be perfect. It is therefore necessary that organizations, and the people within them, understand the limitations, impacts, and best practices associated with such tools. While DPD’s alleged use of its facial recognition software is a case study in what not to do when deploying AI algorithms, some companies are taking proactive steps to encourage responsible AI use. One recent example of a company taking the initiative to understand the impact of bias in software algorithms is Twitter, which announced the launch of a new initiative called “Responsible ML” that will conduct in-depth analyses and studies to assess potential harms in its machine learning algorithms, including the following (a simplified sketch of one such analysis appears after the list):

  • A gender and racial bias analysis of its image cropping algorithm.
  • A fairness assessment of the timeline recommendations across racial subgroups.
  • An analysis of its content recommendations for different political ideologies across seven countries.
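As an illustration of what one of these studies might measure, the sketch below checks an image-cropping algorithm for demographic parity: across test images that each contain one face from each of two groups, a fair cropper should center its crop on each group about half the time. The `crop_choice` function is a random placeholder standing in for the algorithm under audit; it is not Twitter’s actual cropper, and the real study’s methodology is more involved than this toy setup.

```python
import random
from collections import Counter

def crop_choice(pair):
    """Hypothetical stand-in for the cropping algorithm under audit: given the
    group labels of the two faces in an image, return the one the crop centers
    on. Random choice here models a perfectly fair cropper; a real audit would
    run the actual algorithm on rendered images."""
    return random.choice(pair)

def crop_parity(pairs):
    """Fraction of crops that favored each group across paired test images."""
    counts = Counter(crop_choice(p) for p in pairs)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# 1,000 synthetic image pairs, one face from each group per image. A fair
# cropper yields fractions near 0.5; a large, consistent skew toward one group
# would be the kind of bias such an analysis is designed to surface.
pairs = [("group_a", "group_b")] * 1000
print(crop_parity(pairs))
```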

Twitter’s steps to study the use and impact of its machine learning algorithms should be lauded. These initial studies are also an important reminder that algorithmic bias can affect many different groups (e.g., by gender, race, or political ideology) and that organizations need to be mindful of this fact. At its heart, AI is a tool, and it is incumbent on the user to understand and use that tool properly. Twitter’s Responsible ML program may become a model, or even a standard, for how organizations can gain the requisite understanding of how to use AI algorithms in an ethical and unbiased way. The most important takeaway is that organizations need to start asking these questions now if they want to reduce the risk of ending up on the receiving end of a lawsuit.

