
Facial Recognition, Racial Recognition and the Clear and Present Issues with AI Bias

As we’ve discussed in this space previously, AI bias, especially in connection with facial recognition, is a growing problem. In the most recent example, users discovered that Twitter’s algorithm for automatically cropping photos seemed to consistently crop out black faces and center white ones. It began when a user noticed that Zoom kept cropping out his black coworker’s head whenever the coworker used a virtual background. When he tweeted about the phenomenon, he noticed that Twitter had automatically cropped his side-by-side photo of the two of them so that his coworker was out of the frame and his own (white) face was centered. After he posted, other users performed their own tests and generally found the same results.

Twitter responded by stating that it had tested for bias before deploying the algorithm and had found no evidence of racial or gender bias. Its spokesperson did not, however, dispute what users had found, instead promising to conduct further analysis and share the results.

AI bias generally arises in three broad ways:

  • Training AI algorithms with datasets that contain inherent biases (illustrated in the sketch after this list);
  • Programming logic rules that incorporate unrecognized bias from the people designing the rules; and
  • Using the technology in ways or under circumstances that are inherently unfair.
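
To make the first category concrete, here is a minimal sketch of a dataset representation audit. The counts and group labels below are invented for illustration; real audits use far richer demographic and image-quality metadata.

```python
from collections import Counter

def audit_representation(labels):
    """Report each group's share of a training set.

    `labels` holds one group label per training photo.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    for group, n in counts.most_common():
        print(f"{group}: {n} photos ({n / total:.1%})")

# Hypothetical, deliberately skewed dataset: a model trained on it is
# rewarded mostly for performing well on the over-represented group.
training_labels = ["group A"] * 800 + ["group B"] * 120 + ["group C"] * 80
audit_representation(training_labels)
```

A skew like this does not guarantee a biased model, but it is exactly the kind of imbalance that pre-deployment testing should surface.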

For example, some critics have suggested that building a dataset from photos taken in a studio with bright lighting and clear resolution may train an algorithm on unrealistically clean inputs, which in turn produces unbalanced results during real-world use under natural lighting conditions. But obtaining more realistic “in the wild” photos can be problematic, as the easiest way to do so is to collect photos of people in circumstances where it is difficult or impractical to obtain consent. Others have pointed out that datasets shot against white backgrounds create materially different contrast values between Caucasians and people of color.
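
The contrast point can be sketched numerically. The luminance values below are invented for illustration (real measurements would average pixel regions of actual photos), but they show how a near-white backdrop yields very different subject-to-background contrast across skin tones:

```python
def michelson_contrast(subject_luminance, background_luminance):
    """Michelson contrast between a subject region and its background.

    Inputs are average luminances on a 0-255 scale.
    """
    lo, hi = sorted((subject_luminance, background_luminance))
    return (hi - lo) / (hi + lo)

WHITE_BACKGROUND = 240  # near-white studio backdrop

# Illustrative average face luminances (hypothetical, not measured data)
for label, face in [("lighter skin tone", 180), ("darker skin tone", 60)]:
    c = michelson_contrast(face, WHITE_BACKGROUND)
    print(f"{label}: contrast against white background = {c:.2f}")
```

An algorithm whose feature detectors key on edge contrast can thus end up learning from very different signals for different groups.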

The implications of AI bias are far more serious than whether your next video conference call will turn your coworker into a certain villain from The Legend of Sleepy Hollow. Research has already shown that even the best facial recognition software on the market consistently misidentifies people of color at higher rates than Caucasians. Some of these systems are already used by government agencies like DHS and by law enforcement. It was therefore only a matter of time before a facial recognition false positive contributed to the wrongful arrest of an African American man. Even though the error in that case was eventually rectified, the man was held in custody for over 30 hours before being released on bail. When combined with biased policing, flawed facial recognition software could exacerbate the disparate impact of law enforcement outcomes on persons of color.
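
The disparity that researchers report is typically measured as a per-group false match rate. Here is a minimal sketch of that computation; the comparison results and group labels are invented solely to illustrate the metric:

```python
def false_match_rate(results):
    """Fraction of different-person pairs wrongly declared a match.

    `results` is a list of (predicted_match, actually_same_person)
    booleans, one per face-pair comparison.
    """
    wrongly_matched = [pred for pred, truth in results if not truth]
    return sum(wrongly_matched) / len(wrongly_matched)

# Invented audit data: the system errs ten times as often on group B.
results_by_group = {
    "group A": [(False, False)] * 990 + [(True, False)] * 10,
    "group B": [(False, False)] * 900 + [(True, False)] * 100,
}
for group, results in results_by_group.items():
    print(f"{group}: false match rate = {false_match_rate(results):.1%}")
```

In a law enforcement context, a higher false match rate for one group translates directly into more innocent members of that group being flagged as suspects.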

Solving AI’s racial bias problem will not be simple. Some experts have suggested that bias can never be totally removed from artificial intelligence, and other research suggests that diversifying datasets and testing different algorithms does not necessarily solve the problem.

For businesses building AI platforms, as well as businesses looking to implement them, the first step toward a solution is recognizing that the problem exists and understanding the different contexts in which AI may counteract, or instead amplify, human biases. Experts also suggest ongoing monitoring of deployed AI systems for reliability, along with continued bias research. Some have suggested that the AI industry itself should invest more heavily in diversifying its workforce.
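
What ongoing monitoring might look like in practice is a recurring check that per-group error rates have not drifted apart. The sketch below is only one plausible shape for such a check; the tolerance value and group labels are our assumptions, not an industry standard:

```python
def check_bias_drift(error_rates, tolerance=0.02):
    """Flag a deployed model whose per-group error rates have diverged.

    `error_rates` maps group label -> observed error rate for the
    current monitoring window; `tolerance` is an assumed maximum
    acceptable gap between the best- and worst-served groups.
    """
    gap = max(error_rates.values()) - min(error_rates.values())
    if gap > tolerance:
        print(f"ALERT: error-rate gap {gap:.1%} exceeds tolerance {tolerance:.0%}")
    else:
        print(f"OK: error-rate gap {gap:.1%} within tolerance")

# Hypothetical weekly monitoring snapshot
check_bias_drift({"group A": 0.011, "group B": 0.058})
```

A check like this does not fix bias, but it turns “monitor the system” from an aspiration into a concrete, auditable routine.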

From a legal perspective, we can expect the landscape to shift rapidly as AI and facial recognition continue to develop. One area to watch is responsibility and accountability: when AI causes bad outcomes, will it be the developers of these systems who are held liable, or their users? Questions like this one will have far-reaching implications for legislation, contract law, privacy and litigation. One thing is certain: as AI technologies become more sophisticated (not to mention cheaper), more businesses and governments will adopt them. And as long as our society is dealing with issues of race and equality, so, too, will our legal system and social structures be wrestling with AI bias.

