
About Face: Algorithm Bias and Damage Control

As research continues to show that AI is not an impartial arbiter of who’s who (or who’s what), various mechanisms are being devised to mitigate the collateral damage from facial recognition software.

Legislation: Since 2019, several bills have been introduced in the House or Senate to address privacy issues and algorithm bias associated with facial recognition software, including the Commercial Facial Recognition Privacy Act, the Ethical Use of Facial Recognition Act, and the Facial Recognition and Biometric Technology Moratorium Act. While none of these bills has moved forward in the current congressional quicksand, their existence gives us hope for more legislative momentum in the future.

Technology: If you can’t beat it, block it. That was the idea behind a pair of glasses, developed by Japan’s National Institute of Informatics, that uses near-infrared light to prevent facial recognition by smartphone and tablet cameras. The concept inspired artist Ewa Nowak to design Incognito, a line of minimalist masks that block facial recognition software in public and on social media.

Remediation: Any discussion of fixing algorithm bias should begin with the standard argument that it’s fundamentally unfixable. Karen Yeung, a professor at the University of Birmingham Law School, in the United Kingdom, puts it well:

“How could you eliminate, in a non-arbitrary, non-subjective way, historic bias from your dataset? You would actually be making it up. You would have your vision of your ideal society, and you would try and reflect it by altering your dataset accordingly, but you would effectively be doing that on the basis of arbitrary judgment.”

That said, the problems specific to facial recognition software seem more straightforward, and therefore potentially easier to fix, than other types of algorithm bias. For example, we know that lighter-skinned people account for the vast majority of images (perhaps as high as 81 percent) used to train this software, and we should be able to correct for that imbalance, as sketched below. We also should be able to recalibrate photographic technology that has been optimized for lighter skin.
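One common way to correct a training-set imbalance like the 81/19 split above is inverse-frequency resampling, in which under-represented groups are drawn more often until each group contributes equally to training. Here is a minimal Python sketch of that idea; the toy dataset, its labels, and the exact split are illustrative assumptions, not details from this article or from any real training set.

```python
from collections import Counter
import random

# Hypothetical toy dataset: each record pairs an image ID with a
# skin-tone group label, mirroring the rough 81/19 split cited above.
dataset = [(f"img_{i:04d}", "lighter") for i in range(81)] + \
          [(f"img_{i:04d}", "darker") for i in range(81, 100)]

# Inverse-frequency weights: the rarer a group is, the more heavily
# each of its images is weighted during sampling.
counts = Counter(label for _, label in dataset)
weights = [1.0 / counts[label] for _, label in dataset]

# Draw a rebalanced training sample (with replacement). Over many
# draws, the two groups appear in roughly equal proportion.
balanced = random.choices(dataset, weights=weights, k=len(dataset))
print(Counter(label for _, label in balanced))
```

Resampling is only one option; collecting more images of under-represented groups attacks the same imbalance at the source rather than stretching a small sample.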

Humans are the cause of algorithm bias, and humans can help mitigate it by keeping the problem front of mind from development to application.

