While we’ve devoted ample time to discussing areas of potential concern regarding the application of algorithms—and algorithm bias in particular—it’s also worth remembering that algorithmic technology is poised to make our lives better, often in ways we’ll never know about.
Articles Posted in Algorithms
Driving Questions, Human Error and Growth Industries for Machine Learning
One of the biggest obstacles self-driving cars have to get around is the one between our ears. Even as these vehicles are hitting the streets in pilot projects, three out of four Americans aren’t comfortable with the idea of their widespread use.
“Dirty by Nature” Data Sets: Facial Recognition Technology Raises Concerns
The sweeping use of facial recognition software across public and private sectors has raised alarm bells in communities of color, for good reason. The data that feed the software, the photographic technology in the software, the application of the software—all these factors work together against darker-skinned people.
About Face: Algorithm Bias and Damage Control
As research continues to prove that AI is not an impartial arbiter of who’s who (or who’s what), various mechanisms are being devised to mitigate the collateral damage from facial recognition software.
How Can Customers Address AI Bias in Contracts with AI Providers?
We’ve previously touched on some of the issues caused by AI bias. We’ve described how facial recognition technology may result in discriminatory outcomes, and more recently, we’ve addressed a parade of “algorithmic horror shows” such as flash stock market crashes, failed photographic technology, and egregious law enforcement errors. As the use of AI technology burgeons, so, too, do the risks. In this post, we explore ways to allocate the risks of AI bias in contracts between the developers or licensors of AI products and the customers purchasing those systems. Drafting a contract that incentivizes the AI provider to implement non-biased techniques may be one means of limiting legal liability for AI bias.
Retooling AI: Algorithm Bias and the Struggle to Do No Harm
Say what you want about the digital ad you received today for the shoes you bought yesterday, but research shows that algorithms are a powerful tool in online retail and marketing. By some estimates, 80 percent of Netflix viewing hours and 33 percent of Amazon purchases are prompted by automated recommendations based on the consumer’s viewing or buying history.
But algorithms may be even more powerful where they’re less visible—which is to say, everywhere else. Between 2015 and 2019, the use of artificial intelligence technology by businesses grew by more than 270 percent, and that growth certainly isn’t limited to the private sector.
Facial Recognition, Racial Recognition and the Clear and Present Issues with AI Bias
As we’ve discussed in this space previously, the effect of AI bias, especially in connection with facial recognition, is a growing problem. In the most recent example, users discovered that Twitter’s photo algorithm, which automatically crops images, seemed to consistently crop out Black faces and center white ones. It began when a user noticed that Zoom kept cropping out his Black coworker’s head when the coworker used a virtual background. When he tweeted about this phenomenon, he then noticed that Twitter automatically cropped his side-by-side photo of the two of them so that the coworker was out of the frame and his own (white) face was centered. After he posted, other users began performing their own tests, generally finding the same results.
Digitalized Discrimination: COVID-19 and the Impact of Bias in Artificial Intelligence
As the world grapples with the impacts of the COVID-19 pandemic, we have become increasingly reliant on artificial intelligence (AI) technology. Experts have used AI to test potential treatments, diagnose individuals, and analyze other public health impacts. Even before the pandemic, businesses were increasingly turning to AI to improve efficiency and overall profit. Between 2015 and 2019, the adoption of AI technology by businesses grew more than 270 percent.
Protecting Elections: Regulating Deepfakes in Politics
The 2020 election will be unprecedented in many respects. More people will be voting by mail, and there will likely be more democratic participation online than ever before. Internet platforms and communication services will become more influential forums as people are restricted from in-person conventions and debates. But even before the pandemic pushed these operations online, some lawmakers were already seeking to monitor misinformation in political discourse on the internet, specifically in the context of deepfakes. Deepfakes are manipulated photo, video or audio clips generated by computers, often with the assistance of artificial intelligence algorithms. They can make someone appear to say or do something that never actually happened or create realistic images of people who do not exist.
The Rise of the Copyright Bots
The copyright bots have been unleashed, they have a mind of their own, and there is little that can be done to stop them.
Copyright bots, otherwise known as content recognition software, are automated programs that analyze audio and video clips uploaded to a platform, then compare those clips against a database of content provided by copyright owners to identify matches. The copyright owners can then review the identified matches to assess whether they are actually copies, whether they are authorized, and whether any action is warranted. Some programs utilizing copyright bots offer their own enforcement procedures to customers, while others are partnered with law firms that will act on behalf of the copyright owners to enforce their rights, including sending demand letters and filing lawsuits.
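The matching step described above can be sketched in a few lines of Python. This is a deliberately simplified illustration, not any platform’s actual system: real copyright bots use perceptual fingerprints that survive re-encoding and trimming, whereas this toy version uses an exact hash, and all names and data here are hypothetical.

```python
import hashlib

def fingerprint(clip_bytes: bytes) -> str:
    """Reduce a clip to a compact identifier.

    Toy stand-in for a perceptual fingerprint: a real system would use a
    hash that is robust to compression, cropping, and format changes.
    """
    return hashlib.sha256(clip_bytes).hexdigest()

def find_matches(upload: bytes, registry: dict) -> list:
    """Return titles of registered works whose fingerprint matches the upload."""
    fp = fingerprint(upload)
    return [title for title, registered_fp in registry.items() if registered_fp == fp]

# Copyright owners register their content ahead of time...
registry = {"Example Song": fingerprint(b"example-audio-bytes")}

# ...and the bot flags matching uploads for human review.
print(find_matches(b"example-audio-bytes", registry))  # → ['Example Song']
print(find_matches(b"different-audio", registry))      # → []
```

Flagged matches would then be queued for the human review step the post describes, rather than triggering enforcement automatically.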