As previously discussed, financial services regulators are increasingly focused on how businesses use artificial intelligence (AI) and machine learning (ML) in underwriting and pricing consumer finance products. Although algorithms provide opportunities for financial services companies to offer innovative products that expand access to credit, some regulators have expressed concern that the complexity of AI/ML technology, particularly so-called “black box” algorithms, may perpetuate disparate outcomes. Companies that use AI/ML in underwriting and pricing loans must therefore have a robust fair lending compliance program and be prepared to explain how their models work.
Is Russia’s invasion of Ukraine altering the landscape of the internet? Can AI help historians decipher ancient texts? How did two siblings allegedly use a digital token to defraud investors? Explore these stories and more in today’s News of Note.
Regulators at the state and federal level are increasing their scrutiny of businesses’ use of artificial intelligence (AI). For example, representatives from the Office of the Comptroller of the Currency and the New York State Department of Financial Services recently discussed the need to develop additional AI guidance.
With artificial intelligence (AI) becoming increasingly embedded in our everyday lives, there is a corresponding need for regulation that fosters AI development and adoption in a responsible manner. The question is how governments should approach such regulation.
As the use of biometric information such as fingerprints, iris scans, facial scans, and voice prints becomes more and more common, so, too, has the number of lawsuits brought for the unauthorized use of private information and for the violation of privacy laws—including class action lawsuits. In “The Duty to Defend a Privacy Claim Arises from Even Limited Publication of Biometric Identifiers,” our colleague Sandra Kaczmarczyk examines an important recent Illinois Supreme Court decision that is “likely to be at the forefront of future coverage litigation as other state courts grapple with the coverage afforded by business insurance policies for privacy claims.”
It might be a little meta to have a blog post about a blog post, but there’s no way around it when the FTC publishes a post to its blog warning companies that use AI to “[h]old yourself accountable—or be ready for the FTC to do it for you.” When last we wrote about facial recognition AI, we discussed how the courts are being used to push for AI accountability and how Twitter has taken the initiative to understand the impacts of its machine learning algorithms through its Responsible ML program. Now we have the FTC weighing in with recommendations on how companies can use AI in a truthful, fair and equitable manner—along with a not-so-subtle reminder that the FTC has tools at its disposal to combat unfair or biased AI and is willing to step in and do so should companies fail to take responsibility.
As part of our ongoing coverage of the use and potential abuse of facial recognition AI, we bring you news out of Michigan, where the University of Michigan’s Law School, the American Civil Liberties Union (ACLU) and the ACLU of Michigan have filed a lawsuit against the Detroit Police Department (DPD), the DPD Police Chief, and a DPD investigator on behalf of Robert Williams—a Michigan resident who was wrongfully arrested based on “shoddy” police work that relied upon facial recognition technology to identify a shoplifter.
Interactive online platforms have become an integral part of our daily lives. While user-generated content, free from traditional editorial constraints, has spurred vibrant online communications, improved business processes and expanded access to information, it has also raised complex questions about how to moderate harmful online content. As the volume of user-generated content continues to grow, internet and social media companies find it increasingly difficult to keep pace with moderating the information posted on their platforms. Content moderation measures supported by artificial intelligence (AI) have emerged as important tools to address this challenge.