
Artificial Intelligence and Regulatory Compliance

Regulators at the state and federal level are increasing their scrutiny of businesses’ use of artificial intelligence (AI). For example, representatives from the Office of the Comptroller of the Currency and the New York State Department of Financial Services recently discussed the need to develop additional AI guidance.

Banks, insurers and other financial institutions should monitor these developments closely and consider their compliance plans now, as faulty data can warp AI models, leading to business decisions that violate consumer protection laws.

Regulators regularly focus on two key themes relating to the use of AI products in financial services: (1) the requirement to explain how a decision was reached when using an algorithm, and (2) the ability to audit the algorithm itself. These expectations are particularly challenging for “black box algorithms,” models that deploy multiple layers of neural networks as part of an AI ecosystem. Those layers make it nearly impossible to decipher exactly how a decision was reached, but as we explain below, output review is achievable.

Wolves, Huskies and a Lesson Learned
There is a risk that AI could result in violations of consumer protection laws. A University of Washington study illustrates this point. In this study, the researchers developed a simple AI algorithm to determine whether a photo depicted a wolf or a husky. To “teach” the distinction between the two animals, the researchers trained the algorithm on many photos, some showing huskies and some showing wolves. Ultimately, when the algorithm was tested for accuracy, the results were poor: even obvious photos were mislabeled. After a root cause analysis, the researchers realized the algorithm had been learning from the background of each photo rather than the animal itself. Instead of learning what huskies and wolves looked like, the algorithm had learned a shortcut: if snow appeared in the picture, the animal was a wolf; if not, a husky.
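
To make that failure mode concrete, the short sketch below shows how a model can score well in training by leaning on a background feature that stops being predictive once conditions change. It is a minimal illustration only, assuming synthetic data and the open-source scikit-learn library; the “has_snow” feature and all numbers are invented for this example.

```python
# Minimal illustration with synthetic data: "has_snow" stands in for the photo
# background, and the two "animal" features stand in for the animal itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

def make_data(snow_matches_label):
    animal = rng.integers(0, 2, size=n)  # 0 = husky, 1 = wolf
    if snow_matches_label:
        # In the training photos, snow appears in 95% of wolf shots and 5% of husky shots.
        has_snow = np.where(rng.random(n) < 0.95, animal, 1 - animal)
    else:
        # In new photos, snow is unrelated to the animal.
        has_snow = rng.integers(0, 2, size=n)
    # Features of the animal itself are only weakly informative in this toy setup.
    animal_features = animal[:, None] + rng.normal(scale=2.0, size=(n, 2))
    return np.column_stack([has_snow, animal_features]), animal

X_train, y_train = make_data(snow_matches_label=True)
X_new, y_new = make_data(snow_matches_label=False)

model = LogisticRegression().fit(X_train, y_train)
print("accuracy on training-like photos:", round(model.score(X_train, y_train), 2))
print("accuracy once snow no longer tracks the animal:", round(model.score(X_new, y_new), 2))
```

The gap between the two accuracy figures is the “snow shortcut” in miniature: the model looks excellent on data that resembles its training set and degrades sharply once the spurious cue disappears.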

This lighthearted example underscores a serious point. What if instead of snow, wolves and huskies, a bank’s AI system learned that certain zip codes approximated a person’s race and used that determination as part of its loan decision making? What if an insurance company’s model learned to use similar information to take into account race in making coverage determinations? A husky will not lodge a consumer complaint, but someone harmed by a negative insurance coverage or loan decision will. If the training data is not carefully scrutinized, ideally by different stakeholders with different experiences and expertise, flawed results can occur. The husky/wolf dilemma underscores how innocent intentions can still produce warped results, and in a consumer setting, this can have serious consequences for customers and trigger significant financial and reputational harm for vendors.

A Sustained Effort
Rigorous evaluation of the training data is one approach, but continued evaluation of results once the algorithm is in a production environment is also key. How, though, can a financial institution review outputs when the process is obscured by multiple layers of neural networks?

Mathematical models, specifically LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations), are two tools that can be used as a check on outputs from black box algorithms. While they will not perfectly mimic the algorithm itself, these models produce rough approximations of expected outcomes: they take known inputs and, through mathematical analysis, estimate the results. If the model’s estimates diverge wildly from what the algorithm is producing, the algorithm likely needs to be studied further. This signals to business teams, compliance personnel and other employees engaged in AI deployment that they should “check their work.”
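
As a rough illustration of how such a check might look in practice, the sketch below uses the open-source lime package together with scikit-learn and entirely synthetic “loan” data. The feature names, class labels and model choice are assumptions made for illustration, not a description of any particular institution’s system.

```python
# Minimal sketch: explain one decision from an opaque model with LIME.
# All data, feature names, and labels are synthetic and illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "credit_history_years", "open_accounts"]

# Synthetic applicants and an invented approval rule used only to create labels.
X = rng.normal(size=(5000, len(feature_names)))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=5000) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stand-in for the production "black box" model.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# LIME fits a small, interpretable model around a single decision and reports
# which features pushed that decision toward "approve" or "deny".
explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["deny", "approve"],
    mode="classification",
)
explanation = explainer.explain_instance(X_test[0], black_box.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(f"{feature:35s} weight={weight:+.3f}")
```

If the features that surface in such an explanation bear no sensible relationship to creditworthiness, or the local estimates diverge sharply from the model’s actual outputs, that is the cue to study the algorithm further.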

This check must be comprehensive. Teams should study the training data, ask whether any recent anomalies in the inputs might account for the divergence, and pursue other lines of inquiry. In a speech on algorithms and economic justice, FTC Commissioner Rebecca Slaughter advised that “as an enforcer, I will see self-testing as a strong sign of good-faith efforts at legal compliance….” Output reviews and evaluation of training data may provide regulators with enough comfort to turn their investigative attention away from a company.
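
One simple way such self-testing might be operationalized, sketched below under assumed synthetic data and an illustrative alert threshold, is to fit an interpretable surrogate to the black box’s own decisions and watch whether its agreement with the production model degrades on new inputs.

```python
# Minimal sketch of an ongoing output check, using synthetic data only.
# The surrogate, the features, and the alert threshold are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Historical applications and an invented outcome used only to train the toy "black box".
X_hist = rng.normal(size=(4000, 4))
y_hist = (X_hist[:, 0] * X_hist[:, 1] + X_hist[:, 2] > 0).astype(int)
black_box = GradientBoostingClassifier(random_state=0).fit(X_hist, y_hist)

# Interpretable surrogate fit to the black box's own decisions, not to ground truth.
surrogate = LogisticRegression().fit(X_hist, black_box.predict(X_hist))

def agreement(X):
    """Share of applications where surrogate and black box reach the same decision."""
    return float(np.mean(surrogate.predict(X) == black_box.predict(X)))

baseline = agreement(X_hist)

# A new batch of applications whose distribution has shifted (e.g., a data-feed anomaly).
X_new = rng.normal(loc=[2.0, -2.0, 0.0, 0.0], size=(1000, 4))
current = agreement(X_new)

print(f"baseline agreement {baseline:.2%}, current agreement {current:.2%}")
if current < baseline - 0.05:
    print("Surrogate and black box are diverging: review training data and recent inputs.")
```

A drop in agreement does not by itself prove the model is misbehaving, but it is the kind of documented, repeatable self-test that can anchor the broader inquiry described above.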

Financial services companies using AI should nevertheless prepare for regulatory scrutiny by building a robust and well-tailored compliance program. Consumer Financial Protection Bureau Director Rohit Chopra has long signaled skepticism about the use of algorithms by banks and other lenders in underwriting and advertising decisions. He recently stated that “Algorithms can help remove bias, but black box underwriting algorithms are not creating a more equal playing field and only exacerbate the biases fed into them… If we want to move toward a society where each of us has equal opportunities, we need to investigate whether discriminatory black box models are undermining that goal.” Director Chopra also previously expressed concern, while serving as an FTC Commissioner, that machine learning and other predictive technology can produce proxies for race and other protected classes. Companies that implement explainable models and regularly validate those models through ongoing monitoring and analysis of outcomes will be best situated to respond to, and ward off, regulators’ inquiries.

Conclusion
Companies are understandably enthused about the analytical insights and efficiencies that may be gained by deploying AI. However, regulators are watching, and companies implementing AI should craft a robust compliance framework to proactively address potential regulatory scrutiny. Mathematical models are among the tools that can help explain decisions and audit results and can serve as an important component of a sound compliance program.

