Counterfactuals, the buzzy term being touted in the latest AI news, carry some impressive promises for the tech world. But the general idea is nothing new. Counterfactual thinking describes the human drive to conjure up alternative outcomes based on different choices that might be made along the way—essentially, it’s a way of looking at cause and effect. Counterfactual philosophy dates back to Aristotle; along the way it has been debated by historians and even explored by Winston Churchill in his 1931 essay, “If Lee Had Not Won the Battle of Gettysburg.” In more recent years, scientists have found it possible to translate counterfactual theories into mathematical form and build them into AI models. These programs aim to use causal reasoning to pore over mountains of data and form predictions—while explaining their logic—on things like drug performance, disease assessment, financial simulations and more.
As generative AI quickly permeates every facet of modern life, regulators are beginning to address challenges with bias and the reliability of the tech. A big problem is that, until now, AI has operated as a black box: it can spit out answers with ease, but the reasoning behind those answers is hidden. If, say, somebody is denied a bank loan based on an AI algorithm, it matters why the program said no. Working backwards through the countless calculations an AI model makes is complicated but increasingly necessary. Counterfactual AI models aim to address this conundrum by not only making accurate predictions, but also explaining exactly which data point, or cause, led to each prediction. Ahead, we look at several industries that could see big benefits from counterfactual applications in AI:
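The loan-denial idea above can be made concrete with a tiny sketch: given a model's "no", search for the smallest change to one input that flips the answer to "yes". The model, thresholds and numbers below are invented for illustration; real systems would use a trained model and search over many features at once.

```python
# A minimal counterfactual-explanation sketch for a loan decision.
# The scoring rule and all numbers here are hypothetical.

def loan_model(income, debt_ratio):
    """Toy stand-in for a black-box model: approve if the score is positive."""
    score = 0.05 * income - 2.0 * debt_ratio - 1.0
    return score > 0

def counterfactual_income(income, debt_ratio, step=1.0, max_steps=10_000):
    """Find the smallest income increase that flips a denial to an approval."""
    if loan_model(income, debt_ratio):
        return 0.0  # already approved, no change needed
    for i in range(1, max_steps + 1):
        if loan_model(income + i * step, debt_ratio):
            return i * step
    return None  # no counterfactual found within the search range

applicant = {"income": 30.0, "debt_ratio": 0.4}  # income in $1,000s
delta = counterfactual_income(**applicant)
print(f"Approval would require roughly ${delta:.0f}k more annual income")
```

The answer ("you would have been approved with about $7k more income") is exactly the kind of human-readable justification regulators are asking for, because it names the cause of the decision rather than the raw model internals.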
- Music Streaming. A team at Spotify is developing a machine learning program that would use counterfactual analysis to predict things like what song a specific user may want to hear next, rather than modeling an average user as current systems do. The program might also guide artists on when to drop a new song, based on an understanding of what drives listeners’ habits. The researchers are among the first to build a multi-purpose system around twin models. Simply put, their program links a model of a fictional world with a model of the real world, allowing any variable to be swapped in and investigated.
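The twin-world idea can be sketched in a few lines: keep the same structural equations and the same individual "noise" recovered from the real world, change one variable in the fictional twin, and re-run. The variables and equation below are invented for illustration and are not Spotify's actual system.

```python
# A toy twin-world counterfactual with one structural equation.
# All variable names, coefficients and numbers are hypothetical.

def listening(mood, recommended, noise):
    # Structural equation: listening depends on the user's mood,
    # whether the song was recommended, and user-specific noise.
    return 2.0 * mood + 1.5 * recommended + noise

# Real world: observe the outcome and recover the hidden noise term.
observed = {"mood": 0.6, "recommended": 1, "listening": 3.1}
noise = observed["listening"] - 2.0 * observed["mood"] - 1.5 * observed["recommended"]

# Twin world: same user (same noise), but the song is NOT recommended.
counterfactual = listening(observed["mood"], recommended=0, noise=noise)
print(f"Listening had we not recommended the song: {counterfactual:.2f}")
```

Because the noise term carries over from the real world to the twin, the comparison isolates the effect of the single changed variable for this specific user, which is what distinguishes a counterfactual from an average-user prediction.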
- Drug Safety. In drug testing, machine learning models are already used to sift through research and provide safety data, but sometimes mistakes are made and nobody, including the machine, can explain why. One chemical engineer hopes to improve upon that process by developing a new counterfactual method that will quickly tell researchers why a model made a prediction and whether it is valid. For example, this approach could tell medical professionals, to the smallest detail, why a molecule might be predicted to inhibit HIV in a new treatment. Understanding the “why” is a crucial component of safety in drug development.
- Cybersecurity. With the proliferation of artificial intelligence comes the risk of hackers finding new and creative ways to make mischief. Specifically, a professor at Arizona State University says that counterfactuals could thwart future cybercriminals who may one day hack into autonomous vehicles. He is working to build a system that teaches AI agents to ask “what if” questions, ultimately training them to distinguish normal behaviors from those of a malware, or Trojan horse, attack on a vehicle’s system.
- Health Diagnostics. In traditional diagnostics, rigorous observational studies are needed, and experiments are often siloed, making it difficult to cross-check one set of findings against another without conducting another full-scale trial. AI counterfactual systems could make quick work of such data analysis and, most importantly, could also explain which variables led to the diagnosis. A 2020 study suggested that when using a counterfactual-based program, AI could correctly identify diseases at least as often as doctors. The study, conducted by Babylon Health and University College London, argues that the machine’s ability to explore every possible cause, no matter how far-fetched, paired with every potential outcome (whether it has happened in the real world or not) leads to more creative problem-solving and a high rate of accuracy.
As counterfactual technology in the AI world continues to evolve, many more fields will likely find ways to put this predictive tool to use. Meta, Amazon, LinkedIn and TikTok are among those already racing to develop their own versions. With regulators looking much more closely at AI going forward, being able to explain and justify its decisions will be key. Counterfactual reasoning may prove to be a fundamental piece of the solution for the numerous sectors that are leaning into machine learning.