This article is based on a presentation at the Fall 2023 Accounting & Finance Forum on Data with Dignity by Chad Dau, Associate Vice President – Decision Analytics and Optimization at Lilly.
AI-driven resource allocation is quickly becoming one of the most common applications of artificial intelligence in the business world, and for good reason. Leveraging data to decide how resources are applied is tremendously attractive for organizations like pharmaceutical company Eli Lilly that are continually looking for opportunities to improve efficiency and optimize costs.
But AI-driven decision making isn’t exclusively upside, says Chad Dau, Associate Vice President of Decision Analytics and Optimization at Lilly. Introducing AI into decision-making processes poses an entirely new set of risks, and organizations may not be aware of the growing challenges of using AI responsibly.
“AI modeling tends to focus on short-term impacts and proximal drivers, and leaves out the complexity of interactions between decision makers,” says Dau. “When those things are ignored, you can build biases into your models that affect people’s lives and health.”
AI can inadvertently steer companies toward decisions that will improve the short-term bottom line but hinder long-term growth—or drive behaviors that don’t align with company values or culture. At its worst, AI can unknowingly reinforce existing systemic prejudices.
In their work in advanced analytics and data science at Lilly, Dau and his team build AI-driven models that attempt to portray the complexity of the world accurately and to proactively prevent the bias the technology can perpetuate.
Leveraging AI-Driven Decision Making
Marketing spend is a useful example to understand how an AI-driven decision-making model works. In a traditional human-centric process, marketing dollars are allocated based on a financial feedback loop: a decision about spend is made, the decision is executed, revenue is impacted positively or negatively, and that information informs future decisions with an added layer of human discernment that understands and accounts for context and nuance.
In an AI-driven process, a model is built based on an available data set, the AI triggers a suggestion for marketing spend, the suggestion is executed, revenue is impacted positively or negatively, the financial data feeds back into the model to trigger more suggestions, the machine learning gets smarter, and so on.
But if the model rests on an erroneous assumption, or the data it was built on is inherently biased in some way, the entire system can become prejudiced as the machine learning trains on biased data over time. Two primary examples of this phenomenon are known as AI operational bias and framing bias.
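To make that loop concrete, here is a minimal Python sketch of the cycle described above. Everything in it is a stand-in: the region names, return rates, budget, and the deliberate under-reporting factor are illustrative assumptions, not Lilly’s data or model. The point is simply that whatever bias sits in the observed data gets folded back into every subsequent round of training and allocation.

```python
# Minimal sketch of an AI-driven spend-allocation feedback loop (hypothetical).
import random

random.seed(0)

# True (unobservable) revenue generated per dollar of marketing spend.
TRUE_RETURN = {"region_a": 1.8, "region_b": 1.7, "region_c": 1.6}

# Systematic under-reporting in region_c stands in for biased training data,
# e.g., real revenue that the data pipeline captures poorly.
REPORTING_BIAS = {"region_a": 1.0, "region_b": 1.0, "region_c": 0.7}

# Historical observations the first model is built on: (region, spend, observed_revenue).
history = [(r, 100.0, 100.0 * TRUE_RETURN[r] * REPORTING_BIAS[r]) for r in TRUE_RETURN]

def fit_model(data):
    """Estimate each region's observed return per dollar from past data."""
    totals = {}
    for region, spend, revenue in data:
        s, rev = totals.get(region, (0.0, 0.0))
        totals[region] = (s + spend, rev + revenue)
    return {region: rev / s for region, (s, rev) in totals.items()}

def suggest_allocation(model, budget):
    """Allocate the budget in proportion to each region's estimated return."""
    total = sum(model.values())
    return {region: budget * est / total for region, est in model.items()}

budget = 1000.0
for cycle in range(5):
    model = fit_model(history)                      # retrain on everything observed so far
    allocation = suggest_allocation(model, budget)  # the model's "suggestion"
    for region, spend in allocation.items():
        # Execute the suggestion; the observed (still biased) revenue feeds back in.
        observed = spend * TRUE_RETURN[region] * REPORTING_BIAS[region]
        observed *= random.uniform(0.95, 1.05)      # ordinary noise
        history.append((region, spend, observed))
    print(f"cycle {cycle}: " +
          ", ".join(f"{r}={allocation[r]:.0f}" for r in sorted(allocation)))
```

Run over a few cycles, the model keeps steering budget toward the regions whose returns are best captured by the data, not necessarily the regions where spend actually generates the most value.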
Identifying Bias in AI
AI operational bias, or cost-cutting bias, can creep into a model when a business attempts to make efficiency gains or outsource a business function. Often, a backlog of cases or the cost of human-in-the-loop review for gray-area outputs (think flagged transactions from expense reports) can push businesses to outsource.
Shortages in expertise mean a smaller, less diverse pool of human feedback is training the model, and feedback that is less experienced or divergent from an organization’s culture can shift the algorithm.
“You have to ask yourself, ‘What biases are you bringing in when you start outsourcing or trying to make things more efficient?’” says Dau. “There’s a great deal of gray area in business that needs human review, and data can be going into the algorithm that you don’t want to see. Or the model may not be catching anomalies and then it never learns.”
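A hypothetical sketch of that loop, in Python: gray-area cases are escalated to human reviewers, and the reviewers’ decisions become the labels the model is retrained on. The scoring rule, review policies, and case mix below are invented for illustration; the point is that changing who reviews the gray area changes the labels, and therefore the model.

```python
# Hypothetical human-in-the-loop review loop for flagged expense-report transactions.
import random

random.seed(1)

def model_score(case):
    """Stand-in for the model's risk score for a flagged transaction."""
    return min(1.0, case["amount"] / 1000.0)

def in_house_review(case):
    """Experienced reviewers apply judgment: known recurring vendors are approved,
    otherwise high amounts without a receipt are rejected."""
    if case["recurring_vendor"]:
        return "approve"
    return "reject" if case["amount"] > 400 and not case["has_receipt"] else "approve"

def outsourced_review(case):
    """A narrower policy: reject anything missing a receipt, regardless of
    vendor history or amount."""
    return "reject" if not case["has_receipt"] else "approve"

# Invented case mix of flagged transactions.
cases = [{"amount": random.randint(50, 900),
          "has_receipt": random.random() < 0.6,
          "recurring_vendor": random.random() < 0.5}
         for _ in range(1000)]

# Only gray-area scores get escalated to humans; their answers become training labels.
gray_area = [c for c in cases if 0.3 < model_score(c) < 0.7]

for reviewer, name in [(in_house_review, "in-house"), (outsourced_review, "outsourced")]:
    labels = [reviewer(c) for c in gray_area]
    reject_rate = labels.count("reject") / len(labels)
    print(f"{name} review: {reject_rate:.0%} of gray-area cases labeled 'reject'")
```

Retrained on the outsourced labels, the model learns a stricter, less context-aware policy than the one the organization intended.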
Framing bias describes situations in which a model doesn’t account for all stakeholders and competing interests within a complex system.
To return to the marketing example, if Lilly wanted to model revenue projections for the launch of an infusion medication, dozens of factors might determine whether a patient ultimately receives the drug they need: the physician’s recommendation and the patient’s level of trust in the provider, insurance hurdles, the cost of the treatment, the patient’s ability to take time off work to receive it, new evidence or promotional activities, and the influence of their peers, just to name a few.
Even taking these factors into account, an AI algorithm may advise pulling investment away from an inner-city community or lower-income area because projected revenue is low. Additional context, however, might show that the area is actually a transportation or healthcare desert, and that patients aren’t getting the medication they need because of existing systemic issues like inadequate access to transportation or an infusion center. Without that nuance, the AI would encourage decision making contrary to Lilly’s long-term organizational values and would lead the company to overlook opportunities for valuable initiatives like a patient transportation assistance program.
“If you don’t take complexity into account and only measure proximal effects, you can truly make the world a worse place to be,” emphasizes Dau.
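A small, hypothetical sketch makes the contrast concrete. The area names, revenue figures, and access fields below are invented; the comparison simply shows how a revenue-only rule and a rule that also sees access-to-care context can reach opposite conclusions about the same low-revenue area.

```python
# Hypothetical illustration of framing bias: revenue-only vs. context-aware decisions.
AREAS = [
    {"name": "suburban_a", "projected_revenue": 900_000, "eligible_patients": 1200,
     "infusion_centers_within_30min": 4, "transit_access_score": 0.9},
    {"name": "inner_city_b", "projected_revenue": 250_000, "eligible_patients": 1100,
     "infusion_centers_within_30min": 0, "transit_access_score": 0.3},
]

def revenue_only_decision(area, threshold=400_000):
    """The narrowly framed rule: fund only if projected revenue clears a bar."""
    return "invest" if area["projected_revenue"] >= threshold else "withdraw"

def context_aware_decision(area, threshold=400_000):
    """Same revenue signal, plus context on whether patients can reach treatment."""
    access_limited = (area["infusion_centers_within_30min"] == 0
                      or area["transit_access_score"] < 0.5)
    if area["projected_revenue"] >= threshold:
        return "invest"
    if access_limited and area["eligible_patients"] > 500:
        # Low revenue here reflects an access gap, not low need; the appropriate
        # response is an access initiative (e.g., transportation assistance).
        return "address access gap"
    return "withdraw"

for area in AREAS:
    print(area["name"],
          "| revenue-only:", revenue_only_decision(area),
          "| context-aware:", context_aware_decision(area))
```

The context-aware rule doesn’t override the revenue signal; it reframes what low revenue means when patients demonstrably can’t reach treatment.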
Mitigating Bias with Responsible AI
AI-driven decision making should not be used as a quick fix to make efficiency gains or cut costs without responsible planning and oversight. Data scientists should take the time to map out an entire system in all its complexity to understand the implications of abstraction before they ever begin building the model, says Dau.
“By abstracting out, are you creating inequity? Are you funding something that could be encouraging inequity? That matters,” says Dau.
Additionally, data scientists must be cognizant of how data was collected.
“I guarantee you that every data set you work with is going to be biased, and it might not be clear how that is. If you don’t figure it out and take steps to mitigate bias, you’re going to find out the hard way.”