It took the software industry decades, and a litany of high-profile breaches, to adopt the concept of privacy and security by design.
As machine learning (ML) adoption grows across industries, some ML initiatives have suffered similar high-profile embarrassments due to model opacity. ML teams have taken the hint, and a parallel concept is on a much faster track in the AI world: responsible AI by design. The ML community has already formed a strong consensus around its importance.
The ML industry is still growing and isn’t quite “there” yet, but business leaders are already asking how to increase profitability while maintaining the ethical and fair practices that underpin responsible AI.
ML teams continue to optimize models by monitoring performance, drift, and other key metrics, but to prioritize fair and equitable practices, they need to add explainable AI (XAI) and AI fairness to their toolkit.
Like “privacy by design”, the push for responsible AI is compelled by far more than just emerging AI regulation. The significance of responsible AI starts with understanding why it matters, how it impacts humans, the business benefits, and how to put “responsible AI by design” into practice.
Cultural change for responsible AI
Successfully adopting responsible AI across an organization requires more than products and processes for data and ML model governance. It also takes a human-centric mindset for operationalizing ML principles within an appropriate MLOps framework, and a cultural change that leads ML teams to prioritize and define ethical and fair AI.
Still, in engineering terms, that’s a vague mandate, and the definitions of fairness and model bias remain contested. There’s no standard way to quantify them with the kind of precision you can design around, but they’re critical nonetheless, so refining your definitions until the entire team understands them is a good foundation for any project.
Embracing a human-centric approach is a good first step. Especially when your ML solution makes recommendations that directly impact people, ask the question “how might my ML model adversely affect humans?”
For example, ML recommendations are widely regarded as unfair when they place excessive importance on the group affiliation (aka a ‘cohort’) of a data subject. That kind of bias is especially concerning for sensitive categories like gender, ethnicity, sexual orientation, and disability. But identifying any cohort that inappropriately drives recommendations is important to realizing true fairness.
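To make that concrete, here’s a minimal sketch, using entirely hypothetical feature names and synthetic data, of one way to check whether a cohort feature is driving a model’s recommendations: measure how much the model’s validation accuracy depends on that feature using scikit-learn’s permutation importance.

```python
# A minimal sketch (hypothetical feature names, synthetic data) of checking
# whether a cohort feature is inappropriately driving a model's recommendations.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# "cohort" is a group-affiliation feature we do NOT want driving recommendations;
# the other features are the legitimate signals.
X = pd.DataFrame({
    "cohort": rng.integers(0, 2, n),           # e.g. a sensitive group label
    "income": rng.normal(50_000, 15_000, n),
    "tenure_months": rng.integers(0, 120, n),
})
y = (X["income"] + 200 * X["tenure_months"] + rng.normal(0, 10_000, n) > 60_000).astype(int)

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# How much does shuffling each feature hurt validation accuracy?
result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)
for name, score in zip(X.columns, result.importances_mean):
    print(f"{name:>15}: {score:.4f}")

# A non-trivial importance for "cohort" signals that group affiliation, rather
# than legitimate features, is driving recommendations and warrants review.
```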
With no master playbook for identifying what’s fair or ethical, keep the following three topics in mind when designing your approach to responsible AI:
- Examples of high-profile AI failures
- How biases adversely affect people in your use case
- Fairness requirements that regulators may mandate
You can draw direct lines between corporate governance, business implications, and ML best practices suggested by these topics.
Why responsible AI matters
As algorithmic decision-making plays an ever greater role in business processes, the ways that technology can impact human lives are a growing concern.
From hiring recommendations to loan approvals, machine learning models are making decisions that affect the course of people’s lives. Even if you implement a rigorous monitoring regime that follows model monitoring best practices, you must also incorporate explainable AI and apply it as part of a strategy for ensuring fairness and ethical outcomes for everyone.
The widespread embrace of responsible AI by major platforms and organizations is motivated by far more than new AI regulations. Just as compelling are the business incentives to implement fairness, anti-bias, and data privacy.
Understand what’s at stake. Ignoring bias and fairness invites catastrophic business outcomes: damage to your brand, lost revenue, and high-profile fairness breaches that may cause irreparable human harm.
Cases from Microsoft and Zillow provide some stark examples.
Microsoft’s chatbot mishap
While flaws in human-curated training data are the most common culprit behind bias in ML models, they’re not the only one.
Early in 2016, Microsoft released a Twitter-integrated AI chatbot named Tay. The intent was to demonstrate conversational AI that would evolve as it learned from interaction with other users. Tay was trained on a mix of public data and material written specifically for it, then unleashed in the Twitter-verse to tweet, learn, and repeat.
In its first 16 hours, Tay posted nearly 100,000 tweets, but whatever model monitoring Microsoft may have implemented wasn’t enough to prevent the explicit racism and misogyny the chatbot learned to tweet in less than a day.
Microsoft shut Tay down almost immediately, but the damage was done, and Peter Lee, corporate vice president, Microsoft Research & Incubations, could only apologize. “We are deeply sorry for the unintended offensive and hurtful tweets from Tay,” wrote Lee in Microsoft’s official blog.
What went wrong? Microsoft had carefully curated training data and tested the bot. But they didn’t anticipate the volume of Twitter users who would send it the bigoted tweets it so quickly began to mimic.
It wasn’t the initial training data that was flawed; it was the data it learned from while in production. Microsoft is big enough to absorb that kind of reputational hit, while smaller players in ML might not have it so easy.
Zillow’s costly bias
Bias doesn’t have to discriminate against people or societal groups in order to inflict damaging business outcomes. Take the real estate marketplace Zillow. They started using ML to “Zestimate” home values and make cash offers on properties in 2018.
But the model recommended home purchases at higher prices than the homes could later be sold for: Zillow bought 27,000 homes after the program launched in April 2018 but sold only 17,000 through September 2021. How far off-target was the model? A Zillow spokesperson said it had a median error rate of only 1.9%, but that shot up to 6.7% for off-market properties – enough to force Zillow into a $304 million inventory write-down in Q3 2021 and layoffs of more than 2,000 employees.
The model, in effect, preferred the cohort of “listed properties” when it came to making accurate predictions. But would you consider that bias?
It’s important to understand how flaws in training data can produce bias that manifests in significant inaccuracies for one cohort. From a purely analytical perspective, stripping away societal implications, Zillow’s flawed model is analogous to a facial-recognition model preferring particular features or skin color to accurately identify someone in an image.
Both suggest a bias in training data that could have been identified with the right tools before deployment. And both illustrate that, to the model, data is just data; the implications of bias are entirely external and differ dramatically across use cases.
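To illustrate the point, here’s a minimal sketch, with hypothetical column names and made-up numbers, of slicing an error metric by cohort before deployment. An overall error rate can look healthy while one cohort is badly served.

```python
# A minimal sketch (hypothetical columns, made-up numbers) of slicing error
# metrics by cohort: the overall median can hide a badly served cohort.
import pandas as pd

df = pd.DataFrame({
    "cohort": ["listed"] * 4 + ["off_market"] * 4,
    "actual_price": [300_000, 450_000, 250_000, 600_000,
                     320_000, 410_000, 275_000, 550_000],
    "predicted_price": [306_000, 441_000, 255_000, 612_000,
                        298_000, 447_000, 252_000, 588_000],
})

# Absolute percentage error per property
df["abs_pct_error"] = (df["predicted_price"] - df["actual_price"]).abs() / df["actual_price"]

print("overall median error: {:.1%}".format(df["abs_pct_error"].median()))
print(df.groupby("cohort")["abs_pct_error"].median().map("{:.1%}".format))
```

Running a slice like this per cohort before launch surfaces exactly the kind of skew the Zillow example describes.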
Coming AI regulations
Responsible AI practices are quickly being codified into law, not only mandating fairness but stipulating rigid frameworks that increase the importance of using an AI observability platform. The EU and the US are implementing wide-ranging rules to compel model transparency, along with the use of XAI tools to provide an explanatory audit trail for regulators and auditors.
The new rules rightly focus on the rights of data subjects, but more pointedly come with specific mandates for transparency and explainability.
Building on its General Data Protection Regulation (GDPR), the EU’s proposed Digital Services Act (DSA) requires that companies using ML provide transparency for auditors, including algorithmic insights into how their models make predictions.
In the U.S., the Consumer Financial Protection Bureau requires transparency from creditors who use ML for loan approval, and specifically the ability to explain why their models approve or deny loans for particular individuals. Additionally, the White House published an AI Bill of Rights, outlining five principles and practices to ensure AI systems are deployed and used fairly and ethically.
Numerous other regulatory initiatives are in the works, targeting nearly every application of ML, from financial services and social networks to content-sharing platforms, app stores, and online marketplaces. Among other commonalities, the new rules share a strict insistence on transparency for auditors, effectively making responsibility by design a de facto requirement for ML teams.
Setting organizational AI values
But if you’re leading an ML project, how do you get business decision-makers to buy into the value of responsible AI?
The points discussed above are precisely what the C-suite needs to know. But when decision-makers aren’t yet bought in on responsible AI, they’re often hearing these principles for the first time, and they’re listening hard for business-specific implications and effects on the company’s bottom line.
Responsible AI is frequently mischaracterized as a nuisance line item driven by government regulation, one that pushes up project costs and strains team resources. It’s true that implementing fairness is neither effortless nor free, but the real message to leadership isn’t “we have to do this”; it’s “this is in our best interest because it aligns with our values, business growth, and long-term strategy”.
ML models are typically optimized for the short term (immediate revenue, user engagement, etc.); responsible AI drives long-term metrics, sometimes at the cost of short-term ones. Understanding this trade-off is key.
Fiddler CTO Nilesh Dalvi recalls, “When I was at Airbnb, the number of bookings was a key metric for the company. But we had a mission to optimize equal opportunity and unbiased experiences for all users, and it was clear that this would increase the number of bookings in the longer term.”
However it’s presented, leadership needs to understand that responsible AI is intimately connected to business performance, to the socio-technical issues of bias prevention and fairness, and to the stringent regulations on data and ML governance emerging worldwide. The business case is straightforward; the challenge is getting leadership to see the long play.
Quantifying all of this is even better, but much harder. C-suite leaders know the adage: you can’t manage what you can’t measure. So is it possible to quantify and manage responsibility? It turns out the right tools can help you do just that.
Putting responsible AI "by design" into practice
As a practical matter, there’s no such thing as responsible AI that isn’t “by design”. If it’s not baked into implementation from the beginning, by the time issues become urgent, you’re past the point where you can do something about them.
Models must evolve in production to mitigate phenomena like bias and model drift. Making that evolution practical involves version control, often co-versioning multiple models and multiple discrete components in the solution stack, and repeated testing.
When models are retrained or the training data changes, ML monitoring and XAI tools play an integral role in ensuring the model remains unbiased and fair across multiple dimensions and iterations.
In fact, during the MLOps lifecycle, multiple inflection points in every model iteration are opportunities to introduce bias and errors, and to resolve them. Addressing one issue with model performance can have unintended consequences elsewhere. In traditional software these would just be regression bugs, but the layers of an ML solution stack are linked in ways that make their effects hard to predict deterministically.
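One lightweight guardrail, sketched below with hypothetical names and thresholds, is a fairness “regression test” that runs whenever a model is retrained: compute a per-group selection-rate gap for the candidate model and block promotion if it widens beyond a tolerance relative to the current model.

```python
# A minimal sketch (hypothetical names and thresholds) of a fairness
# "regression test" to run whenever a model is retrained.
import numpy as np
import pandas as pd

def selection_rate_gap(preds: np.ndarray, groups: pd.Series) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = pd.Series(preds).groupby(groups.values).mean()
    return float(rates.max() - rates.min())

def fairness_gate(current_preds, candidate_preds, groups, tolerance=0.02) -> bool:
    """Return True if the candidate model may be promoted."""
    current_gap = selection_rate_gap(current_preds, groups)
    candidate_gap = selection_rate_gap(candidate_preds, groups)
    print(f"current gap={current_gap:.3f}, candidate gap={candidate_gap:.3f}")
    return candidate_gap <= current_gap + tolerance

# Example with synthetic predictions for two cohorts
rng = np.random.default_rng(1)
groups = pd.Series(rng.choice(["a", "b"], size=1_000))
current = rng.binomial(1, np.where(groups == "a", 0.50, 0.48))
candidate = rng.binomial(1, np.where(groups == "a", 0.55, 0.40))

print("promote candidate?", fairness_gate(current, candidate, groups))
```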
To make responsible AI implementation a reality, the best model performance management (MPM) platforms offer accurate monitoring and explainability methods, giving practitioners the flexibility to customize monitoring metrics on top of industry-standard ones. Look for out-of-the-box fairness metrics, like disparate impact, demographic parity, equal opportunity, and group benefit, to help enhance transparency in your models.
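Whether or not your platform provides them out of the box, these metrics are straightforward to compute. The sketch below, on hypothetical data, hand-rolls disparate impact, demographic parity, and equal opportunity from predictions, labels, and a sensitive attribute.

```python
# A minimal sketch (hand-rolled, hypothetical data) of the fairness metrics
# named above, computed from decisions, labels, and a sensitive attribute.
import pandas as pd

df = pd.DataFrame({
    "group": ["a"] * 5 + ["b"] * 5,
    "label": [1, 1, 0, 1, 0, 1, 0, 0, 1, 0],   # ground truth
    "pred":  [1, 1, 0, 1, 1, 1, 0, 0, 0, 0],   # model decisions
})

# Selection rate (share of positive decisions) per group
selection = df.groupby("group")["pred"].mean()

# Disparate impact: ratio of the lowest to the highest selection rate
# (the "80% rule" flags values below 0.8)
disparate_impact = selection.min() / selection.max()

# Demographic parity difference: largest gap in selection rates
demographic_parity_diff = selection.max() - selection.min()

# Equal opportunity: gap in true positive rate (recall) across groups
tpr = df[df["label"] == 1].groupby("group")["pred"].mean()
equal_opportunity_diff = tpr.max() - tpr.min()

print(f"disparate impact ratio:        {disparate_impact:.2f}")
print(f"demographic parity difference: {demographic_parity_diff:.2f}")
print(f"equal opportunity difference:  {equal_opportunity_diff:.2f}")
```

In practice a dedicated MPM platform or fairness library would replace these hand-rolled calculations, but the definitions stay the same.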
Select a platform that helps you ensure algorithmic fairness using visualizations and metrics and, importantly, the ability to examine multiple sensitive subgroups simultaneously (e.g., gender and race). You can obtain intersectional fairness information by comparing model outcomes and model performance for each sensitive subgroup. Even better, adopt tools that verify fairness in your dataset before training your model by catching feature dependencies and ensuring your labels are balanced across subgroups.
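As a rough illustration of those intersectional checks, here’s a minimal sketch with hypothetical column names that crosses two sensitive attributes, then compares label balance (before training) and model outcomes (after training) for each subgroup.

```python
# A minimal sketch (hypothetical column names) of examining intersectional
# subgroups: cross sensitive attributes, then compare label balance and
# model outcomes per subgroup.
import pandas as pd

df = pd.DataFrame({
    "gender": ["f", "f", "m", "m", "f", "m", "f", "m"],
    "race":   ["x", "y", "x", "y", "x", "y", "y", "x"],
    "label":  [1, 0, 1, 1, 0, 0, 1, 1],    # training labels
    "pred":   [1, 0, 1, 1, 0, 1, 0, 1],    # model outcomes (post-training)
})

# Pre-training check: are labels balanced across intersectional subgroups?
label_balance = df.groupby(["gender", "race"])["label"].mean()
print("positive-label rate per subgroup:\n", label_balance, "\n")

# Post-training check: compare model outcomes for each intersectional subgroup
outcome_rates = df.groupby(["gender", "race"])["pred"].agg(["mean", "size"])
print("positive-prediction rate per subgroup:\n", outcome_rates)
```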
The time to be responsible is now
So when will organizations realize true "responsible AI by design"? Fiddler’s Krishnaram Kenthapadi says,
"I think that the onus is on us to embrace the challenge. Given the influence of members of the MLOps community and considering the variety of industries that we are all working on, we can generate more awareness about the need for responsible AI by design, and make this happen sooner than later."
As the AI industry experiences high-profile “fairness breaches” similar to the notorious IT privacy breaches that cost companies millions in fines and caused brand catastrophes, we expect the pressure to adopt “responsible AI by design” to increase significantly, especially as new international regulations come into force.
That’s why adopting responsible AI by design and getting the right MLOps framework in place from the start is more critical than ever.