Krishna Gade, CEO and Founder of Fiddler, was recently featured on InsightFinder’s popular podcast about AI and the future of work. Host Dan Turchin (InsightFinder Advisor and CEO of PeopleReign) talked to Krishna about how humans are responsible for decisions made by machines, and it’s our responsibility to make sure algorithms are maintained, supported, and monitored. Speaking about Fiddler, Turchin said, “It’s the most mature platform I’ve seen — it tightly packages a lot of what we’ve all read in white papers and research into a scalable toolkit.”
Listen to the podcast yourself on InsightFinder, or read the highlights from Dan and Krishna’s conversation below.
How Fiddler got its start
Krishna founded Fiddler in 2018 to make AI explainable, inspired by his work as an engineering leader at Twitter, Pinterest, and particularly Facebook. “While I was leading the ranking platform team in 2016, Facebook was running a lot of complex machine learning models to predict what kinds of recommendations we should show, or what kinds of ads. These models were huge black boxes. We didn’t really know why they were making their predictions, and if an executive were to ask ‘Why am I seeing this ad?’, we wouldn’t be able to answer...or at least, not quickly enough.”
As a result, Krishna’s team built a lot of tools to monitor, debug, and explain ML models at scale. This not only helped developers build better models, but also created a sense of transparency across the organization. This success sparked the idea for Fiddler. “I saw that a company like Facebook could do it, but why not everyone else?” Fiddler’s goal is to build “a general platform that helps companies across the board build trustworthy AI products.” With Fiddler’s model performance management system, teams can diagnose performance issues and, most importantly, explain why the model made its predictions.
A holistic tool to explain model predictions
Explainability might seem like an esoteric topic from AI research, but Fiddler has made it a practical tool that any data scientist or business stakeholder can use to understand their models’ behavior. With the proliferation of AI libraries, new types of models are always appearing. To provide a solution that can generalize and scale, Fiddler uses “attribution-based algorithms” to probe models and understand the effect of different inputs on the outcome.
Imagine you have a model that predicts the risk of a loan application based on factors like the loan amount requested, FICO score, or income. For a given data point, attribution-based algorithms repeatedly adjust these factors and observe what happens to the outcome. From that, they can estimate, statistically, how much each factor contributes to the result. For example, one applicant’s requested loan amount might be increasing their risk by 20%, while another applicant’s income might be increasing theirs by 25%.
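The conversation doesn’t go into the specifics of Fiddler’s algorithms, but a minimal perturbation-style sketch conveys the intuition. Everything below is hypothetical: `loan_risk_model` stands in for a black-box model, and the baseline values are illustrative rather than taken from any real dataset.

```python
import numpy as np

def loan_risk_model(amount, fico, income):
    """Hypothetical stand-in for a black-box risk model."""
    # Higher requested amounts raise risk; higher FICO scores and incomes lower it.
    z = 0.00005 * amount - 0.01 * fico - 0.00002 * income + 5
    return 1 / (1 + np.exp(-z))  # risk score between 0 and 1

def attribute(applicant, baseline):
    """Estimate each feature's contribution by swapping it for a baseline value."""
    full_risk = loan_risk_model(**applicant)
    contributions = {}
    for feature in applicant:
        perturbed = dict(applicant, **{feature: baseline[feature]})
        contributions[feature] = full_risk - loan_risk_model(**perturbed)
    return full_risk, contributions

applicant = {"amount": 25_000, "fico": 640, "income": 48_000}
baseline = {"amount": 10_000, "fico": 715, "income": 60_000}  # e.g. population averages

risk, contribs = attribute(applicant, baseline)
print(f"predicted risk: {risk:.2f}")
for feature, delta in contribs.items():
    print(f"{feature}: {delta:+.1%} of the risk score")
```

Production attribution methods such as Shapley values average over many combinations of perturbations rather than swapping one feature at a time, but the underlying idea of probing the model with altered inputs is the same.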
The final step is visualization. To build a tool that was user-friendly for many different types of models, Fiddler created a flexible UX that could accommodate various data inputs, such as structured vs. unstructured data.
Why Explainable AI matters
Data scientists often care more about precision and recall than about explainability. But explainability is increasingly important. “When you rewind a few years, people were building simple linear models or regression models,” Krishna said. These models were relatively easy to understand by looking at the weights of the input features. But as more complex models like deep neural networks and boosted decision trees have become commonplace, AI has become much more powerful and, at the same time, much riskier.
“When it fails, and if it fails for a certain demographic of users, if you don’t know how it works it becomes a big problem for the company’s reputation,” Krishna explained. A few years ago, Apple and Goldman Sachs were using machine learning to approve credit card applications and set credit limits. But within the same household, men and women were seeing 10x differences in their lines of credit. When people complained, customer support said “We don’t know — it’s just the algorithm.” It became a big news story, and there was a regulatory probe into Goldman Sachs.
As Krishna said, “This is a case where you have machine learning failing, and the company may not know what’s going on or how this could have happened. In this case, if you had explainability tools or model monitoring tools, you can catch them early on when you’re training and testing these models. You may have had a high accuracy early on, but maybe there was a certain segment of the population that was being affected in a negative manner by the model, and that’s not captured by these high-level metrics. This is where explainability will give you a lens to look into the model and see how it’s performing across the board.”
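To see how a high-level metric can hide that kind of segment-level failure, consider a toy calculation. The counts below are invented purely for illustration:

```python
# Invented evaluation counts: overall accuracy looks healthy,
# but one demographic segment is served far worse.
segments = {
    "group_a": (9_310, 9_500),  # 98% correct
    "group_b": (310, 500),      # 62% correct
}

correct = sum(c for c, _ in segments.values())
total = sum(t for _, t in segments.values())
print(f"overall accuracy: {correct / total:.1%}")  # 96.2% -- looks fine

for name, (c, t) in segments.items():
    print(f"{name}: {c / t:.1%}")  # reveals group_b is badly underserved
```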
The increase in regulations around AI
Governments are realizing the importance of regulating AI, with Europe ahead of America in this respect, having recently proposed a GDPR-like regulation for AI. This regulation is based on the principle that “trust is a must”: if you’re building AI applications, you need to be able to explain them, and you need to have certain processes in place, like monitoring. In the US, the Algorithmic Accountability Act is currently under consideration in Congress.
Regulations are growing in importance because AI is touching lives at an unprecedented scale. Perhaps 20 years ago, AI was mostly used to show ads, and it’s not a big deal if someone sees the wrong ad. But if you’re applying for a loan and your application is rejected by an AI, that’s a different story. “And what if this is happening at scale and it’s affecting certain kinds of people in a negative manner, because the way the AI is getting trained and the type of data being used is not fair?” These are the concerns coming from users, and they’re making governments think.
Where to learn more
For practitioners who want to make sure their AI is fair and able to meet regulatory requirements, there are a few things they can do to get started. Before you deploy a model, Krishna says, “run it against different examples. Look at users with different ethnicities, different backgrounds—and see how the model is performing against all those protected classes, and the intersections of the protected classes.”
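As a starting point, assuming you have a labeled hold-out set with model predictions and demographic columns, a simple pandas pass can surface per-class and intersectional performance. The file name and column names below are placeholders for your own data, not Fiddler APIs:

```python
import pandas as pd

# Hold-out set with labels, model predictions, and demographic columns.
df = pd.read_csv("holdout_with_predictions.csv")  # placeholder path
df["correct"] = df["prediction"] == df["label"]

# Accuracy per protected class...
print(df.groupby("gender")["correct"].mean())
print(df.groupby("ethnicity")["correct"].mean())

# ...and per intersection of protected classes, with segment sizes so that
# small, badly served groups aren't averaged away.
intersections = (
    df.groupby(["gender", "ethnicity"])["correct"]
      .agg(accuracy="mean", count="size")
      .sort_values("accuracy")
)
print(intersections)
```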
Fiddler makes this easy with a model scorecard that gives you a holistic picture of how your model is performing and shows which segments it’s underperforming on. Not only that, you can look inside the model and explain why you’re seeing those outcomes.
“We’re at this inflection point where every software we’ve interacted with is going to be AI-based and model-based software,” Krishna said. “And it’s important to create transparency into these systems, both for ourselves and for our entire organization. It helps us build responsible products for our customers.”
At Fiddler, we’re very excited to bridge the gap between humans and AI systems, so humans can build trust in AI. Request a demo. And if you’re interested in joining our team, check out our Careers page.