Track Fairness and Bias in LLM and ML Models with Fiddler AI Observability
Learn how to monitor and track fairness and bias in your AI models using Fiddler’s AI Observability platform.
In this product tour, we show how to leverage existing model data to create segments based on protected attributes like gender and race, and define intersectional segments for in-depth fairness analysis. See how Fiddler helps you set up industry-standard metrics such as demographic parity and disparate impact, track group benefit, and monitor compliance over time. Build comprehensive dashboards to help your team stay on top of fairness requirements and detect bias in real time, ensuring your AI models serve all users equitably.
[00:00:00] I'm here to share how you can use Fiddler to track fairness and bias in your models using the observability data you are already pushing to the Fiddler platform for tracking model performance, drift, and so on. We do this by leveraging the existing metadata in your model to highlight how outcomes differ across groups like gender, race, or a combination of the two as an intersectional identity.
[00:00:26] It also helps your team track when your model is in compliance and being fair to users, and when it is not, so your team can receive alerts or report on these issues as they happen.
[00:00:37] So, we do this, like I said, by leveraging your model's own existing data.
[00:00:43] If you bring a protected attribute about the user to Fiddler as metadata, something like gender, race, or geography that you already have to track for fairness, it can be converted into a segment on the Fiddler platform itself.
[00:00:58] You see I have segments for race and gender here. I can even create an intersectional segment that combines the two and make that a part of my tracking as well.
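For teams scripting this instead of clicking through the UI, the sketch below shows what defining these segments might look like with the Fiddler Python client. The URL, token, project and model names, and column expressions are assumptions for illustration, and the exact segment constructor may differ across client versions, so treat this as a sketch rather than a drop-in snippet.

```python
# Minimal sketch, assuming fiddler-client 3.x; the constructor parameters
# and the FQL-style expressions below are assumptions -- check your client
# version's documentation for the exact segment API.
import fiddler as fdl

fdl.init(url="https://your-org.fiddler.ai", token="YOUR_TOKEN")  # hypothetical URL/token

project = fdl.Project.from_name(name="fairness_demo")                        # hypothetical project
model = fdl.Model.from_name(name="credit_approval", project_id=project.id)   # hypothetical model

# One segment per protected-attribute value, plus an intersectional segment
# that combines gender and race in a single boolean expression.
segments = {
    "gender_male": "gender == 'Male'",
    "gender_female": "gender == 'Female'",
    "race_black": "race == 'Black'",
    "black_female": "race == 'Black' and gender == 'Female'",
}
for seg_name, expr in segments.items():
    fdl.Segment(name=seg_name, model_id=model.id, definition=expr).create()
```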
[00:01:08] To take this further, we define custom metrics in the Fiddler platform: industry-standard metrics like group benefit, demographic parity, and disparate impact, which take the different outcomes for users and produce the aggregate scores required by most fairness reporting frameworks.
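For reference, here is a small pandas sketch of the arithmetic these metrics aggregate: group benefit shown as the rate of favorable predictions within a group (one common formulation), demographic parity as the gap between those rates, and disparate impact as their ratio. The column names, example data, and 0.5 decision threshold are assumptions for illustration, not Fiddler's API.

```python
import pandas as pd

# Hypothetical event data: one row per prediction, with a protected attribute.
events = pd.DataFrame({
    "prediction": [0.9, 0.2, 0.7, 0.4, 0.8, 0.3],
    "gender": ["Male", "Male", "Male", "Female", "Female", "Female"],
})
favorable = events["prediction"] > 0.5  # favorable (positive) predicted outcome

# Group benefit: P(favorable outcome | group), computed per group.
group_benefit = favorable.groupby(events["gender"]).mean()

# Demographic parity difference: gap in favorable-outcome rates between groups.
parity_diff = group_benefit["Male"] - group_benefit["Female"]

# Disparate impact: ratio of the unprivileged group's rate to the privileged
# group's rate; values below 0.8 often flag a violation (the four-fifths rule).
disparate_impact = group_benefit["Female"] / group_benefit["Male"]

print(group_benefit, parity_diff, disparate_impact, sep="\n")
```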
[00:01:27] Once you have these metrics defined, these segments created, and your model data already pushed to Fiddler, you can start putting them together as charts to track fairness over time.
[00:01:39] Here I would select one of the custom metrics I defined, let's say group benefit, and choose the specific segment I want to track it on, say the gender segment for males. I could easily build out more queries reflecting the group benefit for the other class in that group, females, and track the difference over time to see exactly which group is doing better or worse.
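Conceptually, a chart like this plots the chosen metric per segment per time bin and lets you compare the series side by side. The pandas sketch below shows that computation on hypothetical event data; the column names and daily binning are assumptions for illustration.

```python
import pandas as pd

# Hypothetical timestamped events for two segments (male and female).
events = pd.DataFrame({
    "timestamp": pd.to_datetime(
        ["2024-01-01", "2024-01-01", "2024-01-02", "2024-01-02",
         "2024-01-03", "2024-01-03"]),
    "prediction": [0.9, 0.3, 0.7, 0.6, 0.2, 0.8],
    "gender": ["Male", "Female", "Male", "Female", "Male", "Female"],
})
events["favorable"] = events["prediction"] > 0.5

# One series per segment: daily group benefit for males vs. females.
per_segment = (
    events.groupby([pd.Grouper(key="timestamp", freq="D"), "gender"])["favorable"]
    .mean()
    .unstack("gender")
)

# The gap between the two series is what you would watch over time.
per_segment["gap"] = per_segment["Male"] - per_segment["Female"]
print(per_segment)
```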
[00:02:04] And by putting charts like these together, I can create a comprehensive dashboard view for my fairness and bias tracking teams to make sure that our organization and our models are always in compliance.