AI Observability for ML Models


Explore Fiddler’s unified platform for AI Observability and see how it provides visibility into ML model performance, accuracy, and behavior, from tracking performance and prediction drift to setting custom alerts for model diagnostics.

In this product tour, see how Fiddler supports a range of ML models, including binary classification, regression, and ranking. Learn how to build fully customizable dashboards, interact with key metrics like accuracy and revenue impact, and perform root cause analysis for any data point, empowering your team with a complete view of model health.

[Video: Fiddler AI Observability for ML Models]
Video transcript

[00:00:00] Fiddler offers best-in-class observability and visibility into your machine learning model performance. Fiddler supports all types of traditional ML models, including binary classification, multi-class classification, regression, ranking, and many more. Let me show you around the Fiddler platform.

[00:00:16] I am first navigated to a Fiddler dashboard for a particular traditional ML model: a binary classification model we refer to as bank churn. This model predicts the probability that a customer will churn from the bank. In this use case, I was able to configure a dashboard tailored to business users. These dashboards are fully configurable and can be shared with your teams as needed.

[00:00:38] Here are a few examples of the charts and reports available. We have revenue impact by state, which uses a custom metric configured within the Fiddler platform to estimate the revenue impact of a customer churning. We also see accuracy, which compares the model's predicted target to the ground truth label, so you have a very clear picture of your model's performance.
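To make those two dashboard metrics concrete, here is a minimal sketch in plain pandas (not Fiddler's API) of how an accuracy metric and a custom revenue-impact metric could be computed from prediction logs. The column names (churn_probability, churned, account_balance, state) and the 0.5 decision threshold are assumptions for illustration only.

```python
# Illustrative sketch, not Fiddler's API: accuracy and a custom
# "revenue impact by state" metric computed from hypothetical prediction logs.
import pandas as pd

def accuracy(df: pd.DataFrame, threshold: float = 0.5) -> float:
    """Share of rows where the thresholded churn prediction matches the ground truth label."""
    predicted = (df["churn_probability"] >= threshold).astype(int)
    return (predicted == df["churned"]).mean()

def revenue_impact_by_state(df: pd.DataFrame, threshold: float = 0.5) -> pd.Series:
    """Estimated revenue at risk: sum of balances for predicted churners, grouped by state."""
    churners = df[df["churn_probability"] >= threshold]
    return churners.groupby("state")["account_balance"].sum()

# Toy sample standing in for real inference events.
events = pd.DataFrame({
    "state": ["CA", "CA", "NY", "TX"],
    "churn_probability": [0.82, 0.10, 0.65, 0.40],
    "churned": [1, 0, 1, 0],
    "account_balance": [12_000.0, 8_500.0, 20_000.0, 5_000.0],
})

print(accuracy(events))                 # 1.0 on this toy sample
print(revenue_impact_by_state(events))  # CA: 12000.0, NY: 20000.0
```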

[00:00:58] We also see other types of reports here that we may want to customize and configure for different business users. Let's dive into this prediction drift chart to get a little more detail on which drift metrics we can calculate. All of these charts are completely configurable with drag, drop, and click functionality, so I can come in and create any chart that's important to me. These charts are also interactive.

[00:01:20] So, for example, if I want to dive into this specific data point for October 1st, I can get into root cause analysis. In this case, we're comparing a Jensen-Shannon distance calculation for drift. We also support the population stability index and a number of different baseline options for what we want to compare that drift against.
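As a rough illustration of the two drift metrics mentioned above, the sketch below computes Jensen-Shannon distance and PSI between a baseline and a production batch of churn predictions using NumPy and SciPy. The binning scheme, the synthetic beta-distributed predictions, and the choice of baseline are assumptions for the example, not Fiddler's internal implementation.

```python
# Standalone sketch of Jensen-Shannon distance and PSI over binned prediction histograms.
import numpy as np
from scipy.spatial.distance import jensenshannon

def to_distribution(values: np.ndarray, bins: np.ndarray) -> np.ndarray:
    """Histogram values into fixed bins and normalize to a probability vector."""
    counts, _ = np.histogram(values, bins=bins)
    return counts / counts.sum()

def psi(baseline: np.ndarray, production: np.ndarray, eps: float = 1e-6) -> float:
    """Population Stability Index between two binned distributions."""
    b = np.clip(baseline, eps, None)
    p = np.clip(production, eps, None)
    return float(np.sum((p - b) * np.log(p / b)))

bins = np.linspace(0.0, 1.0, 11)                 # 10 equal-width bins over churn probability
baseline_preds = np.random.beta(2, 5, 10_000)    # e.g. training-time predictions
october_preds = np.random.beta(3, 4, 10_000)     # e.g. Oct 1 production predictions

b = to_distribution(baseline_preds, bins)
p = to_distribution(october_preds, bins)
print("JS distance:", jensenshannon(b, p))       # natural-log base by default
print("PSI:", psi(b, p))
```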

[00:01:40] In this case, we're seeing the prediction drift impact for all of our features, and we see that the number of products feature is really driving the drift. Fiddler also supports data integrity violation tracking in case there are any data issues with your model, as well as additional analysis charts such as the confusion matrix, ROC curve, precision-recall curve, and calibration plot.
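For readers who want to reproduce those per-data-point diagnostics on their own inference logs, here is a small scikit-learn sketch covering the confusion matrix, ROC, precision-recall, and calibration computations. The toy labels and scores are placeholders standing in for the slice of labeled inferences behind a selected chart point.

```python
# Sketch of the classification diagnostics named above, using scikit-learn.
import numpy as np
from sklearn.metrics import (
    confusion_matrix,
    roc_curve,
    roc_auc_score,
    precision_recall_curve,
)
from sklearn.calibration import calibration_curve

# Placeholder labels and churn scores for the selected slice of inferences.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.3, 0.6, 0.8, 0.1])
y_pred = (y_score >= 0.5).astype(int)

print(confusion_matrix(y_true, y_pred))          # rows: actual class, cols: predicted class
fpr, tpr, _ = roc_curve(y_true, y_score)         # points on the ROC curve
print("AUC:", roc_auc_score(y_true, y_score))
precision, recall, _ = precision_recall_curve(y_true, y_score)
prob_true, prob_pred = calibration_curve(y_true, y_score, n_bins=4)  # calibration plot inputs
```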

[00:02:02] Again, this is all specific to the one data point I clicked on within that chart, so you can get a root cause analysis of what's actually happening. Additionally, if you want to dive into the actual data and inference logs, Fiddler can provide that for you as well. Your teams no longer have to dig through the databases and inference logs scattered across your systems.

[00:02:24] It's all available for you right here within the Fiddler platform. All of these charts and graphs can also be set up as alerts, so you are notified whenever anything falls outside your expected tolerance threshold and can take action very quickly.
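As a generic sketch of the alerting idea (not Fiddler's alert configuration API), the snippet below checks monitored metric values against configured tolerance thresholds and flags any breach. The metric names and threshold values are hypothetical.

```python
# Generic illustration of threshold-based alerting on monitored metrics.
from dataclasses import dataclass

@dataclass
class AlertRule:
    metric_name: str   # e.g. "jensen_shannon_distance" or "accuracy" (hypothetical names)
    threshold: float   # tolerance configured for the metric
    direction: str     # "above" fires when metric > threshold, "below" when metric < threshold

    def is_breached(self, value: float) -> bool:
        return value > self.threshold if self.direction == "above" else value < self.threshold

rules = [
    AlertRule("jensen_shannon_distance", threshold=0.15, direction="above"),
    AlertRule("accuracy", threshold=0.90, direction="below"),
]

latest = {"jensen_shannon_distance": 0.22, "accuracy": 0.93}
for rule in rules:
    if rule.is_breached(latest[rule.metric_name]):
        print(f"ALERT: {rule.metric_name}={latest[rule.metric_name]} breached {rule.threshold}")
```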