Pricing plans

Drive Business Value with Responsible AI at Scale

Your journey to becoming an AI-first company starts here.
Lite

For individual practitioners: Build ML and LLMOps with monitoring and analytics.

Key features:
  • Monitor model performance, drift, data integrity, and more
  • Basic root cause analysis
  • Communication and support essentials
  • Security essentials
Business

For teams: Align your AI solutions to business KPIs with advanced ML and LLM monitoring.

Everything in Lite, plus:
  • Advanced model analytics 
  • Assess model fairness and bias
  • Role-based access control and SSO
  • Named Customer Success Manager
  • Engineering and Data Science services
Premium

For enterprises: Accelerate ML and LLMOps strategies with flexible SaaS or On-Premise deployment options.

Everything in Business, plus:
  • Cloud and On-Premise deployment options
  • White-glove support and services available for tailored AI deployment requirements
  • Customized onboarding and solution success
  • Dedicated communication channels
Industry Leaders’ Choice for AI Observability

Reduce costs and increase efficiencies by tracking how all your AI deployments are performing. 

Mitigate model performance issues before they impact your business.

All plans

Gain actionable insights from your models to make data-driven business decisions.

Pinpoint specific areas to optimize by drilling down on segments of interest with powerful and rich custom reports. Align the entire organization by connecting model insights to business outcomes.

All plans

Build trust into AI.

Make better business decisions with increased context and visibility into your model’s behavior and outcomes.

Premium
Limits apply

Build transparent, accountable, and ethical practices for your business with responsible AI.

Increase visibility in AI governance with continuous monitoring, while detecting and mitigating bias in datasets and models.

Business
Premium

Security and compliance at the forefront of your AI initiatives.

Fiddler takes care of your data security with SOC 2 and HIPAA compliance.

All plans
Limits apply

Experience hands-on onboarding and enablement to launch models into production.

Receive white-glove support from designing and implementing your deployment strategy to successfully onboarding and configuring your models.

All plans
Limits apply

AI Engineering and Data Science services to supplement and optimize your deployment.

  • AI Observability integration strategy and architecture
  • Generative AI/LLM and ML model deployment framework
  • Responsible AI and MRM framework design and best practices
Business
Premium
Limits apply
Platform pricing methodology

Grounded in transparency. We keep it simple and you only pay for what you need. 

Data (GB)

No two models are the same. Why should their predictions cost the same? With data-based pricing, you pay based on the volume of model inputs you send.

Models

Select the number of models to be monitored and analyzed.

Explanations

Choose the number of explanations you need to understand model predictions. 

Your price

Gain full context on your models and understand model outcomes. Pay only for what you need to scale your ML and LLM initiatives.

Compare Plans

Features
Lite
Ideal for individual practitioners launching AI efforts with basic use cases.
Request demo
Business
Ideal for teams using AI Observability to launch production use cases.
Request demo
Premium
Ideal for AI-forward enterprises delivering business-critical models and building responsible AI. 
Contact Sales

Deployment

SaaS
Fiddler Cloud is the simplest way to monitor and analyze models and reports in a single pane of glass
On-Premise
Fiddler deployed in your own environment

Scale

Models
Models registered on Fiddler for pre- or post-deployment monitoring
Lite: Up to 10
Business: Custom
Premium: Custom
Explanations
Explanations provided by Fiddler
Custom
Features
Features tracked in Fiddler
Lite: Up to 500
Business: Unlimited
Premium: Unlimited
User seats
Number of collaborators that can log into Fiddler
Lite: Up to 10
Business: Unlimited
Premium: Unlimited

Data

Raw data retention
The time your raw data is retained in Fiddler; applicable only to managed cloud installations
Lite: 3 months
Business: Custom
Premium: Custom
Streaming and batch data
Velocity of data ingested into Fiddler
Tabular, image, text data
Types of data monitored in Fiddler

Core Capabilities

Model observability summary
A centralized UI providing a unified view of the health of all ML models
Dashboards
Collection of reports for team collaboration
Insights
Create custom reports and perform root cause analysis on underperforming models
Alerts
Set thresholds and receive real-time alerts on model performance, drift, data integrity, and traffic issues
Alerts summary
A centralized view of all alerts

Monitoring

Performance monitoring
Track your model's performance and accuracy with out-of-the-box metrics
Drift monitoring
Feature drift
Detect feature drift using Jensen-Shannon divergence (JSD) or Population Stability Index (PSI) metrics
Prediction drift impact
Measure how much a feature impacts model predictions
Data distribution
Compare baseline and production data to detect behavior shifts in data
Imbalanced data
Detect even the slightest anomaly in highly imbalanced datasets
Data integrity monitoring
Track issues in your pipeline, such as missing values, range violations, or data type violations
NLP and CV monitoring
Monitor models with unstructured data like text and images
Segment monitoring
Monitor segments or cohorts of data
Custom metrics
Create and monitor custom metrics unique for your use case
LLM application monitoring
Monitor LLM-based applications
Fiddler Trust Service
Fast, scalable, safe, and cost-effective LLM metrics monitoring: hallucination (faithfulness, relevancy, coherence), safety (toxicity, PII, jailbreak), and operational (cost, latency, tokens)
Embeddings monitoring
Monitor LLM-based embeddings from LLM providers or private LLMs

Explainability

Explainability methods
Use out-of-the-box standard explainability methods: SHAP, Fiddler SHAP, Integrated Gradients, and more
Custom
‘What-If’ analysis
Get real-time insights on model predictions by tuning values and studying the impact on scenario outcomes
Custom
Global explanations
See how each feature contributes to a model’s prediction
Custom
Local explanations
Uncover the root cause of an individual issue
Custom
Surrogate-based explanations
Use Fiddler’s surrogate models to understand model behavior
Custom
Custom explanations
Bring your own explainers into Fiddler for advanced explanations
Custom
Artifact-based explanations
Upload and explain your own models for faithful explanations
Custom
GPU accelerated explanations
Obtain faster explanations with GPU upgrade
Custom

Analytics

Visualize model behavior
Analyze feature impact, correlation, distribution, PDP charts, and more
Root cause analysis
Discover underperforming segments
Ad-hoc model analysis
Perform an exploratory or targeted analysis of model behavior
3D UMAP visualization
Analyze data patterns or outliers from a 3D Uniform Manifold Approximation and Projection (UMAP)

Fairness

Algorithmic bias detection
Detect algorithmic bias using powerful visualizations and metrics
Intersectional bias detection
Examine multiple dimensions simultaneously
Model fairness
Compare model outcomes and model performance for each subgroup of interest
Data fairness
Check for fairness in your dataset before training your model
Fairness metrics
Out-of-the-box fairness metrics, such as disparate impact, demographic parity, equal opportunity, and group benefit, for greater model transparency

Security and Compliance

API access
SOC 2 certified
HIPAA compliance
SAML single sign-on (SSO)
Let users securely authenticate with their central credentials for your organization
Role-based access control (RBAC)
Users across the organization receive level-specific permissions to access protected environments

Onboarding, Enablement, and Support

Onboarding services
All plans (structure depends on deployment option)
Documentation and knowledge base access
Support desk access (24/7)
Customer success manager
Dedicated CSM for the customer

Frequently asked questions, answered.

Still have more questions? Learn more in our docs or contact sales.

Does Fiddler monitor pre-deployment models for validation?

Yes, Fiddler monitors and explains models during training and validation, before they are deployed into production.

Is the Fiddler pricing model consumption-based?

No, you will need to pre-commit to a monthly volume and baseline your usage for the length of an annual commitment.

Is the pricing for LLM different from ML models?

The pricing for LLMs and ML models is the same. The pricing is based on the data ingested (tokens). 

With LLMs, there are additional capabilities, such as embedding generation and hallucination, safety, privacy, and other scores for monitoring LLM metrics; these are also priced by data size.

I already use Fiddler. How can I use Fiddler for my LLM use cases?

You can purchase LLM capabilities as an add-on.

Contact your Fiddler Customer Success Manager to help you get onboarded with your LLM use cases.

How does Fiddler calculate a model explanation?

One model explanation is defined as fetching an explanation for a single prediction via the Fiddler Explain UI or via the run_explanation() API.
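
For illustration only, the sketch below shows what a single explanation request might look like in Python. The client setup, parameter names, and project/model names are assumptions made for this example, not the documented Fiddler client signature; refer to the Fiddler documentation for the exact interface.

```python
# Illustrative sketch only: the client setup and parameter names below are
# assumptions for this example, not the documented Fiddler client signature.
# The point: one call that explains a single prediction counts as one
# explanation toward your plan's explanation volume.
import pandas as pd
import fiddler as fdl  # assumed Python client package name

client = fdl.FiddlerApi(
    url="https://your-org.fiddler.ai",  # hypothetical deployment URL
    org_id="your-org",
    auth_token="YOUR_TOKEN",
)

# A single prediction's input features (hypothetical schema).
single_row = pd.DataFrame([{"age": 42, "income": 85_000, "loan_amount": 12_000}])

# One run_explanation() call for one prediction == one explanation.
explanation = client.run_explanation(
    project_id="credit_risk",       # hypothetical project
    model_id="default_risk_model",  # hypothetical model
    df=single_row,
)
print(explanation)
```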

How does Fiddler calculate the Data Ingested metric?

The Data Ingested metric is calculated simply as the size of the data you send to Fiddler in a given month. This includes the baseline datasets, model predictions, and model artifacts.

How do I determine the volume of data I’ll need?

To estimate the volume of data you will send, calculate, for each of your models, the size of the baseline data, the volume of model prediction traffic, and the model artifact size, and then sum them across models.
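
As a rough illustration of that arithmetic, the Python sketch below sums hypothetical per-model figures. The model names and sizes are placeholders; your actual numbers depend on your feature counts, prediction traffic, and artifact sizes.

```python
# Rough back-of-the-envelope estimate of monthly data volume sent to Fiddler.
# All model names and sizes below are hypothetical placeholders.

models = [
    # (name, baseline data GB, monthly prediction traffic GB, model artifact GB)
    ("churn_model",     2.0, 15.0, 0.5),
    ("fraud_model",     5.0, 40.0, 1.2),
    ("support_llm_app", 1.0, 25.0, 0.0),  # LLM app: prompts/responses as ingested data
]

# Per-model total = baseline + prediction traffic + artifact size.
total_gb = sum(baseline + traffic + artifact for _, baseline, traffic, artifact in models)
for name, baseline, traffic, artifact in models:
    print(f"{name}: {baseline + traffic + artifact:.1f} GB/month")
print(f"Estimated total: {total_gb:.1f} GB/month")
```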

Does Fiddler monitor 100% data without sampling? Does Fiddler allow customers to sample their data? 

Fiddler is agnostic to sampling. You sample predictions and send them to Fiddler to monitor. Whether you send us 1% or 100% of your traffic, we monitor it in the same way. However, the costs of the latter will be much higher given data storage, compute, and transfer costs.

What happens if I go over the committed number of models or data volume?

Fiddler will have internal limits set up based on your selection. Don’t worry if you go over. The Fiddler Customer Success team will work with you to find the best solution.

How can I upgrade to a plan that can scale my ML or LLM needs?

Contact us to upgrade to the Business or Premium plans. Continue to benefit from Fiddler’s advanced, feature-rich platform and services to help you build responsible AI for the long term.

What is your data retention policy?

Customers can choose the length of their data retention.

What data does Fiddler collect?

Fiddler does not collect any data. All customer data is classified as confidential, and Fiddler backs up the customer’s encrypted data to ensure it is safe and secure. For more information, visit our security and compliance page.

Is Fiddler SOC 2 certified?

Fiddler is SOC 2 certified. The report is available, upon request, for review by existing customers and new prospects. As the information is confidential, we require a signed NDA to review the report.

Is Fiddler HIPAA compliant?

Fiddler is HIPAA compliant and can provide services to companies under HIPAA. For more information, visit our security and compliance page.

What is LLMOps?

Large language model operations (LLMOps) provides a standardized end-to-end workflow for training, tuning, deploying, and monitoring LLMs (open source or proprietary) to accelerate the deployment of generative AI models and applications. 

What is the difference between generative AI and LLM?

Large language models (LLMs) use deep learning algorithms to analyze massive amounts of language data and generate natural, coherent, and contextually appropriate text. Unlike predictive models, LLMs are trained using vast amounts of structured and unstructured data and parameters to generate desired outputs. LLMs are increasingly used in a variety of applications, including virtual assistants, content generation, code building, and more.

Generative AI is the category of artificial intelligence algorithms and models, including LLMs and foundation models, that can generate new content based on a set of structured and unstructured input data or parameters, including images, music, text, code, and more. Generative AI models typically use deep learning techniques to learn patterns and relationships in the input data in order to create new outputs to meet the desired criteria.