Pricing plans
Drive Business Value with Responsible AI at Scale
For individual practitioners: Build ML and LLMOps workflows with monitoring and analytics.
- Monitor model performance, drift, data integrity, and more
- Basic root cause analysis
- Communication and support essentials
- Security essentials
For teams: Align your AI solutions to business KPIs with advanced ML and LLM monitoring.
- Advanced model analytics
- Assess model fairness and bias
- Role-based access control and SSO
- Named Customer Success Manager
- Engineering and Data Science services
For enterprises: Accelerate ML and LLMOps strategies with flexible SaaS or On-Premise deployment options.
- Cloud and On-Premise deployment options
- White-glove support and services for tailored AI deployment requirements
- Customized onboarding and solution success
- Dedicated communication channels
Reduce costs and increase efficiencies by tracking how all your AI deployments are performing.
Mitigate model performance issues before they impact your business.
Gain actionable insights from your models to make data-driven business decisions.
Pinpoint specific areas to optimize by drilling down on segments of interest with powerful and rich custom reports. Align the entire organization by connecting model insights to business outcomes.
Build trust into AI.
Make better business decisions with increased context and visibility into your model’s behavior and outcomes.
Build transparent, accountable, and ethical practices for your business with responsible AI.
Increase visibility into AI governance with continuous monitoring, while detecting and mitigating bias in datasets and models.
Security and compliance at the forefront of your AI initiatives.
Fiddler takes care of your data security with SOC 2 and HIPAA compliance.
Experience hands-on onboarding and enablement to launch models into production.
Receive white-glove support from designing and implementing your deployment strategy to successfully onboarding and configuring your models.
AI Engineering and Data Science services to supplement and optimize your deployment.
- AI Observability integration strategy and architecture
- Generative AI/LLM and ML model deployment framework
- Responsible AI and MRM framework design and best practices
Grounded in transparency. We keep it simple, and you pay only for what you need.
No two models are the same. Why should their predictions cost the same? With data-based pricing, you pay based on the volume of model inputs.
Select the number of models to be monitored and analyzed.
Choose the number of explanations you need to understand model predictions.
Gain full context on your models and understand model outcomes. Pay only for what you need to scale your ML and LLM initiatives.
Compare Plans
Deployment
Scale
Data
Core Capabilities
Monitoring
Explainability
Analytics
Fairness
Security and Compliance
Onboarding, Enablement, and Support
Frequently asked questions, answered.
Still have more questions? Learn more in our docs or contact sales.
Yes, Fiddler monitors and explains models during training and validation, before they are deployed into production.
No, you will need to pre-commit to a monthly volume, which serves as your usage baseline for the length of an annual commitment.
The pricing for LLMs and ML models is the same. The pricing is based on the data ingested (tokens).
With LLMs, additional capabilities are available: embedding generation, plus hallucination, safety, privacy, and other scores for monitoring LLM metrics. These are also priced by data size.
You can purchase LLM capabilities as an add-on.
Contact your Fiddler Customer Success Manager to help you get onboarded with your LLM use cases.
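Since LLM pricing is based on tokens ingested, it can help to estimate monthly token volume up front. The sketch below is purely illustrative: it uses the rough rule of thumb of about four characters per token, whereas actual counts depend on the specific model's tokenizer, and the function names are hypothetical, not part of any Fiddler API.

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using the ~4 characters/token heuristic.

    Real counts depend on the model's tokenizer; this is only for
    back-of-the-envelope capacity planning.
    """
    return max(1, round(len(text) / chars_per_token))


def estimate_monthly_tokens(avg_prompt_chars: int,
                            avg_response_chars: int,
                            requests_per_month: int) -> int:
    """Estimate total tokens ingested per month for one LLM application."""
    per_request = (estimate_tokens("x" * avg_prompt_chars)
                   + estimate_tokens("x" * avg_response_chars))
    return per_request * requests_per_month


# Hypothetical workload: 2,000-char prompts, 1,000-char responses,
# 50,000 requests per month.
print(estimate_monthly_tokens(2000, 1000, 50_000))  # 37500000
```

For a precise figure, count tokens with the tokenizer of the model you actually deploy rather than a character heuristic.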
One model explanation is defined as fetching an explanation for a single prediction via the Fiddler Explain UI or via the run_explanation() API.
The Data Ingested metric is calculated as the size of the data you send to Fiddler in a given month. This includes baseline datasets, model predictions, and model artifacts.
To estimate the volume of data you will send, calculate the size of the baseline data, the prediction traffic, and the model artifacts for each of your models, and sum them.
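The per-model sum described above can be sketched as follows. The dictionary keys and the example figures are hypothetical, chosen only to illustrate the arithmetic; this is not a Fiddler API or real pricing data.

```python
def estimate_monthly_data_gb(models):
    """Sum baseline, prediction-traffic, and artifact sizes (in GB)
    across a fleet of models.

    `models` is a list of dicts with illustrative keys; substitute your
    own measured sizes for each model.
    """
    total = 0.0
    for m in models:
        total += (m["baseline_gb"]
                  + m["monthly_predictions_gb"]
                  + m["artifact_gb"])
    return total


# Hypothetical fleet of two models
fleet = [
    {"baseline_gb": 2.0, "monthly_predictions_gb": 15.0, "artifact_gb": 0.5},
    {"baseline_gb": 1.0, "monthly_predictions_gb": 40.0, "artifact_gb": 1.2},
]
print(estimate_monthly_data_gb(fleet))  # 59.7
```

Note that baseline and artifact sizes are typically one-time or infrequent uploads, while prediction traffic recurs monthly, so the recurring component usually dominates the estimate.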
Fiddler is agnostic to sampling. You sample predictions and send them to Fiddler to monitor. Whether you send us 1% or 100% of your traffic, we monitor it in the same way; however, sending 100% will cost much more given data storage, compute, and transfer costs.
Fiddler will have internal limits set up based on your selection. Don’t worry if you go over. The Fiddler Customer Success team will work with you to find the best solution.
Contact us to upgrade to the Business or Premium plans, and continue to benefit from Fiddler’s advanced, feature-rich platform and services to help you build a long-term responsible AI practice.
Customers can choose the length of their data retention.
Fiddler does not collect any data. All customer data is classified as confidential data and Fiddler backs up the customer’s encrypted data to ensure data is safe and secure. For more information, visit our security and compliance page.
Fiddler is SOC 2 certified. The report is available, upon request, for review by existing customers and new prospects. As the information is confidential, we require a signed NDA to review the report.
Fiddler is HIPAA compliant and can provide services to companies under HIPAA. For more information, visit our security and compliance page.
Large language model operations (LLMOps) provides a standardized end-to-end workflow for training, tuning, deploying, and monitoring LLMs (open source or proprietary) to accelerate the deployment of generative AI models and applications.
Large language models (LLMs) use deep learning algorithms to analyze massive amounts of language data and generate natural, coherent, and contextually appropriate text. Unlike predictive models, LLMs are trained using vast amounts of structured and unstructured data and parameters to generate desired outputs. LLMs are increasingly used in a variety of applications, including virtual assistants, content generation, code building, and more.
Generative AI is the category of artificial intelligence algorithms and models, including LLMs and foundation models, that can generate new content based on a set of structured and unstructured input data or parameters, including images, music, text, code, and more. Generative AI models typically use deep learning techniques to learn patterns and relationships in the input data in order to create new outputs to meet the desired criteria.