What's New in the Fiddler AI Observability Platform for LLMs
October 26, 2023
10AM PT / 1PM ET
Registration is now closed. Please check back later for the recording.
We're thrilled to announce major product upgrades to the Fiddler AI Observability Platform for predictive and generative models. As more models and generative AI applications go into production, it is increasingly important to monitor, diagnose, and improve for the best AI outcomes.
Register for this demo-driven webinar to learn how to:
- Monitor models for unique use cases using custom metrics, and surgically diagnose model issues with enhanced root cause analysis capabilities
- Evaluate LLMs to prevent prompt injection attacks, and monitor safety metrics like toxicity and PII
- Gain qualitative insights on LLM prompt and response clusters via 3D UMAP
Can’t attend live? You should still register! Recordings will be available to all registrants after the event.
Featured Speakers
Karen He
Principal Product Marketing Manager
at
Fiddler AI
Karen He is a Principal Product Marketing Manager at Fiddler where she leads product marketing. She has experience in customer experience analytics, data and application integration, and data cloud storage. Prior to Fiddler, she worked at AWS, IBM, SnapLogic, and Tealeaf Technologies.
Sree Kamireddy
VP of Product
at
Fiddler AI
Sree leads the Product team at Fiddler. Sree brings a wealth of experience in scaling machine learning infrastructure and applying AI to tackle complex business challenges across a range of domains including Search, Ads, IoT, and Content Management.
Amal Iyer
Senior Staff AI Scientist
at
Fiddler AI
Amalendu (Amal) Iyer is a Sr. Staff Data Scientist at Fiddler where he is responsible for developing systems and algorithms to monitor, evaluate and explain ML models. He also leads the development of Fiddler Auditor, an open-source project to evaluate robustness and safety of Large Language models. Prior to joining Fiddler, Amal worked at HP Labs, where he led the research around Self-Supervised Learning techniques for improving data-efficiency of ML models and on Deep Reinforcement Learning. Before that at Qualcomm AI research, he was part of the team that developed the Snapdragon Neural Processing SDK and developed Speech Recognition models for Voice UI applications. Amal obtained his M.S. from University of Florida and B.S. from University of Mumbai in Electrical and Computer Engineering.