AI Explained: Metrics to Detect Hallucinations
April 25, 2024
10AM PT / 1PM ET
Registration is now closed. Please check back later for the recording.
Deploying LLM applications for real-world use cases requires a comprehensive workflow to ensure they generate high-quality, accurate content. Testing, fixing issues, and measuring impact are critical steps in that workflow, helping LLM applications deliver value.
Pradeep Javangula, Chief AI Officer at RagaAI, will discuss strategies and practical approaches organizations can follow to maintain high-performing, correct, and safe LLM applications.
Watch this AI Explained to learn:
- Methodologies for choosing the right metrics to measure the health and performance of LLM applications
- Practical approaches to evaluating and monitoring hallucination metrics and resolving issues (see the sketch after this list)
- Best practices for testing and developing LLM applications
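To make "hallucination metric" concrete, here is a minimal sketch of one common proxy: a token-overlap groundedness score that checks how much of a RAG application's response is supported by its retrieved context. This toy example is for illustration only and is not the methodology covered in the session; the `groundedness` function, the 0.7 alert threshold, and the sample texts are all assumptions.

```python
import re


def tokenize(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))


def groundedness(response: str, context: str) -> float:
    """Fraction of response tokens that also appear in the context (0.0-1.0).

    Low scores suggest the response asserts content absent from the context,
    a rough signal of possible hallucination.
    """
    response_tokens = tokenize(response)
    if not response_tokens:
        return 1.0  # an empty response asserts nothing unsupported
    context_tokens = tokenize(context)
    return len(response_tokens & context_tokens) / len(response_tokens)


if __name__ == "__main__":
    context = "The Eiffel Tower is 330 metres tall and is located in Paris."
    grounded = "The Eiffel Tower in Paris is 330 metres tall."
    hallucinated = "The Eiffel Tower in Rome was built in 1950 by aliens."
    for response in (grounded, hallucinated):
        score = groundedness(response, context)
        flag = "OK" if score >= 0.7 else "POSSIBLE HALLUCINATION"  # threshold is arbitrary
        print(f"{score:.2f}  {flag}  {response}")
```

In practice, production systems typically replace lexical overlap with entailment models or LLM-based judges, but the monitoring pattern is the same: compute a per-response score and alert when it falls below a threshold.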
AI Explained is our AMA series featuring experts on the most pressing issues facing AI and ML teams.
Featured Speakers
Pradeep Javangula
Chief AI Officer at RagaAI
With over 25 years of experience in AI and ML, Pradeep Javangula is a renowned industry expert. His previous roles include VP of AI/ML at Workday and VP of Engineering at Adobe.
Joshua Rubin
Head of AI Science at Fiddler AI
Joshua has been a Fiddler for five years. During this time, he developed a modular framework that extends explainability to complex model form factors, such as those with multi-modal inputs. He previously applied deep learning to instrument calibration and signal processing problems in the biotech tools space after outgrowing a career as an experimental nuclear physicist.