AI Explained: AI Safety in Generative AI
June 27, 2023
10AM PT / 1PM ET
Registration is now closed. Please check back later for the recording.
AI has the potential to improve humanity’s quality of life and day-to-day decisions. However, these advances bring challenges of their own that can cause harm. Peter Norvig, Distinguished Education Fellow at Stanford’s Human-Centered Artificial Intelligence Institute, will explore how organizations can preserve human control to ensure transparent and equitable AI.
Watch this AI Explained to learn:
- Human-in-the-loop practices for generative applications
- Considerations for AI safety
- Best practices to prevent and minimize risks and harm
AI Explained is our AMA series featuring experts on the most pressing issues facing AI and ML teams.
Featured Speakers
Peter Norvig
Distinguished Education Fellow at Stanford’s Human-Centered Artificial Intelligence Institute
Peter Norvig is a Fellow at Stanford's Human-Centered AI Institute and a researcher at Google Inc.; previously, he directed Google's core search algorithms group and Google's Research group. He is the co-author of Artificial Intelligence: A Modern Approach, the leading textbook in the field, and the co-teacher of an Artificial Intelligence class with 160,000 student signups. He is a fellow of the AAAI, the ACM, the California Academy of Sciences, and the American Academy of Arts & Sciences.
Krishna Gade
Founder and CEO at Fiddler AI
Krishna Gade is the founder and CEO of Fiddler AI, an enterprise AI Observability startup focused on monitoring, explainability, fairness, and responsible AI governance for predictive and generative models. AI Observability is a vital building block that provides visibility and transparency across the entire enterprise AI application stack. An entrepreneur and engineering leader with deep technical experience building scalable platforms and delightful consumer products, Krishna previously held senior engineering leadership roles at Facebook, Pinterest, Twitter, and Microsoft.