AI Explained: AI Safety and Alignment
February 29, 2024
10AM PT / 1PM ET
Registration is now closed. Please check back later for the recording.
Large-scale AI models trained on internet-scale datasets have ushered in a new era of technological capabilities, some of which now match or even exceed human ability. This progress also underscores the importance of aligning AI with human values to ensure its safe and beneficial integration into society. In this talk, we provide an overview of the alignment problem and highlight promising areas of research spanning scalable oversight, robustness, and interpretability.
Watch this talk to learn about:
- Scalable oversight: Developing methods to oversee AI systems at scale so that their decisions and actions remain aligned with human guidance.
- Robustness: Strengthening AI's robustness to manipulation and ensuring consistent performance in varied and unforeseen situations.
- Interpretability: Creating human-in-the-loop techniques that make AI decision-making transparent, enhancing human understanding, trust, and control.
AI Explained is our webinar series featuring experts on the most pressing issues facing AI and ML teams.
Featured Speakers
Amal Iyer
Senior Staff AI Scientist at Fiddler AI
Amalendu (Amal) Iyer is a Senior Staff AI Scientist at Fiddler, where he is responsible for developing systems and algorithms to monitor, evaluate, and explain ML models. He also leads the development of Fiddler Auditor, an open-source project for evaluating the robustness and safety of large language models. Prior to joining Fiddler, Amal worked at HP Labs, where he led research on self-supervised learning techniques for improving the data efficiency of ML models and on deep reinforcement learning. Before that, at Qualcomm AI Research, he was part of the team that developed the Snapdragon Neural Processing SDK and built speech recognition models for voice UI applications. Amal obtained his M.S. from the University of Florida and his B.S. from the University of Mumbai in Electrical and Computer Engineering.
Joshua Rubin
Head of AI Science at Fiddler AI
Joshua has been a Fiddler for five years. During this time, he developed a modular framework to extend explainability to complex model form factors, such as those with multi-modal inputs. He previously applied deep learning to instrument calibration and signal-processing problems in the biotech tools space after outgrowing a career as an experimental nuclear physicist.