Today, AI impacts countless aspects of our day-to-day lives: from what news we consume and what ads we see, to how we apply for a job, get approved for a mortgage, and even receive a medical diagnosis. And yet only 28% of consumers say they trust AI systems in general.
At Fiddler, we started the Explainable AI (XAI) Summit to discuss this problem and explore how businesses can address the many ethical, operational, compliance, and reputational risks they face when implementing AI systems. Since we started the summit in 2018, it has grown from 20 attendees to over 1,000. We're extremely grateful to the community and to the many experts and leaders in the space who have participated, sharing their strategies for implementing AI responsibly and ethically.
Our 4th XAI Summit, held a few months ago, focused on MLOps, a highly relevant topic for any team looking to accelerate the deployment of ML models at scale. On our blog, we're recapping some highlights from the summit, starting with our keynote presentation by Yoav Schlesinger, Director of Ethical AI Practice at Salesforce. Yoav explained why we're at a critical moment for anyone building AI systems, and showed how organizations of all sizes can measure their progress toward a more responsible, explainable, and ethical future with AI.
AI is at a tipping point
Throughout history, new and promising innovations—from airplanes to pesticides—have experienced “tipping points” where society had a reckoning around the potential harms of these technologies, and arrived at a moment of awareness to create fundamental change.
Consider the auto industry. During the first few years of World War I, with few regulations or standards for drivers, roads, and pedestrians, more Americans were killed in auto accidents at home than American soldiers were killed in France. The industry only began to transform in the late 1960s and early 1970s, when the National Transportation Safety Board (NTSB) and the National Highway Traffic Safety Administration (NHTSA) were formed and other reforms were put into place.
Is AI experiencing a similar moment? The headlines of the last few years suggest that it is. Amazon's biased recruiting tool, Microsoft's "racist" chatbot, Facebook's role in propagating misinformation, Google Maps routing motorists into wildfires: these are just a few of the most well-known examples. Just as with previous technologies, writers, activists, and consumers are demanding safety and calling for change. The question is how we will respond, as a society, as leaders in our organizations, and as developers of AI systems.
Safe AI is a business imperative
As technology creators, we have a fundamental responsibility to society to ensure that the adoption of these technologies is safe. Of course, as a business, it's natural to worry about the costs and tradeoffs of implementing AI responsibly. But the data shows that it's not a zero-sum game; in fact, quite the opposite.
In a Salesforce study of 2,400 consumers worldwide, 86% said they would be more loyal to ethical companies, 69% said they would spend more with companies they regard as ethical, and 75% said they would not buy from an unethical company. It's become clear that safe, ethical AI is critical to survival as a business.
How AI ethics evolves at an organization
How does a business develop its AI ethics practice? Yoav shared a four-stage maturity model created by Kathy Baxter at Salesforce.
Stage 1 - Ad Hoc. Within the company, individuals are identifying unintended consequences of AI and informally advocating for the need to consider fairness, accountability, and transparency. But these processes aren’t yet operationalized or scaled to create lasting change.
Stage 2 - Organized and Repeatable. Ethical principles and guidelines are agreed upon, and the company starts building a culture where ethical AI is everyone's responsibility. Teams use explainability tooling to assess bias, then mitigate it, and finally run post-launch assessments; this encourages feedback and creates a virtuous cycle of incorporating that feedback into future iterations of the models (a minimal sketch of such a bias check follows the stages below).
Stage 3 - Managed and Sustainable. As the practice matures, ethical considerations are baked in from the beginning of development through post-production monitoring. Auditing is put in place to understand the real-world impact of AI on customers and society, because bias and fairness metrics measured in the lab are only an approximation of what actually happens in the wild (the monitoring sketch below illustrates this).
Stage 4 - Optimized and Innovative. End-to-end inclusive design practices combine ethical AI product and engineering development with new ethical features and the resolution of ethical debt. Ethical debt is even more costly than standard technical debt, because paying it down may require identifying new training data, retraining models, or removing features that have been identified as harmful.
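To make the Stage 2 workflow concrete, here is a minimal sketch of the kind of pre-launch bias check that explainability tooling automates. It uses demographic parity (the gap in positive-prediction rates between groups) as the fairness metric; the metric choice, the toy data, and the 0.1 threshold are illustrative assumptions, not part of Yoav's talk.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Rate of positive predictions for each demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Toy data: model decisions and the group each applicant belongs to.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "b", "b", "b", "b", "b"]

gap = demographic_parity_gap(preds, groups)
print(f"Selection-rate gap: {gap:.2f}")   # ~0.27 on this toy data
if gap > 0.1:  # the threshold is a policy decision, not a universal standard
    print("Flag for bias mitigation before launch")
```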
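And for Stage 3, the same metric can be recomputed on rolling windows of production traffic and compared against the pre-launch baseline; that comparison is the essence of post-production fairness monitoring, which platforms like Fiddler productize. The sketch below reuses the demographic_parity_gap helper from above; the baseline value, tolerance, and alerting mechanism are hypothetical placeholders.

```python
BASELINE_GAP = 0.05  # gap measured during the Stage 2 assessment (illustrative)
TOLERANCE = 0.05     # acceptable drift before a human audit is triggered

def audit_window(window_preds, window_groups):
    """Recompute the fairness metric on live traffic and alert on drift."""
    live_gap = demographic_parity_gap(window_preds, window_groups)
    if live_gap > BASELINE_GAP + TOLERANCE:
        # In practice this would page the owning team or open an audit ticket.
        print(f"ALERT: live gap {live_gap:.2f} vs. baseline {BASELINE_GAP:.2f}")
    return live_gap

# Example: audit the most recent batch of production decisions.
audit_window([1, 1, 0, 1, 0, 0], ["a", "a", "a", "b", "b", "b"])
```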
We don’t have the luxury of waiting
As Yoav put it: if you're not offering metaphorical seatbelts for your AI, you're behind the curve. If you're offering seatbelts but charging extra for them, you're also behind the curve. If seatbelts, airbags, and other safety systems come standard with what you're building, you're on the right path.
How will you push forward the evolution of explainable and safe AI? Together, we're coming to understand the risks and harms associated with the AI technologies and applications we're building. The maturity model will change as our understanding develops, but it's clear that we're at the tipping point where safe, explainable AI practices are no longer optional.
Yoav encouraged everyone to locate their organization on the maturity model and push their practices forward, to end up on the right side of history. That’s how we’ll ensure that the future for everyone on the AI road is safe and secure.
There was a lot more thought-provoking discussion (and charts, stats, and graphics) from Yoav’s keynote presentation that we didn’t have the space to share here. You can watch the full keynote above and view the complete playlist of talks and panels from our 4th Annual XAI Summit.