Fiddler is hosting our third annual Explainable AI Summit on October 21st, bringing together industry leaders, researchers, and Responsible AI experts to discuss the future of Explainable AI. This year’s conference will be entirely virtual, and we’re using the opportunity to take the Summit global, with speakers zooming in from around the world.
Before the Summit, we asked speakers to fill us in on how they got where they are today, their views on the key challenges and opportunities in AI, and how to ensure AI is deployed in a responsible, trustworthy manner. In our first spotlight, we’re covering the topic of Responsible AI: what it means, and why it’s so important.
AI is a young -- and exceedingly complex -- field, so it’s natural that there is no single agreed-upon definition yet of what it means to create and deploy AI responsibly. But AI is increasingly being applied to business-critical use cases across industries, and as it moves from the fringe to the mainstream, the importance of deploying it responsibly has never been greater. Businesses, consumers, and regulators are calling for more transparency and accountability in AI solutions. In response, many organizations are establishing guiding principles for responsible and ethical AI practices, and regulation, governance, and oversight are all increasing.
What exactly is Responsible AI?
“Put simply, responsible AI refers to the set of tools and processes that an organization can deploy to ensure the trustworthy design and use of AI systems,” says Lofred Madzou, Artificial Intelligence Project Lead at the World Economic Forum.

Victor Storchan, Senior Machine Learning Engineer at JPMorgan Chase & Co., adds: “Responsible AI is a framework to develop AI. It is framed by different actors, from everyone’s accountability to regulation and law. It is strongly correlated with the cultural values of societies, and as such it is not uniform. It contains notions like fairness, transparency, privacy, or accountability that are derived from political philosophy, but also [elements like] Sustainable Development Goals derived from [the] Paris Agreement (like Green AI).”

Madzou echoes the importance of acknowledging societal and cultural differences: “This opens up another question,” he says, “what does trustworthy actually mean? This can differ depending on the context and the cultural background within which AI operates. Yet, there is a growing acknowledgement, in the West at least, that an AI system is deemed trustworthy if its behavior is consistent with our laws. From this perspective, one can easily grasp why responsible AI is so critical for the future of the industry. If we don’t manage to build trustworthy systems, in the long run, we’re going to limit the use of AI to low-risk applications. This would be a great loss for businesses and consumers, considering the immense potential of AI for economic growth.”
Merve Hickok, AI Ethicist and Founder of AIethicist.org, adds that we must not remove the human element from responsibility in building AI systems. “I would like to rephrase [Responsible AI as] ‘responsible development and deployment of AI,’” she says. “It is important that we do not attribute traits like ‘responsible’ or ‘trustworthy’ to algorithms.”
Why is Responsible AI important?
Patrick Hall, visiting faculty at the George Washington University, Principal Scientist at bnh.ai, and Advisor to H2O.ai, uses a simple analogy to answer this question: “Responsible AI is important because AI is a powerful technology -- just like a jetliner. Who wants to ride on a jetliner that wasn't made, or isn't operated, responsibly? I think all commercial applications of AI should just be Responsible AI.”
To bring it back to AI specifically, Michelle Allade, Head of Bank Model Risk Management at Alliance Data Card Services, explains: “With great power comes great responsibility. AI brings tremendous benefits when well implemented; however, the potential consequences when things turn sour must be carefully understood and/or mitigated...I believe we should continuously be asking the question ‘Is this the right thing to do?’ when leveraging AI.”
But perhaps the most obvious reason Responsible AI matters? AI impacts real lives, making it imperative that systems are governable and auditable, and that humans stay in the loop. Kenny Daniel, co-founder and CTO at Algorithmia, says, “I want AI to succeed. I don’t want us to end up in another AI winter, and I think trust in AI is critical for that success. In areas that could be sensitive, I think it’s important that you have ways of controlling, guiding, or explaining AI so that we can be sure it has more benefits than downsides. And so responsible AI can go a couple different directions. This includes explainability and understandability, but also more. More generally, finding ways to keep humans in the loop is important. If there’s a human in the loop, then there’s a person responsible—it’s not just an abdication of responsibility to a machine.”
This is just the tip of the iceberg -- stay tuned for the next Speaker Spotlight, and join us for the Explainable AI Summit on October 21st to hear more from our full lineup of speakers.