For the latest installment of the Responsible AI Podcast, we were excited to speak with Lofred Madzou, Project Lead for AI at the World Economic Forum, managing AI governance projects around the world. Lofred’s previous work includes serving as a policy officer at the French Digital Council, specializing in AI regulation. He is also a Research Associate at the Oxford Internet Institute, where he focuses broadly on AI auditing and philosophy.
Lofred’s optimism and pragmatism are infectious, and he shared some great stories about implementing responsible AI in some very sensitive domains, such as facial recognition at airports.
When we asked Lofred to explain the top three things that teams should keep in mind when implementing AI, his response was:
- Context is everything.
- It’s everyone’s responsibility.
- Before bringing AI to market, we need to come to terms with what AI can do—and what it cannot do.
Below, we’ll summarize this discussion around these three topics, and take a look at Lofred’s predictions for the future of responsible AI.
1. Context is everything
Every use case has its own goals and constraints. Although there might be dozens of frameworks for ethical AI, each with its own merits, Lofred explained that it’s impossible to find a universal set of rules. “I want to focus on responsible AI as a method,” Lofred said, “rather than having specific principles or requirements—because those are context-dependent.” For example, using facial recognition for passengers boarding a plane raises a different set of principles and challenges than using AI for law enforcement.
Because there’s no one-size-fits-all solution, gathering context is the first step when implementing responsible AI. You have to define what responsible AI means for your use case. Even though it sounds simple, this step is critical.
Case study: Responsible AI for facial recognition at airports
Lofred walked through a use case from his work at the WEF, where he collaborated with organizations, governments, and other stakeholders involved in the use of facial recognition technology at airports, and more broadly for accessing train stations, stadiums, and other buildings. Lofred’s team designed a framework that specified what a proper audit and data governance policy would look like. For example, it calls for clear requirements on how consent is collected and for a genuine alternative for passengers who don’t want to use the technology. Data retention periods need to be transparent, and data collected for one purpose shouldn’t be used for something else without permission.
How did Lofred’s team come up with this framework? “The first step was to define what ‘responsible use’ means for facial recognition technology in this context,” he said. To do that, “it starts with building the right community of stakeholders.” In this case, that meant that airports, tech companies, passengers, activists, and regulators all needed to come together with representatives who could agree on what “responsible AI” would mean for this specific scenario. While this may be a lot of work, it’s an important step. “Usually these conversations are internal to companies,” Lofred said, “and you don’t have the ability to capture input from people who might be impacted.”
2. It’s everyone’s responsibility
“The very nature of running systems and machine learning creates a set of challenges that are transversal,” Lofred said. Managing large AI systems can’t fall to a single person, or even a single team, in charge of responsible AI. As a first step, companies should build an internal task force that brings the right people into the room to define the responsible AI requirements and make sure there are champions across the business. Responsible AI requires what Lofred described as “a coalition of willing actors”; if those actors are misaligned, you won’t make progress.
When building what Lofred terms the “infrastructure of collaboration,” teams should keep a few concrete tips in mind. First, they should make sure that frameworks for responsible AI are tied to core internal processes, such as product performance reviews. They should also bring risk and compliance specialists much closer to the product teams, which helps overcome what Lofred calls the “translation gap”: broad legal requirements, such as “don’t discriminate,” must be turned into concrete design decisions for a specific use case.
“The next step is building organizational capabilities,” Lofred explained. “It’s about raising awareness… Risks, because of the nature of machine learning systems, are going to affect various business functions. What you want to make sure is that you train everyone in responsible AI. Not just a sense of what’s legal—but a broader awareness of what are the applications we have at the company, how they work, what can go wrong, and investing in training across the organization.”
3. Keep in mind what AI can do—and what it cannot
As Lofred explained, “There is sometimes magical thinking about what AI can do.” Before even discussing responsible AI frameworks, teams need to come to terms with the fact that AI isn’t a “silver bullet.” Lofred believes that many companies fail to move “past the lab” with AI and really scale their applications because they haven’t narrowed their focus and are operating under a misunderstanding of the genuine capabilities and limitations of AI. What Lofred called “bad use,” rather than any inherent problem with AI as a technology, accounts for many of the challenges with responsible AI.
Lofred gave the example of using AI to detect fraud in social benefits applications. Working with people on the ground, such as social workers, made the reality clear: this process shouldn’t be automated. Given the risks and complexities, AI wasn’t the right fit. Over time, Lofred predicts, “we’re going to get a better sense of the limitations of AI systems so we’ll have a better use of it.”
Future predictions
We like asking our Responsible AI Podcast guests to guess how the industry will evolve over the next few years. Here are a few points that Lofred mentioned:
- “Only responsible AI companies will survive.” Lofred believes many of the reckless actors will be pushed out of the market, not just because of regulation but because consumers will demand more trustworthy systems.
- “Regulation is coming.” And coming soon—potentially in a matter of weeks or months in the EU.
- “Responsible AI will become cybersecurity on steroids.” After all, 20-25 years ago no one was really paying attention to cybersecurity, and now every software company takes it as a given requirement. Lofred sees the same thing happening with responsible AI, on a much faster timeframe and in a way that truly penetrates every business function.
Finally, for more of Lofred’s insights, we highly recommend reading the guide to scaling responsible AI that he co-wrote with Danny Lange of Unity Technologies.
For previous episodes, please visit our Resource Hub.
If you have any questions or would like to nominate a guest, please contact us.