AI has incredible economic and societal value, but fully unlocking that value will require public trust in AI. If you’re looking for a framework for implementing trustworthy AI, the Business Roundtable Roadmap for Responsible Artificial Intelligence is a great place to start. Business Roundtable is a nonprofit organization representing the CEOs of leading US companies; its charter is to advance policies that strengthen and expand the US economy.
While every organization’s journey to Responsible AI will look different, Business Roundtable has identified 10 guiding principles:
1. Innovate with and for diversity.
Diversity is key to getting a balanced, comprehensive perspective on the development and use of AI at any organization. When assembling teams that work with AI — whether they’re involved in creating models, or in cross-functional governance and oversight — business leaders should look for individuals with a wide range of professional experience, subject matter expertise, and lived experience.
2. Mitigate the potential for unfair bias.
Bias can be introduced at many stages of the AI lifecycle, from data collection to model training to deployment. Safeguards should be put in place to ensure that AI doesn’t produce unfair outcomes for individuals based on characteristics like ethnicity or gender; one simple safeguard is sketched below.
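As one concrete illustration of such a safeguard, here is a minimal sketch that measures demographic parity: the gap in positive-prediction rates between two groups. The function, data, and threshold are all hypothetical; real bias testing involves multiple metrics and careful choices about which groups and thresholds to use.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between two groups.

    y_pred: array of 0/1 model predictions
    group:  array of 0/1 group membership flags (e.g., a protected attribute)
    """
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Example: flag the model for review if the gap exceeds a chosen threshold.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
gap = demographic_parity_difference(y_pred, group)
if gap > 0.2:  # the threshold is a policy choice, not a universal standard
    print(f"Potential disparity detected: {gap:.2f}")
```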
3. Design for and implement transparency, explainability, and interpretability.
Especially for AI systems that make impactful decisions (like approving loans or reviewing resumes), it’s important to explain the relationships between the model’s inputs and its outputs — the premise behind explainable AI. Different audiences — like implementers, end users, and regulators — will need tailored tools to help inspect and understand AI models.
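As a small illustration of model inspection, the sketch below uses scikit-learn’s permutation importance, one simple, model-agnostic way to see which inputs most influence a model’s outputs. The toy dataset and model are stand-ins; production explainability typically goes further (for example, Shapley-value attributions that explain individual predictions).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy dataset and model standing in for a real decisioning system.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each input degrade accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```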
4. Invest in a future-ready AI workforce.
A broad, diverse talent pipeline is needed to implement AI responsibly. Businesses should consider where new jobs may be created as a result of using AI systems and where existing roles might change, and make education, training, and opportunities in AI widely available.
5. Evaluate and monitor model fitness and impact.
AI models need well-defined goals and metrics that capture both value and risk, so that performance can be assessed. Before release, models should be evaluated to verify that they’re fit for the intended use case and context. Once live, models need continuous monitoring to detect drift, and should be adjusted for quality and robustness as part of ongoing performance management.
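As one lightweight illustration of drift monitoring, the sketch below compares a live feature’s distribution against its training baseline using a two-sample Kolmogorov-Smirnov test from scipy. The data and alerting threshold here are hypothetical; real monitoring tracks many features and metrics over time.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # training baseline
live_feature  = rng.normal(loc=0.4, scale=1.0, size=1000)  # recent production data

# KS test: a small p-value suggests the live distribution has shifted.
stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:  # the alerting threshold is a monitoring policy choice
    print(f"Possible drift: KS statistic {stat:.3f}, p-value {p_value:.2e}")
```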
6. Manage data collection and data use responsibly.
Fair and responsible AI begins with the data used to train models, which should be varied, appropriate for the use case, and well annotated. Human bias can be reflected in that data, so care should be taken to identify and correct potential unfairness.
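As a minimal illustration of a data audit, the sketch below uses pandas to check group representation and per-group label rates in a training set. The column names and data are hypothetical; under- or over-representation of a group, or a skewed label rate, are common early signals of bias baked into the data.

```python
import pandas as pd

# Hypothetical training data with a demographic column and a binary label.
df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "A", "B", "A"],
    "label": [1, 0, 1, 0, 0, 1, 1, 1],
})

# Representation: is any group badly under-sampled?
print(df["group"].value_counts(normalize=True))

# Label balance per group: large gaps can signal bias in the data itself.
print(df.groupby("group")["label"].mean())
```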
7. Design and deploy secure AI systems.
For models to be trustworthy, they should be secure from malicious actors, and any sensitive data used for model development should be protected.
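As one small illustration (not a complete security program), the sketch below pseudonymizes a direct identifier with a keyed hash before the data enters a training pipeline. The key handling shown is hypothetical; real systems also need access controls, encryption, and secure key management.

```python
import hashlib
import hmac

SECRET_KEY = b"example-key"  # hypothetical; store in a secrets manager, never hardcode

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash before training use."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

print(pseudonymize("user@example.com"))
```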
8. Encourage a company-wide culture of Responsible AI.
Responsible AI requires openness and critical thinking about AI risk at all levels, from the business leaders who set the values and framework for building AI to the model developers who implement AI within that framework.
9. Adapt existing governance structures to account for AI.
Teams like risk management, compliance, and business ethics need to start incorporating AI into their existing processes. Where appropriate, businesses should also establish new AI-specific model governance and model risk management methods.
10. Operationalize AI governance throughout the whole organization.
Putting Responsible AI into practice requires AI governance with dedicated budget and personnel, and clearly outlined responsibilities for transparency and accountability. In addition, all internal stakeholders should be educated on AI so they have a general understanding of the technology.
By putting these 10 principles into practice, organizations can build trust into AI systems and mitigate risks. Contact us to see how Fiddler can help you on your roadmap to responsible AI.