At the end of March we held our Generative AI Meets Responsible AI summit. In case you missed it, here are the top 5 questions we received from attendees on issues related to responsible AI. You can check out all the recordings here!
1. How do you balance the effectiveness of AI systems against their bias?
Saad Ansari from Jasper AI said, “There are several methods to address bias in AI systems. One approach is to prevent bias during the model training, selection, and fine-tuning processes. However, ‘model bias’ can mean different things and may manifest differently in different contexts. For instance, when asked for an image or story about a rich person, the output may predominantly feature white males. This can be mitigated during the model creation process, but identifying every possible bias in every scenario is challenging because of outlier situations. In some cases, biases only become evident after user feedback, as was the case with an output that was perceived as racist. In such instances, user feedback is crucial in identifying and rectifying biases that were not initially anticipated. That information can then be fed back into the system to prevent similar issues from recurring. Bias can be interpreted in many ways and shows up in many applications, making it a complex issue to address comprehensively.”
Our own data scientist, Amal Iyer, had this to say: “The built-in assumption in this question is that effectiveness and bias are competing objectives. There is little evidence that this is the case. Carefully benchmarking your model, monitoring it in production for model drift, and keeping good hygiene around periodically updating training data are ways not just to be effective but also to snuff out bias.”
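Amal's checklist of benchmarking, monitoring for drift, and refreshing training data can be started with very little machinery. Below is a minimal sketch in Python (using NumPy) of two of those checks: per-subgroup accuracy on a benchmark set, and a population-stability check of live scores against a reference window. The toy data, group names, and the ~0.2 PSI rule of thumb are illustrative assumptions rather than anything specific to the panelists' systems.

```python
# Minimal sketch: benchmark a classifier per subgroup and watch the live
# prediction distribution for drift. Data, groups, and thresholds below
# are illustrative placeholders, not any speaker's production setup.
import numpy as np

def subgroup_accuracy(y_true, y_pred, groups):
    """Accuracy broken out by group, so large gaps surface early."""
    out = {}
    for g in np.unique(groups):
        mask = groups == g
        out[g] = float((y_true[mask] == y_pred[mask]).mean())
    return out

def psi(reference, live, bins=10):
    """Population Stability Index between a reference and a live score window.
    Values above ~0.2 are a common rule-of-thumb signal to investigate drift."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    live_frac = np.histogram(live, bins=edges)[0] / len(live)
    ref_frac = np.clip(ref_frac, 1e-6, None)
    live_frac = np.clip(live_frac, 1e-6, None)
    return float(np.sum((live_frac - ref_frac) * np.log(live_frac / ref_frac)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, 1000)
    y_pred = rng.integers(0, 2, 1000)
    groups = rng.choice(["group_a", "group_b"], 1000)
    print("per-group accuracy:", subgroup_accuracy(y_true, y_pred, groups))

    ref_scores = rng.normal(0.5, 0.1, 5000)   # scores at deployment time
    live_scores = rng.normal(0.6, 0.1, 5000)  # scores this week
    print("PSI:", psi(ref_scores, live_scores))
```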
2. Can you share your thoughts on the bias present in language systems, such as associating men with programming and women with homemaking, and the potential risk of amplifying these biases?
Saad Ansari of Jasper AI had this to say: “Certainly, let's start with the bad news and then move on to the good news. The unfortunate reality is that AI models often reflect the biases present in the data they are trained on, which is created by humans. This means that these models can inadvertently mirror our own biases, which is concerning.
However, the good news is that AI models are more controllable than human discourse. During the training process, feature engineering, and data selection, we can take measures to prevent biased behavior. Moreover, once we detect biases in the models, we have methods to eliminate or balance them. While recent models may not exhibit such biases, if we were to discover any, we would know how to address them effectively.”
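To make the detection step Saad describes concrete, here is a minimal, WEAT-style sketch in Python of how one might measure the programmer/homemaker association raised in the question. The vectors below are random placeholders; in practice you would load real embeddings (or a model's own embedding layer) and evaluate any mitigation against the same score. This is an illustrative sketch, not how Jasper measures bias.

```python
# Minimal sketch of an association check: does an occupation word sit closer
# to one set of gendered attribute words than the other? The toy vectors are
# placeholders; real embeddings (word2vec, GloVe, a model's embedding layer)
# would be used in practice.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association_gap(word, male_attrs, female_attrs, emb):
    """Mean similarity to male-attribute words minus mean similarity to
    female-attribute words; values far from 0 suggest a skewed association."""
    male = np.mean([cosine(emb[word], emb[a]) for a in male_attrs])
    female = np.mean([cosine(emb[word], emb[a]) for a in female_attrs])
    return male - female

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    vocab = ["he", "man", "she", "woman", "programmer", "homemaker"]
    emb = {w: rng.normal(size=50) for w in vocab}  # placeholder embeddings

    for occupation in ["programmer", "homemaker"]:
        gap = association_gap(occupation, ["he", "man"], ["she", "woman"], emb)
        print(f"{occupation}: association gap = {gap:+.3f}")
```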
3. How can social media platforms balance societal harm against the need for engagement?
Toni Morgan from TikTok shared these insights: “Our team's primary goal is to earn the trust of a billion users in our technology and decision-making. We work within the trust and safety organization, ensuring that trust is at the core of everything we do, while acknowledging that there's no perfect solution. With our team's efforts, we're gradually getting closer to understanding how to create a trustworthy space for people to create and thrive on the platform.
One way we're addressing this issue is by sharing our community principles for the first time, which helps close the explainability gap. These principles guide our content decisions and demonstrate the inherent tensions our moderators face daily in keeping TikTok safe. Sharing them helps users understand that our approach isn't targeted against any specific group, but is based instead on a set of principles.
We're also working to create ‘clear boxes’ by launching our transparency center, which shares data on our enforcement efforts every quarter. This holds us accountable to our community and other stakeholders concerned about platform safety.
Lastly, we give users the opportunity to appeal decisions regarding their content or account. This transparency allows users to understand why we've taken action and provides an avenue for conversation with the platform.”
4. How do you handle data privacy issues in LLMs?
Staff Data Scientist Amal Iyer answered, “When it comes to training data, responsible teams are mindful of the licensing associated with the content. For inference, API providers tend to state in their Terms of Service (TOS) whether data sent to them will be used for training or evaluation. That said, this is still an evolving area, and not all teams exercise the same level of sensitivity to data licenses.”
5. Regulators are rolling out more compliance requirements and regulations on AI. How should companies start, or continue, building a responsible AI framework to follow regulations and minimize risks for their customers?
Miriam Vogel, Chair of the National AI Advisory Committee, answered that navigating the concept of trustworthiness in AI on a global scale can be challenging because expectations differ. Nevertheless, we all share a common goal: ensuring that AI systems, which have become essential and enjoyable parts of our lives, are used as intended. We must prioritize fairness and safety, and strive to create AI systems that benefit everyone without causing exclusion, discrimination, or harm. By promoting responsible AI practices, we can continue to enjoy the advantages of AI while maintaining a safe and inclusive environment for all.
Assign someone from the C-suite to be accountable for significant decisions or issues, so that everyone knows who to consult and who is responsible. Standardize the decision-making process across the organization to foster internal trust, which in turn builds public trust. While external AI regulations are essential, companies can implement internal measures to communicate the trustworthiness of their AI systems.
One such measure is thorough documentation as part of good AI hygiene. Document the tests performed and their frequency, and ensure this information remains accessible throughout the AI system's lifespan. Conduct regular audits and establish a known testing schedule, including the testing criteria and any limitations.
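One way to keep that documentation usable across a system's lifespan is to make it machine-readable. The sketch below (Python 3.10+) shows one possible shape for an audit record; every field name and value is an illustrative assumption rather than a prescribed schema.

```python
# Minimal sketch of a machine-readable audit record in the spirit of the
# documentation advice above. Field names and example values are assumptions,
# not a standard; the point is that tests, dates, criteria, and known
# limitations stay attached to the system over time.
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class AuditRecord:
    system_name: str
    test_name: str
    run_date: date
    criteria: str                 # what "passing" means for this test
    result: str                   # e.g. "pass", "fail", "needs review"
    limitations: list[str] = field(default_factory=list)
    next_scheduled_run: date | None = None

# Hypothetical example record for illustration only.
record = AuditRecord(
    system_name="loan-approval-model-v3",
    test_name="subgroup false-negative-rate gap",
    run_date=date(2023, 4, 1),
    criteria="gap between any two subgroups below 2 percentage points",
    result="pass",
    limitations=["small sample for applicants over 75"],
    next_scheduled_run=date(2023, 7, 1),
)

print(json.dumps(asdict(record), default=str, indent=2))
```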
If a user identifies a use case that wasn't considered, they can address it themselves. This approach makes it possible to spot underrepresented or overrepresented populations in AI medical systems and helps prevent the AI's success rate from being misrepresented.
Fortunately, trustworthy AI exists in various areas of our ecosystem. Clear industry consensus or government regulations can provide guidelines for organizations to follow. Global initiatives, such as the voluntary, law-agnostic, and use case-agnostic NIST AI Risk Management Framework, offer guidance for implementing best practices based on input from stakeholders worldwide.
To ensure responsible and trustworthy AI, consider joining the EqualAI badge program. This initiative allows senior executives to collaborate with AI leaders and reach a consensus on best practices. Although the program is somewhat tongue-in-cheek, it has proven helpful for those seeking to navigate the complex landscape of responsible AI. Moreover, organizations like EqualAI and the World Economic Forum have published articles synthesizing best practices. By adhering to good AI hygiene principles, we can continue to advance responsible AI practices across the globe.