GRC in Generative AI with Navrina Singh
In this episode of AI Explained, we are joined by Navrina Singh, Founder and CEO at Credo AI.
We will discuss the comprehensive need for AI governance beyond regulated industries, the core principles of responsible AI, and the importance of AI governance in accelerating business innovation. The conversation also covers the challenges companies face when implementing responsible AI practices and dives into the latest regulations like the EU AI Act and state-specific laws in the U.S.
[00:00:00] Krishna Gade: Welcome to another edition of AI Explained. I'm your host, Krishna Gade, Founder and CEO of Fiddler. We are here to talk about GRC, or governance, risk, and compliance, for generative AI.
[00:00:20] Krishna Gade: We have an amazing guest, a luminary in this space: Navrina Singh. She's the founder and CEO of Credo AI, a governance SaaS platform empowering enterprises to deliver responsible AI.
[00:00:34] Krishna Gade: Navrina has been a technology leader with a lot of experience in leadership roles at Microsoft and Qualcomm. She was also a member of the U.S. Department of Commerce's National Artificial Intelligence Advisory Committee (NAIAC).
[00:00:48] Krishna Gade: And she's also been very much involved in some of the regulation that has been drafted around the Biden Executive Order last year. With that, I'd like to welcome Navrina.
Navrina Singh: Thank you so much for having me, Krishna.
[00:01:04] Krishna Gade: Navrina, awesome. Great to see you. It's amazing to see how much you have driven for the community. And thank you so much for all the efforts in standing up these AI governance regulatory drafts, and also, in general, evangelizing the need for responsible AI within the community.
[00:01:25] Krishna Gade: Okay.
[00:01:26] Krishna Gade: So let me start with something like this: is AI governance only needed when you're regulated? There is this notion that, oh, let me only think about governance if I'm in a regulated industry. Can you talk about that?
[00:01:45] Navrina Singh: Well, Krishna, I think we are starting with the myth busters, right?
[00:01:48] Navrina Singh: I think there is, unfortunately, a myth that only if you are in a regulated industry or sector do you really need to think about governing your very important assets, which are driven by artificial intelligence. That is not the case. What we are finding, especially as we are in this new age of generative AI, is that organizations really need to step back and think holistically about how they are going to develop, deploy, and manage artificial intelligence with rigorous oversight and accountability, because that's going to help them build trust in this age of AI.
[00:02:28] Navrina Singh: And we are seeing, day in, day out, how critical trust is in this age of AI; it's becoming a very important currency to be able to transact in this very fast-moving space. So what we are finding is that responsible AI becomes that mechanism by which organizations can have continuous oversight across their artificial intelligence, which then results in the ability to acquire customers faster, the ability to retain customers longer, and the ability to adopt new kinds of GenAI capabilities faster, so that you can actually be on the front end of innovation.
[00:03:09] Navrina Singh: So, not because of regulation, but because of a core competitive advantage that responsible AI is bringing to enterprises. We are seeing this massive shift in this ecosystem.
[00:03:23] Krishna Gade: Awesome. So, I mean, uh, there are multiple terms that get thrown in our industry, right? Every six months, there's like a new thing.
[00:03:31] Krishna Gade: These days, I keep fielding questions like, what's the difference between generative AI and traditional AI, and people think that they're very different things.
[00:03:38] Krishna Gade: So, let's talk about responsible AI. What do you see are the core principles that define responsible AI? You know, why are these principles essential for, you know, companies adopting AI today?
[00:03:51] Navrina Singh: Yeah, you know, I couldn't agree more with you, Krishna. There's so much conflation in the overall artificial intelligence industry. But also, what is responsible AI? What's AI governance? What's GRC for AI? What's AIOps? What's LLMOps? So let's try and dissect this for the audience. Responsible AI is really this core practice of making sure that enterprises have that continuous oversight and accountability to develop their AI systems, to put them in production, and to continuously manage them with the right governance guardrails.
[00:04:33] Navrina Singh: And I think just to talk a little bit about principles, the principles that enterprises end up adopting really depends upon their objectives. So, as an example, you know, we work across Global 2000 in financial services, healthcare, pharma, government, across all those different organizations, they really care about different objectives.
[00:04:57] Navrina Singh: And so, as an example, for some of them, reliability of having these AI systems deployed in situational awareness in defense applications is really critical. Whereas for others who might be operating, let's say, in healthcare and education, fairness is pivotal. For others, as you can imagine, transparency, of course, across the entire supply chain might be critical.
[00:05:22] Navrina Singh: So I really think that responsible AI principles are getting grounded in reliability, safety, robustness, transparency, fairness, compliance, but more importantly, human centricity. But a lot of that depends upon what your business objectives are.
[00:05:42] Krishna Gade: Got it, got it.
[00:05:44] Krishna Gade: And so, uh, when companies start implementing Responsible AI, right, uh, you know, you may have helped a lot of companies do this.
[00:05:51] Krishna Gade: What are some of the common challenges that they face, you know, getting started and how they can effectively address these challenges?
[00:05:59] Navrina Singh: Yeah. And I think, Krishna, one of the things I really want this audience to understand is the why and the how. So responsible AI is the outcome: you are making sure that the systems you are putting out meet the objectives your business might have, whether those are compliance objectives or reliability objectives. But how you get to those outcomes is basically AI governance.
[00:06:26] Navrina Singh: And AI governance is a comprehensive way of thinking through the policies, the frameworks, and the tooling that makes responsible AI a reality. So having said that and setting that context, I also want to dive a little bit deeper into how companies can get started and what challenges they are running into.
[00:06:53] Navrina Singh: Now, first and foremost, uh, really aligning on, are you actually using artificial intelligence? And if yes, what kind? Because many of the companies we work with are still using old statistical methods, which might be like in their, you know, predictive ML space. What we are finding more and more is the excitement around using large language models, foundational models, to actually bring in more generative AI capabilities.
[00:07:24] Navrina Singh: So really having, I think, a good grasp of where AI is within your organization, using an AI repository is the first starting point.
[00:07:33] Navrina Singh: The second starting point is again, defining and aligning on what does good look like for your organization. So this is where, you know, one of the, I would say bad reps that Responsible AI has received is that it's not an enterprise priority because it's really soft.
[00:07:49] Navrina Singh: And I really think that's a poor perspective that has been put out in the market. Because responsible AI, and especially its governance, is an enterprise requirement that you need to have to make sure that you're protecting your business, but also bringing in new AI innovations faster. So really aligning on what good looks like and what you're going to measure is the second core thing.
[00:08:14] Navrina Singh: And then the third thing, once you've aligned on what good looks like, is really defining your technical stack. And this is where, Krishna, maybe we should dive a little bit deeper into, again, the conflation that's happening. Because it's not just having the ops tools, the LLMOps tools, observability tools, or the GRC tools.
[00:08:36] Krishna Gade: Yeah.
[00:08:36] Navrina Singh: It is holistically thinking about that full stack.
[00:08:41] Krishna Gade: Yeah, so that's a good segue, right? So, you know, on one side, AI is being built by the geeks, you know, by the, by the engineers, by, you know, people who can, I mean, some of these, like, you know, foundation models, some of the work that is going into that is very, very technical, right?
[00:08:55] Krishna Gade: So they're building these, uh, you know, generative AI applications, fine-tuning, all of that. On the other side, this whole Responsible AI Governance feels like, as you said, you know, it's, uh, it's this subject matter, you know, that may not touch the technology. So what is the gap? What is the, how do you bridge this gap?
[00:09:13] Krishna Gade: You know, where do you see, like, this whole thing come together? You know, where do you see the ops, observability, or governance meeting together, uh, in, in an AI stack? And how does it help the entire company, um, in an enterprise organization?
[00:09:24] Navrina Singh: Yeah, and I might take a little bit longer to get to your answer, but let me explain two dimensions.
[00:09:31] Navrina Singh: The first dimension is the AI value chain, what's really happening in generative AI. And then the second dimension is the AI tech stack. So in terms of the AI value chain, what's happening right now with generative AI is you have a set of core foundation model builders who obviously are pushing the limits of compute and new capabilities.
[00:09:58] Navrina Singh: You can think about them as the Anthropics of the world, the Coheres of the world, the OpenAIs of the world. And then those foundation model providers in some cases might be feeding into GenAI application developers. Now these could be applications in marketing, customer support, et cetera, where they're taking those foundation models, maybe fine-tuning them for their applications, and making those available very quickly. And now we've started to see that these application developers have started to invest a little bit in building their own, I would say, foundation models or LLM capabilities.
[00:10:33] Navrina Singh: And then these GenAI application developers sell into enterprise customers. And these enterprise customers are not only buying and using them, but in many cases might be repackaging them and selling them to the end user. So when you think about this entire AI value chain, from the foundation model developers to the application developers to, obviously, open source to the enterprise user, and then to the end user, that one dimension is really critical to understand.
[00:11:06] Navrina Singh: But as you can imagine, within that dimension you need the oversight and an understanding of what kind of oversight mechanisms, transparency mechanisms, and risk mechanisms need to be introduced at each stage. So, for example, at the foundation model provider level, we really want to know the outcomes of red-teaming exercises.
[00:11:28] Navrina Singh: How have you actually done evaluation of Claude Sonnet and other models, etc., to make sure that you not only have an understanding of risk, but also the risk mitigation? Then as you move to the GenAI application developer, you need to understand the context. Okay, if you're taking an LLM and applying that to customer support, versus applying that to a search functionality, what is the context-dependent risk within that? And then you go down the value chain. So that one dimension is important to understand.
[00:12:02] Navrina Singh: And then the second dimension to understand is, as you mentioned, that a lot of the core operational capabilities are happening at the ops layer now, the LLMOps and the AIOps, the entire CI/CD pipeline, which generally is managed by a technical stakeholder.
[00:12:20] Navrina Singh: And then on the flip side, when you think about the business user who has the business context, they might be using some of the more traditional GRC tools. But when you think about that, Krishna, what ends up happening is there's a massive last-mile problem. And the last-mile problem between the ops ecosystem and the business ecosystem is what we have defined as AI governance, and that AI governance category basically requires solving three core problems.
[00:12:51] Navrina Singh: One is alignment. When you as a business are using artificial intelligence for making, say, claims processing decisions, what do you need to really measure effectively in your technical stack and in your business processes, and have you as an organization aligned on it? So the AI alignment problem is top of mind, and you as a business need to align on it.
[00:13:17] Navrina Singh: The second is, how do you, once you've aligned on what you're going to be measuring, how do you go and effectively gather evidence from your tech stack, your op stack, but also your business processes to make sure within what you've aligned on, you actually are doing those things, right?
[00:13:34] Navrina Singh: And then the third is once you collect that evidence, how do you feed that into an ongoing risk and compliance engine so that continuously, whether you're in development or you're in production, you are continuously making sure you're still aligned. And anytime you go out of alignment, you basically flag that, right?
[00:13:53] Navrina Singh: That is what a good governance posture looks like, where you're continuously checking for these checks and balances, and it's honestly, you know, it sounds complex, but it is the hygiene factor that is needed to make AI actually work and reduce the AI incidents.
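To make the three-step loop described above concrete, here is a minimal sketch in Python of align, gather evidence, and continuously check. The requirement names, thresholds, and the evidence callback are hypothetical illustrations; this is not Credo AI's or Fiddler's actual API.

```python
from dataclasses import dataclass
from typing import Callable

# Sketch of the three-step loop: (1) align on requirements, (2) gather
# evidence for each one, (3) continuously flag anything out of alignment.

@dataclass
class Requirement:
    name: str                      # e.g. "disparate_impact_ratio"
    threshold: float               # the value the business has aligned on
    higher_is_better: bool = True

def check_alignment(requirements: list[Requirement],
                    collect_evidence: Callable[[str], float]) -> list[str]:
    """Steps 2 and 3: pull evidence for each aligned requirement and
    report anything that has drifted out of alignment."""
    violations = []
    for req in requirements:
        value = collect_evidence(req.name)   # e.g. pulled from an ops/observability stack
        ok = value >= req.threshold if req.higher_is_better else value <= req.threshold
        if not ok:
            violations.append(f"{req.name}: observed {value:.3f}, required {req.threshold}")
    return violations

# Step 1 (alignment) is the business deciding these numbers up front.
aligned = [
    Requirement("disparate_impact_ratio", 0.80),
    Requirement("hallucination_rate", 0.02, higher_is_better=False),
]

# In production this would run continuously; here it is a single pass.
evidence = {"disparate_impact_ratio": 0.85, "hallucination_rate": 0.05}
print(check_alignment(aligned, collect_evidence=evidence.__getitem__))
# -> ['hallucination_rate: observed 0.050, required 0.02']
```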
[00:14:11] Krishna Gade: Now that's a great three-step process: align on what you want to measure, and then find the evidence to measure it. And then, the third one was actually measure, or, was it react to when
[00:14:24] Navrina Singh: It's ongoing risk
[00:14:26] Krishna Gade: It's ongoing measurements.
[00:14:27] Navrina Singh: Yeah. Ongoing risk and compliance management, obviously making a very strong case for AI incidents management as well.
[00:14:34] Krishna Gade: So let's talk about the alignment thing. That's very interesting, right?
[00:14:37] Krishna Gade: So as you said in the beginning, every industry's concerns are different. You know, healthcare maybe cares about fairness more than maybe, you know, let's say other industries that might care more about robustness or reliability. Uh, what do you do, um, you know, maybe now you can sort of talk about your business a little bit, you know, when you're, when you sort of work with your customers from Credo side, how do you help customers to align on this thing and figure out what to measure, how do you help them?
[00:15:04] Navrina Singh: Yeah, so at Credo AI, we have built software which is basically that single pane of governance that integrates into whatever your technical stack is, across your entire CI/CD as well as your business processes. And in the AI alignment engine, which we have within our system, we are basically doing a couple of things.
[00:15:31] Navrina Singh: We are first codifying all the industry best practices, all the AI standards like the NIST AI Risk Management Framework and ISO 42001, and all the regulations that exist or are emerging, whether that's the EU AI Act or New York City Local Law 144. And then we also codify all the company policies. This is really critical because our alignment engine is powered by an enterprise's ability to also feed in what they care about for that use case.
[00:16:05] Navrina Singh: So to make it a little bit more concrete, one of the largest financial services companies is our customer. They've been our customer for the past four years. They have a lot of predictive ML and GenAI use cases. As you can imagine, they have multiple fraud and risk scoring use cases. Now within fraud, they care not just about what is happening in the regulatory ecosystem; they have very high thresholds of what they want to measure and how they want to measure it, especially around making sure that fraud is not getting triggered for certain protected-class demographics, and then being very thoughtful about what they're going to be measuring there.
[00:16:45] Navrina Singh: So as you can imagine, having the ability as an organization to align on that, saying I'm going to be measuring the disparate impact ratio, as an example, for certain protected attributes, and in this case it could be a zip code, it could be a name, and then really feeding that into this engine, becomes really critical.
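As an aside on the disparate impact ratio mentioned here: it is the rate of favorable outcomes for a protected group divided by the rate for a reference group, often checked against the four-fifths rule of thumb. A minimal sketch follows, with hypothetical column names and data.

```python
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str,
                           protected: str, reference: str,
                           outcome_col: str = "approved") -> float:
    """Favorable-outcome rate for the protected group divided by the
    favorable-outcome rate for the reference group."""
    protected_rate = df.loc[df[group_col] == protected, outcome_col].mean()
    reference_rate = df.loc[df[group_col] == reference, outcome_col].mean()
    return protected_rate / reference_rate

# Example: approval decisions by a demographic group derived from, say, zip code.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   0,   1,   1,   1,   1,   0],
})
ratio = disparate_impact_ratio(decisions, "group", protected="A", reference="B")
# A common rule of thumb (the "four-fifths rule") flags ratios below 0.8.
print(f"DIR = {ratio:.2f}", "FLAG" if ratio < 0.8 else "ok")
```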
[00:17:06] Navrina Singh: So I think Krishna, what we are finding is that that alignment, uh, really depends upon use case context. So Credo AI platform does not govern at model or dataset level. We govern at the use case level, which basically is an understanding around what kind of application are you using AI for, where is it going to be deployed, which part of the world is it going to be deployed in, who is it impacting, and within that context, that's how we measure and define the alignment for that particular use case.
[00:17:45] Krishna Gade: Well, that's fascinating, right? You know, from an observability point of view, we often run into customers that say, hey, I'm an insurance company, Colorado suddenly started regulations for insurance; how would we make sure that we are compliant against that?
[00:18:03] Krishna Gade: And I know we're coming back to regulations, but that's like another form of alignment, right? Because regulations can come in. So would your platform have building blocks that say, if I'm an insurance company, you have to be mindful of Colorado, California, whatever regulations, and these are the things that you need to measure? Does it recommend what needs to be measured, or how? Maybe you can get a little bit more into the details.
[00:18:32] Navrina Singh: Absolutely. So Colorado is a very interesting example. Colorado Senate Bill 21-169 is, I think, what you might be alluding to. Basically, what that requires of insurance providers is that you need to be doing an impact assessment, and then it gets very prescriptive about what you need to be measuring within that.
[00:18:50] Navrina Singh: So in this instance, Credo AI has already codified that particular piece of regulation. If you're an insurance provider, you will buy the Credo AI platform, and if your end objective is just compliance, you register your use case within Credo AI. We have the ability for you to auto-register if we are connected into your model repositories; if you don't want to give us the ability to connect into your tech stack, then you manually register the AI use case.
[00:19:23] Navrina Singh: But once we've done that registration of the use case within Credo AI, Credo AI's smart engine basically suggests: hey, given that we think this particular use case is a high- or critical-risk application in insurance, and you are deploying this in Colorado, you really need to be adherent to Colorado SB 21-169, as an example, and here are the steps, the methodical way you will go through that compliance journey.
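As a rough illustration of the kind of matching this "smart engine" step implies, registered use-case metadata (sector, deployment region) can be checked against a small rule set to suggest applicable regulations. The rule set and field names below are invented for illustration, not Credo AI's implementation.

```python
# Hypothetical applicability rules keyed on sector and jurisdiction.
APPLICABILITY_RULES = [
    {"regulation": "Colorado SB 21-169", "sector": "insurance",  "jurisdictions": {"US-CO"}},
    {"regulation": "NYC Local Law 144",  "sector": "employment", "jurisdictions": {"US-NY"}},
    {"regulation": "EU AI Act",          "sector": None,         "jurisdictions": {"EU"}},
]

def applicable_regulations(use_case: dict) -> list[str]:
    """Return the regulations whose sector and deployment regions match the use case."""
    hits = []
    for rule in APPLICABILITY_RULES:
        sector_ok = rule["sector"] is None or rule["sector"] == use_case["sector"]
        region_ok = bool(rule["jurisdictions"] & set(use_case["deployed_in"]))
        if sector_ok and region_ok:
            hits.append(rule["regulation"])
    return hits

claim_scoring = {"name": "auto claims risk scoring",
                 "sector": "insurance",
                 "deployed_in": ["US-CO", "US-TX"]}
print(applicable_regulations(claim_scoring))   # ['Colorado SB 21-169']
```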
[00:19:54] Navrina Singh: But as I mentioned, that compliance journey, um, basically requires you having an understanding of what's happening at your models, your datasets, your application level, but it also needs to have an understanding of what is happening on the business level. And this is where the last mile problem really comes in.
[00:20:14] Navrina Singh: Um, one thing I do want to highlight, uh, you know, like when you think about Fiddler and, and observability tools, they do a great job for technical stakeholders. What we are finding in AI governance is it's a multi stakeholder problem because you're bringing in risk and compliance and policy experts to basically understand and interact with the outputs of an AI system coming from very deep technical expertise.
[00:20:42] Navrina Singh: So Credo AI platform also does that translation to really help technical stakeholder understand the requirements from a business perspective and the business stakeholders make sense of the technical outputs that might be coming from your CICD pipeline and from your ops layer.
[00:21:02] Krishna Gade: Yep. So basically, you know, just to summarize it, you, you have this alignment platform that can help customers define metrics, recommend metrics, you know, based on regulation or their own policy.
[00:21:12] Krishna Gade: And then you can integrate with like ops platforms like Fiddler to collect the evidence and translate it. So that, you know, the lesser technical users or the business and compliance users can make sense out of it.
[00:21:23] Krishna Gade: And then obviously you continuously build this oversight, not just once. So that's kind of the three-step process.
[00:21:30] Navrina Singh: Exactly. So it's alignment, and then it's a connection into ops platforms like Fiddler to really make sense of what the evidence is, and then risk management, and compliance on an ongoing basis to make sure that your holistic governance of that AI use case is managed across the entire lifecycle.
[00:21:51] Krishna Gade: Awesome, that's great. Wonderful. So now Navrina, I mean, what have you seen change in the AI governance landscape?
[00:21:59] Krishna Gade: Pre and post generative AI, you know. Now, these days, every company wants to drink the Kool Aid of generative AI, you know. Like, how is the governance, like, what's like the, has the need for governance increased or, you know, what is, what have you seen?
[00:22:13] Navrina Singh: Yeah, I would say the past two years, Krishna, have drastically shifted how governance previously was seen as maybe a speed bump to AI innovation. And now we are seeing the flip side where it is literally acting as a launch pad for AI innovation. So let me explain why that difference, right? So one is enterprises now are depending a lot on third party GenAI systems, right?
[00:22:40] Navrina Singh: With whatever tooling you're bringing from outside that might help with your employee productivity, etc. And so what we are finding is as you're depending a lot on these third party systems, you certainly as an enterprise are like, Oh, am I introducing more risk to my organization? Am I actually partnering with the right vendor? How can I trust them? Et cetera.
[00:23:02] Navrina Singh: So what we are finding is governance now has become not only a requirement but a competitive advantage that enables enterprises to adopt AI very quickly. Most of our customers are not spending years to roll out GenAI systems in production; they can actually do that in a matter of weeks, because with our system they're going in with eyes wide open to adopt these third-party GenAI systems.
[00:23:29] Navrina Singh: So I would say that generative AI really has shifted the narrative for governance: it is not only an enabler of AI adoption, but on the other side, governance is also a competitive advantage.
[00:23:42] Navrina Singh: So let me give you an example. What we are finding is that our customers, whether it is MasterCard, Northrop Grumman, Booz Allen, some of the massive organizations that use Credo AI, are really finding that governance unlocks a lot of trust and brand resiliency for them, because they have not only understood the risk, they have gone through the rigor of oversight that is needed for artificial intelligence.
[00:24:13] Navrina Singh: And so we are seeing, one, how AI governance is now becoming this launchpad to adopt AI very fast. But secondly, they are able to innovate with AI faster, build out trusted products, put them out in the market, capture more customers, retain more customers, and shorten their procurement cycles. So the ROI of governance has just accelerated within our customer base.
[00:24:37] Krishna Gade: Awesome. Yeah, so it seems like the risks around generative AI are higher than, than, than predictive AI because of all the things that you just mentioned, and they have, but the desire to adopt it is also higher. So in order to adopt it safely, you know, AI governance is becoming an enabler for these companies.
[00:24:56] Krishna Gade: So that's wonderful.
[00:24:57] Navrina Singh: Yep. Absolutely.
[00:24:59] Krishna Gade: So another thing that I'm noticing, and I'm sure you probably hinted at this, is that these days an enterprise is buying as much AI as it is building. Previously, maybe in the predictive world, a lot of the machine learning models were trained in house, right? Like on your own datasets.
[00:25:18] Krishna Gade: These days, a lot of AI is being bought either through foundation models or even through embedded AI products. You know, you might be buying the next CRM product or the next, uh, you know, call center, uh, you know, transcribing product where everything is coming up with embedded generative AI.
[00:25:33] Krishna Gade: How do you, how are you handling these things? Where is governance playing, uh, you know, part here?
[00:25:39] Navrina Singh: Uh, Krishna in everything. So I think you just articulated one of the biggest challenges enterprises are facing is because of AI's proliferation in pretty much every business process and every application that they're using, the risk surface area has also, you know, uh, increased equally.
[00:25:58] Navrina Singh: And so now, whether it is AI use or shadow AI use, in both scenarios enterprises are asking themselves: how do we not only ensure that we have invested in the right tooling, which is all powered by artificial intelligence, but also protect our brand and create that trust metric?
[00:26:21] Navrina Singh: So where does governance play in? So governance plays in, I would say, in two core areas.
[00:26:27] Navrina Singh: One is if you are buying a third-party GenAI system: at the time of procurement, there's a huge focus on governance and really making sure that the enterprise is making the right bets and understanding the risk of this third-party system. But as I mentioned, one of the core things is not only governance at buying time, but also how you apply it to a use case, so ongoing governance is needed.
[00:26:53] Navrina Singh: And then the second place is if you are building your own in-house AI application, throughout your AI lifecycle, from the time you're designing it and developing it to putting it in production. We are finding that this single pane of governance has now become a de facto requirement in most of the AI-mature enterprises who know that AI is the way they're going to win in this space, and that for them to continue to win, they have to invest in governance.
[00:27:21] Navrina Singh: So I would say those are the two areas where you've seen governance really play a big role.
[00:27:25] Krishna Gade: So let's double-click on those two areas, right? Let's talk about the custom AI app. The most common custom generative AI app people are building is a RAG application; it's a glorified search engine. I have some documentation, unstructured data, maybe my policy documents, my frequently asked questions. I want to build a RAG application for my employees or my customer service department. Now, as I go through this, how do I institutionalize AI governance so that I can build better, trustworthy, responsible AI?
[00:27:57] Krishna Gade: Now, can you just walk through like this process so that, you know, some of our audience can learn from that.
[00:28:03] Navrina Singh: Yeah, absolutely. So I think even in the RAG based system, one question that you should be asking is, what is the foundation model you're using? Whether it is open source or whether you're buying it from third party, what is that particular, um, you know, um, core technology, is it built in house or not, or third party?
[00:28:21] Navrina Singh: Now for that, you obviously have to do your own complete risk assessment, and Credo AI can help you with that. But the second is, once you connect it to the right data sources within the organization, so let's say you're building a benefits FAQ for your employees. In that case, your system is going to be grounded in the HR knowledge that exists within your business.
[00:28:44] Navrina Singh: Now as you can imagine, once you're building these RAG based systems, you're still going to run into the same issues of PII. You're going to run into issues of hallucination. You're going to run into the issues of making sure that there is no confidential information that gets leaked, right? So this becomes really critical to have your governance, uh, definition of what is important for this benefits chatbot that you might be building.
[00:29:10] Navrina Singh: And, and against those metrics, there should be no PII leakage. We want to make sure that it is, um, you know, not hallucinating. It's actually giving you the right information for the company, even though it is grounded in the right data.
[00:29:24] Navrina Singh: How are you actually creating not only the technical controls, but also the process controls? And what I mean by process controls is before you launch this chatbot, who within the organization would have reviewed and attested to the soundness of the system.
[00:29:39] Navrina Singh: So what Credo AI does is basically help codify all your policy requirements into a set of controls. And then, within those controls, Credo AI's alignment engine very clearly spits out a set of evidence that needs to be collected from your ops layer and from your processes. And yes, there is human-in-the-loop oversight here, because this is a critical system; this is what we would classify as a high-risk application, because it is touching HR and it is touching a lot of sensitive information within the organization.
[00:30:16] Navrina Singh: So Credo AI's risk engine in this scenario would say: this is a high-risk application, and based on that, here are the controls and the satisfying evidence requirements. Then our system basically goes and does this interrogation, because we are connected and integrated into your organization, and then we run this ongoing risk and compliance engine.
[00:30:39] Navrina Singh: And as you can imagine, the output of that is a very holistic report around what the potential risks are. If there are inherent risks that we've determined, for PII or for hallucination, then we also provide out-of-the-box mitigations. So your governance team can actually start applying those mitigation controls to make sure that the overall risk starts to reduce. Maybe it's not eliminated, but now you're reducing that risk, and the output is a risk report, which basically gives you a view that, for this HR application, we have reduced and mitigated as much risk as possible.
[00:31:19] Navrina Singh: The rest of the risk we are transparently sharing with the users, and then we are going to take accountability for it. So that's the process that companies go through as they are using Credo AI for RAG-based systems. But I think risk classification becomes very critical. If it is a very low-risk application, let's say you're just creating some fun marketing images which are not going to be used externally, that's a very low-risk scenario.
[00:31:46] Navrina Singh: In that case, you will not have as much human oversight. And so Credo AI's engine will say this is low risk, and then it basically does automatic governance without involving you in the loop.
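One possible shape for the controls-and-evidence idea in the benefits chatbot example above: each control names the evidence it expects from the ops layer or a business process and a pass/fail test, and the results roll up into a simple risk report. The schema, metric names, and thresholds are invented for illustration, not Credo AI's actual data model.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Control:
    control_id: str
    description: str
    evidence_metric: str                 # what must be pulled from the ops layer or a process
    passes: Callable[[object], bool]     # pass/fail test over that evidence

benefits_chatbot_controls = [
    Control("PII-01", "No PII leakage in responses",
            "pii_leak_rate", passes=lambda v: v == 0.0),
    Control("HAL-01", "Ungrounded-answer rate under the agreed threshold",
            "ungrounded_answer_rate", passes=lambda v: v <= 0.02),
    Control("HR-ATT-01", "HR owner attested to launch readiness",
            "hr_signoff", passes=lambda v: v is True),
]

def risk_report(evidence: dict) -> list[tuple[str, str]]:
    """Roll each control's evidence up into a simple pass / open-risk view."""
    return [(c.control_id, "PASS" if c.passes(evidence[c.evidence_metric]) else "OPEN RISK")
            for c in benefits_chatbot_controls]

print(risk_report({"pii_leak_rate": 0.0, "ungrounded_answer_rate": 0.04, "hr_signoff": True}))
# -> [('PII-01', 'PASS'), ('HAL-01', 'OPEN RISK'), ('HR-ATT-01', 'PASS')]
```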
[00:31:57] Krishna Gade: How does it identify low risk versus high risk? Is it human guided, where the customer guides you, or do you automatically identify it, or a bit of both?
[00:32:06] Navrina Singh: Yeah, so I would say it's a bit of both, but the first pass is all automated. So what Credo does is when you register your use case, we also collect a bunch of metadata information. So Credo AI is very thoughtful about, at the time of registration, trying to understand what is the use case. Where is it going to be deployed?
[00:32:30] Navrina Singh: Where are the data sources coming from, etc.? And based on that metadata, our risk engine basically does the first-level pass to say: is this low risk, is this medium risk, is it high risk or critical risk? One of the beauties of our platform is we can also give you, as an enterprise, the ability to customize your risk thresholds, because what might be risky for Fiddler AI might not be risky, let's say, for Credo AI, right?
[00:33:01] Navrina Singh: So how do we actually give you and your teams the ability to customize what you define as low and high risk? And then based on that and the metadata collection, we do the first level pass that, hey, based on your company's risk appetite and based on what we've collected about your use case, we think this is a low risk or whether it is a critical risk application.
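Here is a hypothetical first-pass risk classifier over registration metadata with an enterprise-tunable risk appetite, in the spirit of what is described above; the factors, weights, and thresholds are made up for illustration.

```python
# Made-up risk factors and weights scored from use-case registration metadata.
RISK_FACTORS = {
    "touches_pii": 3,
    "external_facing": 2,
    "affects_employment_or_credit": 4,
    "regulated_jurisdiction": 2,
}

def classify_risk(metadata: dict, appetite: dict | None = None) -> str:
    """Score the use case from its metadata, then map the score to a tier
    using the enterprise's own thresholds (defaults shown)."""
    thresholds = appetite or {"critical": 7, "high": 5, "medium": 3}
    score = sum(weight for factor, weight in RISK_FACTORS.items() if metadata.get(factor))
    for tier in ("critical", "high", "medium"):
        if score >= thresholds[tier]:
            return tier
    return "low"

hr_chatbot = {"touches_pii": True, "external_facing": False,
              "affects_employment_or_credit": True, "regulated_jurisdiction": False}
print(classify_risk(hr_chatbot))                                            # 'critical'
print(classify_risk(hr_chatbot, {"critical": 9, "high": 6, "medium": 3}))   # 'high'
```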
[00:33:24] Krishna Gade: Got it, okay, makes sense. So now, shifting gears, the second form of AI we just talked about is customers buying, procuring AI products. I'm not even talking about foundation models; I'm actually talking about full-blown AI products with embedded generative AI or embedded AI, right?
[00:33:42] Krishna Gade: Where, if I'm buying a bunch of these platforms, my marketing team, my CS team are all buying these things, how do I institutionalize AI governance for these types of tools? Especially since you said you intercept the procurement workflow itself, how would that work?
[00:33:59] Navrina Singh: Yeah, so in that case, um, Credo AI, basically, obviously, as you can imagine, uh, we ourselves do a lot of third party independent risk assessment of these applications. You can actually go to our website and look at the assessment that we've done of the most adopted GenAI tools in the ecosystem by most of our customers.
[00:34:18] Navrina Singh: But as you are going through this procurement decision, you can actually invite, you know, that particular vendor to basically respond to some of the questionnaires. And now in this case, you can imagine, um, you know, because you are buying a third party GenAI application, you might not have the entire visibility into their AI use case.
[00:34:39] Navrina Singh: You might not even have visibility into their datasets. You might obviously not have visibility into how they've built the system. So in this case, yes, there is a dependency on making the sort of like the evidence collection conditional on the vendor. And so within Credo AI, you have a mechanism by which you can invite the vendor to basically respond to responsible AI questionnaires around, Hey, what is the, how, how did you build these systems? What were the sources of data? How did you do your testing? What were the output of those testings, et cetera, et cetera.
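One illustrative way to structure that vendor flow: a fixed responsible-AI question set plus a completeness check that flags what evidence is still missing from the vendor. The questions are examples only, not an established standard.

```python
# Example responsible-AI questionnaire a buyer might send a GenAI vendor.
VENDOR_QUESTIONNAIRE = [
    "How was the system built (in-house model, fine-tuned third party, API)?",
    "What were the sources of training and fine-tuning data?",
    "What testing and red-teaming was performed, and what were the results?",
    "What risk mitigations are in place, and what residual risks remain?",
]

def review_vendor_response(answers: dict[str, str]) -> list[str]:
    """Flag unanswered questions so the buyer knows what evidence is still missing."""
    return [q for q in VENDOR_QUESTIONNAIRE if not answers.get(q, "").strip()]

response = {VENDOR_QUESTIONNAIRE[0]: "Fine-tuned third-party foundation model",
            VENDOR_QUESTIONNAIRE[2]: "Internal red team; summary report attached"}
print(review_vendor_response(response))   # the two unanswered questions
```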
[00:35:15] Navrina Singh: And this is one of the core things that, as you can imagine, we've been actively working on: sort of a SOC 2 for responsible AI, a standard. So more to come on that next year.
[00:35:27] Navrina Singh: But one of the things that we are finding is that there is a need for standardizing what enterprises need and what the GenAI vendors need to provide to really make this procurement frictionless. And Credo AI has already operationalized that within our platform.
[00:35:42] Krishna Gade: Yeah, so let's talk about that evidence collection and the monitoring part of it. Since we specialize in the monitoring of metrics, what do you think are the key metrics or indicators that organizations should monitor as part of their AI governance efforts? Say, maybe starting with generative AI, what have you found to be effective?
[00:36:08] Krishna Gade: What is most needed and important?
[00:36:11] Navrina Singh: Yeah, and I think one of the best ways to think about it, Krishna, is really at the end of the day, what is the application? What is the context of use, right? So as you can imagine, uh, some of the basic things that, you know, we are monitoring is, um, obviously the prompts.
[00:36:29] Navrina Singh: We are looking at: hey, what are the policies, especially for an enterprise's use of a large language model mostly delivered as a chatbot? What are the prompts that may or may not be allowed for an enterprise user? In that case, you can institute those policies within Credo AI, and they get farmed out to the ops tools or the guardrails that some of the LLM providers are providing. So really instituting what good looks like for the enterprise is one area.
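A minimal sketch of that prompt-policy idea: rules defined centrally and enforced as a simple allow/deny check before a prompt reaches the model, of the kind that could be pushed down to an ops or guardrail layer. The rules shown are made-up examples.

```python
import re

# Example prompt policies an enterprise might define centrally.
PROMPT_POLICIES = [
    ("no_customer_ssn", re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),               # block SSNs in prompts
    ("no_code_exfiltration", re.compile(r"(?i)paste.*proprietary.*code")),
]

def prompt_allowed(prompt: str) -> tuple[bool, list[str]]:
    """Return whether the prompt is allowed and which policies it violates."""
    violations = [name for name, pattern in PROMPT_POLICIES if pattern.search(prompt)]
    return (not violations, violations)

print(prompt_allowed("Summarize our PTO policy"))                    # (True, [])
print(prompt_allowed("Customer SSN is 123-45-6789, look them up"))   # (False, ['no_customer_ssn'])
```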
[00:37:02] Navrina Singh: The second thing, as you can imagine, is there's this active monitoring requirement around how far the data is drifting or whether there is concept drift. So we are able to pull in that information, but for us at Credo AI, as I mentioned, we don't really care about those technical metrics in isolation.
[00:37:20] Navrina Singh: What we care about is how those technical metrics impact let's say reputational damage. Is this actually going to cause your system to be, uh, unfair, which does not align with your business practices? Is this going to cause your business, um, to really have maybe a non compliance situation? So when we are connecting to these technical tools, a lot of that is translated into business objectives for us, rather than just staying in the realm of what is the precision, what's the recall, what's the drift happening?
[00:37:52] Navrina Singh: For us, we are doing that translation actively for the governance teams.
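To illustrate that translation, here is a hypothetical mapping from raw ops-layer metrics (drift, fairness ratios, PII leakage) to the business-level concerns a governance team tracks; the metric names and thresholds are illustrative only.

```python
# Map from ops-layer metric to (business concern, threshold, note for the governance team).
METRIC_TO_BUSINESS_IMPACT = {
    "prediction_drift":       ("reliability", 0.15, "Decision quality may be degrading"),
    "disparate_impact_ratio": ("fairness",    0.80, "Potential non-compliance and reputational risk"),
    "pii_leak_rate":          ("privacy",     0.00, "Possible data-protection breach"),
}

def governance_signals(ops_metrics: dict[str, float]) -> list[dict]:
    """Surface only the breaches, framed as business concerns rather than raw metrics."""
    signals = []
    for metric, value in ops_metrics.items():
        if metric not in METRIC_TO_BUSINESS_IMPACT:
            continue
        concern, threshold, note = METRIC_TO_BUSINESS_IMPACT[metric]
        # For the fairness ratio, lower is worse; for the others, higher is worse.
        breached = value < threshold if metric == "disparate_impact_ratio" else value > threshold
        if breached:
            signals.append({"concern": concern, "metric": metric, "value": value, "note": note})
    return signals

print(governance_signals({"prediction_drift": 0.22, "disparate_impact_ratio": 0.91}))
# -> only the drift breach is surfaced, framed as a reliability concern
```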
[00:37:57] Krishna Gade: Yep, makes sense. Cool, uh, awesome. So, uh, you know, just, you know, moving to the third pillar that we wanted to cover.
[00:38:05] Krishna Gade: The elephant in the room, you know, especially when it comes to governance, is still regulation, right? And, you know, the EU AI Act is set to have a significant impact on AI regulation.
[00:38:15] Krishna Gade: You know, how do you see this act shaping the global AI landscape? And, you know, uh, what do you see other countries doing, especially the US? And, you know, where do you see that?
[00:38:24] Navrina Singh: Yeah. So, again, I always like to start the regulation conversation with this: if you as an organization only care about regulation in artificial intelligence, you should really count yourself out of the AI race.
[00:38:39] Navrina Singh: I know that is a very strong statement, but it is the right statement as well, because there is such a huge benefit here. Similar to when we were building traditional software, where we really invested a lot in QA and testing to make sure it was robust, in artificial intelligence, because of the very dynamic nature of this technology and its socio-technical impact, the same QA processes need to evolve into these governance structures.
[00:39:09] Navrina Singh: And I think that is really important to understand: governance is not some fuzzy thing. It is a very rigorous evaluation, testing, and oversight of your AI systems so that you can actually have these AI systems meet their objectives. So I do want to start with that foundation. Having said that, as you can imagine, the impact of these systems is going to be so massive, especially in, I would say, high-risk and critical infrastructure areas, like employment, healthcare, and law enforcement.
[00:39:46] Navrina Singh: It is becoming really critical for countries to really think about what they are going to put in place, from a regulatory structures perspective, to make sure the human rights of their citizens are maintained and also that they have a really clear understanding of risks. So with that perspective, I would say, obviously I have been very involved in not only the global regulatory ecosystem, but also working with the policymakers to really understand what's feasible and what's not feasible in the technical domain.
[00:40:23] Navrina Singh: And the example you mentioned is the EU AI Act. So for this audience, the EU AI Act is a really critical piece of regulatory movement that is being driven by Europe, which really focuses on a risk-based approach for artificial intelligence and for frontier models: what is the risk classification of a particular AI application, and how does that impact the rights of European citizens?
[00:40:57] Navrina Singh: And within that context, whether you are a US company that wants to operate in Europe, or whether you're a European company building for Europe market, there are certain requirements you have to adhere to and go through to be able to basically operate in Europe.
[00:41:12] Navrina Singh: So the EU AI Act, I would say, similar to GDPR, is one of the most pivotal regulatory motions that has been created. And I do believe that it's going to have a Brussels effect, just like GDPR did, at least to make sure that there are the right guardrails for artificial intelligence systems. By the way, there's still a lot of work in making sure it actually works.
[00:41:38] Navrina Singh: And in terms of timeline, the first set of requirements, especially around AI literacy, goes in in February of 2025, and then organizations and enterprises operating critical AI systems, especially in Europe, have another two to two and a half years to comply with the EU AI Act requirements, which I'm happy to dive deeper into.
[00:42:04] Krishna Gade: Yeah, absolutely. And also there are newer ISO standards, right, like ISO 42001, that are emerging. Nowadays, we see a lot of compliance tools trying to say that they're incorporating these standards. Do you see a complete alignment between the standards and what the regulations are asking for? Or where do you see gaps, if there are any?
[00:42:26] Navrina Singh: So, you know, right now we are still in early evolution of standards and regulation. Um, there is not a single harmonized standard. Having said that, there is a lot of great work happening, uh, between United States and our allies to make sure that that harmonization of standards happen.
[00:42:45] Navrina Singh: So the ISO 42001 standard that you've mentioned, uh, it's a really interesting standard that really focuses on, um, sort of like artificial intelligence management, uh, systems, which is really focused at the organization level. ISO 42001 does not go into measuring the evidence within your actual AI use cases, models, and datasets. It's still high level organization to make sure that you are building the governance posture for your, uh, your, uh, for your business.
[00:43:22] Navrina Singh: So, having said that, yes, there are a lot of great initiatives, including from NIST, the National Institute of Standards and Technology, which developed the AI Risk Management Framework now almost two years ago. That, obviously, again is a baseline U.S. standard, but there is a need for making what's called a contextual profile.
[00:43:43] Navrina Singh: How does this apply to facial recognition? How does this apply to fraud? How does this apply to claim processing? That work right now is happening. So we're going to see much more, I would say, um, harmonization of, of, uh, AI standards.
[00:43:58] Navrina Singh: The ones that I would love for this audience to pay attention to are obviously the work being done in ISO, the work being done by NIST, and also Singapore's Model AI Governance Framework and AI Verify, which have been doing some really exciting work, especially as we think about evaluations and benchmarks for large language models. MLCommons just introduced and launched its first AI safety benchmark; for those who haven't paid attention to it, certainly look at the work that was launched yesterday. So a lot of exciting stuff is happening. However, we are not at a stage where there is strong harmonization already put out.
[00:44:38] Krishna Gade: Got it, got it. So a couple of questions here. Do you see a world where, here in the United States at least, state governments go ahead and have their own AI acts or regulations?
[00:44:50] Krishna Gade: You know, California launched a bunch of bills a couple of months ago around AI transparency and especially an attribution of how the generative AI models are coming out with the contents. We talked about Colorado, there's New York City hiring regulation. How do you see the world? Do you see like a lot of fragmentation happening, at least in the interim, in the states?
[00:45:09] Navrina Singh: Absolutely, Krishna. I would say in the United States, we are seeing states really leading on AI policy and AI regulation. So we are going to continuously see it, whether it is NYC Local Law 144, whether it is Colorado's SB 21-169, whether it is anything else specifically coming from California, Illinois, etc.
[00:45:33] Navrina Singh: We are going to continuously see states putting out more, I would say focused regulation rather than a federal level, single regulation. I don't think, um, I don't believe that's going to happen in at least the next couple of years.
[00:45:49] Krishna Gade: Yeah. And this is obviously probably putting some pressure on GRC teams in these large organizations. How do you see them being able to be ready for these different statewide regulations in the next one to two years?
[00:46:05] Navrina Singh: Yeah, so I would say, one, use Credo AI, so I want to do a shameless plug for us, because at Credo AI we have what's called our policy intelligence, where we have a core policy team that is working very closely across the ecosystem, in the United States and globally, with the policymakers to really understand what is realizable. Especially, we want to make sure that AI adoption happens really fast, because it is one of the most transformative technologies.
[00:46:39] Navrina Singh: Most businesses should be adopting artificial intelligence, but then how do you actually put in place the right guardrails? So, within Credo AI, we do have a policy intelligence function, not only through our policy team but also within our platform, which is this engine that is consuming information and keeping organizations very up to date on all the regulatory movements, but also helping codify that into controls and evidence requirements. So that's one thing.
[00:47:07] Navrina Singh: The second thing is I think this is why I spend a lot of time with the public private partnership because just like as a technologist and as an engineer, I didn't understand the policy side. So I've spent the past five years trying to work with policymakers, understanding how they think about regulation.
[00:47:26] Navrina Singh: We are also seeing policymakers engaging very actively with technologists like us to really understand what's realistically possible and not possible. And if you think about some of the work that's happened in California with SB 1047, et cetera, there was a lot of multi-stakeholder engagement to really understand what realistic guardrails might look like.
[00:47:50] Krishna Gade: Got it. Makes sense. Awesome. So maybe we'll take an audience question here. There's one question around: AI GRC seems to be focused on compliance of the generative AI systems in an organization. What AI frameworks are used? Can the GRC tools support industry frameworks, for finance, healthcare, and so on?
[00:48:09] Navrina Singh: So you know, I think this is where, um, I've, I've cautioned the industry from using GRC for AI as a core term. AI governance is the term, um, that we have coined actively, which basically takes into account the dynamic nature of artificial intelligence and also the context requirement to be able to put right guardrails around artificial intelligence and AI governance has been adopted as a term by G7, by obviously the White House, etc. So I think that's an important thing to understand.
[00:48:47] Navrina Singh: The, the second thing to understand is, um, can the GRC tool support other industry frameworks? Absolutely, they need to because as you and I were discussing, Krishna, AI is proliferating most of the use cases that were traditionally done in a very deterministic manner, right?
[00:49:08] Navrina Singh: And most of the companies are now recognizing, in many of those use cases, that they should start using these AI capabilities. So we do need to apply existing frameworks, for example the MRM, the Model Risk Management framework, SR 11-7, in the financial industry. How does that get applied to now AI-based financial use cases?
[00:49:33] Navrina Singh: And so we are seeing the need for AI governance to not only be adaptable, but to be able to actually adopt some of these existing frameworks.
[00:49:52] Krishna Gade: Awesome, great. I guess, finally, there's always this tension, and we talked about it a little bit early on as well: governance versus slowing down innovation. There's also this posture from policy and government that, hey, if you introduce regulations too early, then we will slow down innovation. Companies might be feeling that as well. What would be the general advice that you would offer to AI teams that are developing AI, and how should they think about these responsibility principles and governance practices?
[00:50:35] Navrina Singh: You know, I don't know the audience for this webinar completely, but one of the examples I always use is this notion of tech debt, right?
[00:50:48] Navrina Singh: Uh, when you are moving really fast and then we've done that throughout our careers, building innovative products. We start to accrue a lot of tech debt because we just want to launch something fast, put it out there. We might not do comprehensive testing and we just want it to be out in the market. And I think one of the things with artificial intelligence, we need to be very cautious about is most of the businesses that don't have governance have started to accrue what I call the responsible AI debt.
[00:51:18] Navrina Singh: And this debt is a very difficult one to go back and pay down because of the dynamic nature of these artificial intelligence systems. So one piece of guidance, and what I'm finding in the companies that are going to lead in this age of AI, is that first and foremost, they are moving away from a risk mindset to a trust mindset. It is not just, I'm going to have a risk management posture because I want to be compliant with regulation. It is a different mindset: how can I do this comprehensive oversight and accountability so that my systems and my applications can be trusted, so I can get the ROI of artificial intelligence? So I think the shift from a risk mindset to a trust mindset is one that we are seeing actively.
[00:52:13] Navrina Singh: The second thing that we are seeing is that AI governance is shifting left. And what that means is, rather than waiting for an AI system to fail in production and then having the oops moment, you really start thinking at the time of designing these AI systems, or procuring third-party applications, about bringing governance and oversight in at that point. And that shift left is a really critical priority that we are seeing in AI-leading organizations.
And the last shift that I would say we've made happen in this industry is that AI governance is not a dirty word. It's actually a word that enables you to create that competitive advantage for your business, because if you don't have governed AI applications, as I mentioned, you can count yourself out of this AI age already. You can go very fast, but you're going to have so many AI failures, and backtracking from those AI failures is going to be so much harder because of the responsible AI debt that you've accrued.
[00:53:20] Krishna Gade: Awesome. I really love that statement about shifting from a risk mindset to a trust mindset. I think that's amazing. There's this old quote that trust is easily lost but very difficult to rebuild. And especially with GenAI, you're building your GenAI apps for end users.
[00:53:40] Krishna Gade: So there's a new question. This one is for Credo AI. How can Credo AI generate SOC 2-type reports for responsible AI on third-party vendors of AI tools? Each user of an AI tool will have its own specific data, technology framework, and business processes that may create specific risks. Do certifications apply to the combination of a vendor AI app and a specific type of data and business?
[00:54:03] Navrina Singh: Yep, lots to unpack there, David. So thank you so much for that question, and happy to have a sidebar after this webinar. But one of the challenges in this ecosystem that we are actively working to solve is those standardized system cards and vendor risk cards. Because one of the challenges is that, right now, everyone is sort of taking these vendors at face value in terms of what the risks within the systems are, how they've tested them, and where the data sources are, and many times there is no visibility or way to validate that.
[00:54:38] Navrina Singh: So there is an active body of work that is happening to create and think about a standard for responsible AI that is going to not only hold vendors accountable, but, something I have actively discussed, mandate responsible AI disclosures for public companies.
[00:54:57] Navrina Singh: And this is where we are actively working with investors on educating them on what kind of questions need to be asked during financial reporting by public companies, but also, on the private side, how we can institute standardized practices like AI model cards, AI system cards, and impact assessment reports. There's a lot of work that we are driving to make sure that this standardized way to test these third-party GenAI systems is actually possible, and that vendors can be held accountable to it, because we don't have visibility into their systems.
[00:55:31] Krishna Gade: Awesome, wonderful. I think we are almost out of time. Thank you so much, Navrina, for spending an hour with us and illuminating us with your thoughts. If you were to summarize the entire thing, it's: develop a trust mindset and put governance processes in your AI workflow, whether you're regulated or unregulated, because with this best practice you'll be able to build a better form of AI and show ROI for your users and for your company.
[00:56:05] Krishna Gade: So with that, I think we come to a conclusion. If you are building AI infrastructure, definitely consider Credo in your AI stack. Fiddler and Credo are somewhat two sides of the same coin of building trust in AI: we are focused on observability, and Navrina and team have done some great work on governance. And I think it really helps you to align and measure how your AI is working.
[00:56:32] Navrina Singh: Thank you so much for having me, Krishna. I'm excited to do more. Thank you.
[00:56:38] Krishna Gade: Absolutely. Thank you