What the EU AI Act Really Means with Kevin Schawinski
In this episode, we're joined by Kevin Schawinski, Co-founder and CEO of Modulos AG.
The EU AI Act is the first comprehensive legal framework for AI: it regulates AI products by risk rather than by technology. Organizations building or deploying AI that touches the EU market need to understand its obligations, timelines, and penalties, and how standards like ISO 42001 can help them prepare.
[00:00:06] Krishna Gade: Good morning and good afternoon to those of you joining from across the world. We are here to talk to you about the EU AI Act. I'm Krishna Gade, Co-founder and CEO of Fiddler, and I'll be your host today.
[00:00:22] Krishna Gade: We have a very special guest on today's AI Explained, and that is Kevin Schawinski, CEO and Co-founder of Modulos AG. Welcome, Kevin.
[00:00:33] Kevin Schawinski: It's a pleasure to be here, Krishna.
[00:00:35] Krishna Gade: Awesome. Kevin is the Co-founder and CEO of Modulos, where he leads the mission to develop and operate AI products and services in a newly regulated era through the Modulos AI governance platform.
[00:00:49] Krishna Gade: Kevin, before we go into it, would you like to say something more about yourself, how you started the company, and what you do, for our audience?
[00:01:00] Kevin Schawinski: Yeah, it's a pleasure to join this forum here. I'm actually an astrophysicist by training, and I got into AI and machine learning something like 10 years ago, when we thought it was a bubble, but of course it was barely the beginning of what it is today.
[00:01:18] Kevin Schawinski: I left academia to found Modulos, first thinking about Responsible AI and Trustworthy AI, and finding methods to help people build better models. Then, about three years ago, we saw the early draft of what is now the EU AI Act. We read this document and tried to understand what it implied if it became law.
[00:01:46] Kevin Schawinski: And we realized this was a fantastic opportunity to build a product for an era that's very different from the one we're used to, one where AI is a product, a regulated product, and a regulated activity. That's how the Modulos AI governance platform was born.
[00:02:05] Krishna Gade: Awesome. That's actually a great segue into our current topic, Kevin.
[00:02:09] Krishna Gade: The last few weeks have been big for AI regulation; in California, we've had Governor Gavin Newsom act on a whole batch of AI bills. So maybe let's start there: what is the EU AI Act? Could you tell our audience what it is and the details?
[00:02:34] Kevin Schawinski: So, the best mental frame for thinking about the AI Act is GDPR.
[00:02:42] Kevin Schawinski: In 2016, the EU decided to set, effectively, global standards for data privacy and data protection. And even though it's a European regulation, it had echoes around the world: in the United States and other countries, we had to start thinking and working seriously on data protection and privacy.
[00:03:05] Kevin Schawinski: The EU saw this and decided that one of the next things they wanted to do was regulate AI. So GDPR covers the data side, and they wanted to cover the algorithm side. And we can go further later on into what the EU really means by AI, because it's not what we have in mind when we hear the word.
[00:03:26] Kevin Schawinski: The EU did early studies on how they would want to go about this, and they settled on two decisions that are fundamental but that were not obvious at the time. And now, when we talk about AI regulation in whatever country or forum, these two decisions are more or less assumed to be the way we do AI regulation.
[00:03:52] Kevin Schawinski: So the first one is that we don't regulate the technology: the AI Act doesn't tell you how many layers you're allowed to have in your neural network, or what your F1 score needs to be. It regulates the product that contains AI, and it doesn't actually care what kind of AI it is. It could be the latest LLM, or it could be a very simple logistic regression.
[00:04:12] Kevin Schawinski: You've now built an AI product, and so you're covered. That's decision number one. And then decision number two was to say that the obligations you have, the things you need to do in order to have a compliant product, scale with the risk of the application. So the same AI model, when it's used to decide whether you get credit or not, is much riskier than when it decides whether a certain email is spam.
[00:04:43] Kevin Schawinski: And so this risk-based approach is something that's being copied by basically everyone. And then, as a final point that I think is interesting and not well known about the AI Act: it has a certain structure, and that structure is inherited from the regulation that many of its writers worked on before the AI Act, the medical device regulation.
[00:05:09] Kevin Schawinski: So, the regulation that applies if you want to bring an X-ray machine or an insulin pump to market in Europe, that's the medical device regulation, and the AI Act has a similar structure. So when you think about, okay, what is this AI Act all about, how does it work? Think about the steps you would take if you wanted to bring a medical product to market, and then a lot of things fall into place.
[00:05:33] Krishna Gade: Absolutely. Yeah. So one of the things that happened this week is that the governor of California vetoed SB 1047, although some other regulations, like SB 942, were passed, right? So there's always this concern when people talk about AI regulation: whether it'll add bottlenecks to AI development, and the crowd is always divided on that.
[00:05:57] Krishna Gade: So, can you tell us what the EU AI Act will do here? What's your view? Will it help open up the AI market in the EU or add further bottlenecks?
[00:06:08] Kevin Schawinski: So the EU has been broadly criticized for stifling innovation with the AI Act, you know, the U.S. invents and the EU regulates. And I think that, in this situation, is slightly unfair, because in the EU there are now clear rules and clear timelines on what you're supposed to do in order to have a product on the market.
[00:06:32] Kevin Schawinski: In the US, where of course most of the AI innovation happens, we have a much more piecemeal, haphazard approach to AI regulation. We have the executive order from President Biden, basically directing the agencies to draw up standards, and actually to already start enforcement. So in Europe, we still have 22 months left.
[00:06:51] Kevin Schawinski: In the United States, the regulatory agencies are already bringing cases to court. And because there's no federal law in the United States, the individual states are cooking up their own laws. California, of course, is working on many of those, but most of the other states are looking at their own laws too, and Colorado has already passed its own AI Act.
[00:07:15] Krishna Gade: Yeah, makes sense. So then, how should a U.S. company think about the EU AI Act? How does it affect, say, a large enterprise that distributes products or services in the EU, or a startup that wants to capture the EU market?
[00:07:34] Kevin Schawinski: So, yeah, you had exactly the right phrase at the end: if you want to capture the EU market.
[00:07:40] Kevin Schawinski: Think about it again like GDPR: it's a European rule, but as long as your data has anything to do with Europeans, even if you're a US company, you have to start thinking about it. So there's an extraterritorial aspect to these European laws. This is by design; it's not a mistake.
[00:08:01] Kevin Schawinski: This is absolutely intended. And so if you're an American company, whether you're a startup or a giant enterprise, you will fall under the AI Act as long as your product or service is available in the EU. For that, it's certainly sufficient that I, sitting in Germany or Spain, can log on to your website and procure a service. And there can even be cases where, if the output from your system has a material effect in the EU, even though both your company and your customer are in the US or Canada, you might also be covered.
[00:08:45] Kevin Schawinski: And so, even though you're not physically located in the EU, this is definitely something you should pay attention to.
[00:08:56] Krishna Gade: Yeah, makes sense. So, let's step back a little bit. People talk about model governance all the time, especially in the last few years. As you articulated, many governments are rolling out guidelines, and many companies advertise themselves as subscribing to responsible AI principles. But what are some of the things people need to think about in model governance when it comes to generative AI?
[00:09:23] Krishna Gade: Because I feel like machine learning has been a well-studied problem, especially when it comes to model governance. But what's your point of view on generative AI, which is different from traditional machine learning? How does one think about model governance for generative AI?
[00:09:39] Kevin Schawinski: So the approach that the AI Act takes, and that a lot of these regulatory frameworks take, is that ultimately it's about consumer protection.
[00:09:49] Kevin Schawinski: And so what's inside the box is less important than what the box does. That said, GenAI brings its own challenges to dealing with questions like transparency and traceability, and of course the challenges of bringing GenAI to production, when you think about guardrails and AI cybersecurity.
[00:10:14] Kevin Schawinski: The more recent regulatory frameworks, both the thinking in the EU and also in the United States, focus specifically on those technical challenges to do with GenAI, and I think there's a lot of good work going on trying to find best practices. So I don't think the laws anytime soon are going to tell you how to use LLMs, but a lot of the best practices and technical standards are addressing exactly that.
[00:10:45] Krishna Gade: So there's one difference in GenAI compared to traditional machine learning, right?
[00:10:48] Krishna Gade: In machine learning, most teams train their model from scratch. They might use libraries like TensorFlow or scikit-learn, but they can train their own model on their own data. With GenAI, you have all these pre-trained models that vendors give you, from Microsoft, Google, all the big players, and open source as well; there are lots of pre-trained models you can leverage and then do your own prompt engineering and fine-tuning on.
[00:11:19] Krishna Gade: Now, how do you then apply governance? And first of all, how do you even regulate this? Because there's this tension between the developer of the pre-trained model and the deployer. Could you talk about those nuances?
[00:11:38] Kevin Schawinski: Sure. So GenAI has additional challenges for exactly the reason you outlined: you're taking a pre-trained model from someone, and then you're building your own product on top of it.
[00:11:54] Kevin Schawinski: And the way the regulators tend to look at it is that if you're building with it, you assume a lot of the responsibility for it. Simply saying, well, I took the pre-trained model from this or that provider, is not going to be much of a defense if your product causes harm. And so in this new regulated AI era, you should think very carefully about whose pre-trained model, or whose "general purpose AI," which is the term the Europeans now use, you're going to build with.
[00:12:26] Kevin Schawinski: The Europeans now require the general purpose AI models, that is, the big pre-trained LLMs, to make certain disclosures within 12 months, so that's about 10 months from now. Among other things, they have to deliver a very detailed model card that ought to include details on what data the model was trained on, how it was trained, and what safeguards it includes.
[00:12:54] Kevin Schawinski: Now, if you're going to be building with those models, you want that information, not just from a technical perspective but also from that regulatory perspective, so that you know exactly what you built with and what risks you took by using the model. And I think a big question is just how detailed those disclosures will really be.
[00:13:15] Kevin Schawinski: And the question I have is whether some of the big GenAI foundation model companies might choose to just not make the disclosure and say, look, it's a model not for use in the EU. That might well happen.
[00:13:30] Krishna Gade: Yeah. So what you're saying, just to give an example: if I'm an enterprise customer, say a bank or an airline operating in the EU, and I'm using one of these pre-trained models, then I have to have a model card that tells not only how I fine-tuned the model, but also how the original pre-trained model was trained.
[00:13:53] Kevin Schawinski: Yeah, because it's a product safety, product liability approach, not a technological approach.
[00:14:00] Kevin Schawinski: So, just as you can't make a toy and put in a component that you have no idea about, you can't build an AI application in your bank or your insurance company where you don't know what's in it. Right, saying, well, I just got it from somewhere, that's not going to cut it.
[00:14:18] Kevin Schawinski: And this change, by the way, goes further. If you take one of those pre-trained models and build an AI product out of it, you wrap it, you have an API, you license that product, and then somebody else purchases it, you pass that responsibility along. You're responsible, and then the person or the company licensing your product also takes on some of the responsibility.
[00:14:45] Kevin Schawinski: So there's a whole new set of liabilities along the AI supply chain, and that will keep going.
[00:14:50] Krishna Gade: So can you articulate a little more what should go into this model card? What should I be prepared to show a regulator who might ask for it? What are some of the aspects I need to think about as a customer?
[00:15:10] Kevin Schawinski: So the OECD model cards and dataset cards are generally considered a very good example of what that should look like. By the way, it's also the template that HuggingFace uses, so if you start from the HuggingFace template, you have a pretty good start. If you're providing one of those GenAI models, there's a whole article in the AI Act that you should study, and there are some things in there where even I still have open questions about what they really mean.
[00:15:42] Kevin Schawinski: And I think one of the things that's going to be contentious is that you have an obligation to disclose what data you trained on, at a level of detail that lets copyright holders enforce their rights. And, of course, that's a very loaded subject.
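To make that concrete, here is a minimal sketch of what such a model card might contain, written as plain Python data in a HuggingFace-style layout. Every field name and value is an illustrative assumption; the AI Act specifies what must be disclosed, not a schema.

```python
# Illustrative only: a minimal, HuggingFace-style model card as plain data.
# All field names and values are assumptions for this sketch.
model_card = {
    "model_name": "example-fine-tuned-llm",        # hypothetical model
    "base_model": "example-provider/base-llm-7b",  # hypothetical upstream model
    "intended_use": "customer-support chat for retail banking",
    "out_of_scope_uses": ["credit decisions", "legal advice"],
    "training_data": {
        "sources": ["internal support tickets (2021-2023)"],
        "copyright_summary": "licensed and first-party data only",
        "personal_data": "pseudonymized before fine-tuning",
    },
    "evaluation": {"toxicity_rate": 0.002, "hallucination_rate": 0.08},
    "safeguards": ["output PII filter", "topic guardrails"],
}

def missing_fields(card, required=("base_model", "training_data", "safeguards")):
    """Return required disclosure fields that are absent or empty."""
    return [field for field in required if not card.get(field)]

print(missing_fields(model_card))  # [] -> this sketch covers the assumed fields
```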
[00:16:01] Krishna Gade: Got it, got it. And so, there was an interesting question relevant to what you just talked about.
[00:16:06] Krishna Gade: If it's a pre-trained model, how does it comply with the EU's right to be forgotten? How are you going to ask the vendor to forget some data, some training examples?
[00:16:20] Kevin Schawinski: This is a fascinating question. I've talked to quite a few lawyers from different European countries about this.
[00:16:27] Kevin Schawinski: I think there's no strong conclusion to this question. So let me backtrack and explain a little of what you're addressing here, Krishna. If I'm your customer, under GDPR I have the right to tell you: look, delete all my data, I don't want you to know anything about me anymore, and you have to comply.
[00:16:46] Kevin Schawinski: So now, if you've trained a hundred-million-dollar or a billion-dollar foundation model on that data, and I come to you and say, you've got to forget everything about me, it's not clear how you would do that. I know that some companies have essentially used guardrails, with filters during inference that try to screen out anyone who has exercised that right.
[00:17:16] Kevin Schawinski: But whether I can ask you to essentially delete me from your model weights is an open question, and we'll have to see what the courts decide.
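A minimal sketch of the inference-time filtering Kevin describes, assuming the operator keeps a list of data subjects who have exercised erasure rights. All names are hypothetical, and note the limitation he points out: this suppresses outputs but removes nothing from the model weights.

```python
# Sketch: inference-time filter for erasure requests. Names are hypothetical.
# This only suppresses mentions in outputs; it does NOT delete anything
# from the model weights.
ERASURE_LIST = {"jane doe", "john q. public"}  # data subjects who opted out

def filter_output(generated_text: str) -> str:
    """Withhold any output that mentions a data subject who requested erasure."""
    lowered = generated_text.lower()
    for name in ERASURE_LIST:
        if name in lowered:
            return "[response withheld: mentions a data subject who requested erasure]"
    return generated_text

print(filter_output("Jane Doe's account history shows..."))
```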
[00:17:25] Krishna Gade: Right, because even if it's a fine-tuned model, you don't have controls over the original data that was used to train the pre-trained model.
[00:17:40] Krishna Gade: So it's going to be interesting to see how this regulation gets implemented. Now, maybe going into a bit more detail on the EU AI Act, let's walk through some of the practical steps. What should an organization do to implement the transparency and responsible AI required by the Act?
[00:18:00] Krishna Gade: For example, things like monitoring, governance, model testing. Could you walk us through some of the practical steps? What are the processes and things that an organization should put in place?
[00:18:15] Kevin Schawinski: Sure. So the first thing everyone should do, as long as they suspect they would be in any way covered by the AI Act, is make an inventory of all the AI applications they have.
[00:18:28] Kevin Schawinski: And if you do that, you'll be surprised by how many more you find than you expected. And here I'll just do a 10-second detour: the definition of AI is not a scientific or technical one, it's a legal one, and very simple algorithms, a linear regression, an if-then-else statement, could be considered AI under these new legal definitions.
[00:18:55] Kevin Schawinski: In fact, the U.S. is more practical here: they increasingly use the phrase "automated decision system," which is much more helpful for realizing what we're really talking about. So you need this inventory, and you need to figure out how risky those applications really are. And one thing I would stress, because there's almost no time left:
[00:19:16] Kevin Schawinski: Find the prohibited applications. The prohibitions kick in at the end of January next year, and if you're found to be operating a prohibited AI system, that's a fine of up to 7 percent of global turnover. The prohibitions you're most likely to be running into today are so-called real-time biometrics and real-time emotion recognition and manipulation systems.
[00:19:43] Kevin Schawinski: So the classic case here is HR interview software where, you know, you see my video here, and on the side it says, oh, Kevin, he's really nervous and he's probably lying right now. Those are probably illegal very, very soon. So make sure you take those out of commission.
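As a sketch of that first step, the snippet below pairs a toy AI-system inventory with AI-Act-style risk tiers. The category strings and the tiering logic are simplified assumptions for illustration; real classification needs legal review.

```python
# Sketch: a minimal AI-system inventory with AI-Act-style risk tiers.
# The category strings and tiering logic are simplified assumptions.
from dataclasses import dataclass

PROHIBITED_USES = {"real-time biometric identification", "emotion recognition in hiring"}
HIGH_RISK_DOMAINS = {"hr", "education", "credit", "justice", "public services"}

@dataclass
class AISystem:
    name: str
    use_case: str
    domain: str

def classify(system: AISystem) -> str:
    """Assign a rough AI Act risk tier to an inventoried system."""
    if system.use_case in PROHIBITED_USES:
        return "PROHIBITED: decommission before the prohibitions apply"
    if system.domain in HIGH_RISK_DOMAINS:
        return "high risk: conformity assessment required"
    return "limited/minimal risk: transparency duties may still apply"

inventory = [
    AISystem("interview-screener", "emotion recognition in hiring", "hr"),
    AISystem("spam-filter", "email classification", "it"),
]
for system in inventory:
    print(system.name, "->", classify(system))
```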
[00:20:01] Kevin Schawinski: Beyond that, the steps you should take: have a multi-stakeholder process to set up your governance. And the most important thing there is an institutional policy on how you're using AI. This should be a document about processes, but also about your values.
[00:20:22] Kevin Schawinski: What do you expect of your AI systems? What do you expect of your developers, your users, and your customers? Then, on the more technical side, the AI Act expects you to have a quality management system. This is analogous to the quality management systems that I'm sure many of the attendees already have in their companies.
[00:20:45] Kevin Schawinski: But it has to be one specifically for AI, so it operates slightly differently from a traditional quality management system. And the other thing you have to have is a risk management system. Again, most bigger companies will have one or more risk management systems, but this has to be an AI risk management system.
[00:21:06] Kevin Schawinski: And this is where your question about the technical side comes in. AI risk management means you have to continuously monitor the risks posed by your AI system, and if you have evidence that a risk is above a threshold, or has moved from managed to not managed, you should act on it.
[00:21:27] Kevin Schawinski: So that means whatever model you have in production, you should be monitoring it and making sure the risks it poses stay managed. These could be risks of discrimination, or of error, or actually even environmental risks, and other types of risk as well, that you constantly monitor and mitigate.
[00:21:51] Kevin Schawinski: And setting that up in practice can be a challenge, but of course it connects to infrastructure you should already have if you deploy AI models, because you should be monitoring your models and you should know how they're behaving.
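A minimal sketch of what that continuous monitoring could look like in code: compare each documented risk metric against its documented threshold on every run. The metric name and the 0.10 threshold are illustrative assumptions, not values from the Act.

```python
# Sketch: continuous risk monitoring against a documented threshold.
# The metric name and the 0.10 threshold are illustrative assumptions.
def check_risk(metric_name: str, observed: float, threshold: float) -> dict:
    """Compare a monitored risk metric to its documented threshold."""
    managed = observed <= threshold
    return {
        "metric": metric_name,
        "observed": observed,
        "threshold": threshold,
        "status": "managed" if managed else "NOT MANAGED: mitigation required",
    }

# e.g. run nightly over production traffic
print(check_risk("disparity_in_error_rate", observed=0.12, threshold=0.10))
```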
[00:22:07] Krishna Gade: Awesome. So that's a great point, right?
[00:22:09] Krishna Gade: So, of course, we are an AI monitoring company, and one of the questions we get from our regulated customers is: what should my thresholds be? How do I monitor? What thresholds would pass regulatory oversight? So, does the EU AI Act (again, I know the answer, but for the audience) prescribe the thresholds?
[00:22:32] Krishna Gade: Like, say: hey, you can't have more than 20 percent hallucinations, or you can't have this much bias in your models, and things like that.
[00:22:44] Kevin Schawinski: I wish there were a nice appendix to the AI Act that gave you quantitative answers to all those questions, and of course there isn't, right?
[00:22:53] Kevin Schawinski: A law is written by lawyers and parliaments, and it has very general language. So the AI Act will say something like: your training data must be sufficiently representative of the groups or individuals affected. That's nice legal phrasing, but as an engineer, as a data scientist, what am I going to do with that?
[00:23:16] Kevin Schawinski: This is a real challenge. So the first principle I would take here is: in the absence of a long record of court cases and decisions, as long as you're engaged in a good-faith effort, I think you're going to be fine. That means having a well-maintained risk management system where you explain: these are the parameters, these are the metrics we're monitoring.
[00:23:42] Kevin Schawinski: And we've decided we just don't want this fairness metric to exceed this threshold, for reasons that might not be too deep, maybe something you determined empirically or that sounds reasonable. As long as you have a process in place, and as long as you show that you adapt, I think in most cases you'll do well if challenged.
[00:24:04] Kevin Schawinski: I think this will change over time, and I think we'll get industry standards in specific verticals.
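One way to document that good-faith effort is to record the metric, the threshold, and the rationale together, so the reasoning is auditable. A sketch, with illustrative values only:

```python
# Sketch: documenting a fairness threshold and its rationale so the
# "good faith effort" is auditable. All values are illustrative assumptions.
fairness_policy = {
    "metric": "demographic parity ratio",
    "threshold": 0.80,  # chosen empirically, see rationale
    "rationale": "Mirrors the US four-fifths rule pending EU-specific guidance",
    "review_cadence": "quarterly",
    "owner": "model-risk team",  # hypothetical owner
}

def within_policy(observed_ratio: float, policy: dict) -> bool:
    """True if the observed fairness metric meets the documented threshold."""
    return observed_ratio >= policy["threshold"]

print(within_policy(0.85, fairness_policy))  # True: within the documented threshold
```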
[00:24:11] Krishna Gade: That's actually a good segue to this new industry standard, ISO 42001, that has emerged. Could you talk about what that standard is, and how an organization can get certified for it?
[00:24:26] Krishna Gade: And what does it mean to get an ISO 42001 certification? How does it protect me from regulatory oversight?
[00:24:33] Kevin Schawinski: So 42001 is called an AI management system standard. If you're familiar with information security, there's ISO 27001 and there's also SOC 2, which, if you run a software company, you go get to prove you're handling information security well. And if you're buying software for an enterprise, you want vendors who can prove they're responsible and proactive about their information security.
[00:25:03] Kevin Schawinski: 42001 is the equivalent for AI systems. It's an ISO standard, which means it's international and not specific to the AI Act, and it basically describes how to set up the management system for AI. This includes the quality management, the risk management, the responsibilities, the taxonomies, all the things you generically need to set up this management system.
[00:25:31] Kevin Schawinski: Now, the good thing about 42001 is that it's a finished standard. It was published late last year, and you can get certified for it today. This is actually one of the things that we at Modulos help you do: build a management system that can be certified, so you can show your vendors and your customers that you have a trustworthy system.
[00:25:53] Kevin Schawinski: Now, is this enough for conformity with the AI Act? That's a political question that doesn't have an answer today.
[00:26:02] Krishna Gade: Got it. And so now, when you're actually setting up these standards, one of the things people ask is: what should I monitor from an AI application perspective?
[00:26:14] Krishna Gade: What are the metrics I need to monitor? Are they specified in these certifications we talked about, or verbatim in the AI Act, or is it something that I, as a data scientist and model developer, need to come up with?
[00:26:32] Kevin Schawinski: You still have to come up with them, based on your use case and your company's mission. These ISO standards tell you about processes; they say, these are the steps you should follow. But they don't tell you that your equalized-odds fairness needs to be consistent with some value. They don't do that, in particular because they are international and generic.
[00:26:56] Kevin Schawinski: That guidance we all want, like what metric do I need to hit, is a very natural engineering question. We all want it, but I think we're years away from having clarity on that.
[00:27:09] Krishna Gade: Got it. And what are some of the most common metrics you're seeing among the customers and organizations you work with as they prepare for the EU AI Act?
[00:27:21] Krishna Gade: What are some of the things that they're trying to monitor and evaluate?
[00:27:26] Kevin Schawinski: So when I talk to customers about fairness metrics, one of the most interesting things that happens is that it puts people on the spot to actually define their values and write them down in code, and it's always fascinating to watch.
[00:27:39] Kevin Schawinski: Because everybody thinks, well, let's just make the model fair. And then you have this discussion: do we prefer equality of opportunity, or do we prefer equality of outcome? And you get to watch these fascinating discussions as companies start to figure out what their values are. Now, in Europe, there's basically nothing to hang your hat on and guide you.
[00:28:00] Kevin Schawinski: What's interesting is that the one country that really has a quantitative standard you can use today is the United States, with disparate impact, which has a long tradition in jurisprudence. So at least that is something you can use in a quantitative way.
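For readers unfamiliar with it, disparate impact is commonly operationalized as the "four-fifths rule": the selection rate for a protected group should be at least 80 percent of the rate for the most-favored group. A worked sketch with made-up approval counts:

```python
# Sketch of the "four-fifths rule" used to operationalize disparate impact:
# the selection rate for a protected group should be at least 80 percent of
# the rate for the most-favored group. The counts below are made up.
def disparate_impact_ratio(selected_protected, total_protected,
                           selected_reference, total_reference):
    """Ratio of the protected group's selection rate to the reference group's."""
    protected_rate = selected_protected / total_protected
    reference_rate = selected_reference / total_reference
    return protected_rate / reference_rate

ratio = disparate_impact_ratio(48, 100, 60, 100)  # illustrative approval counts
print(f"ratio = {ratio:.2f}:", "flag" if ratio < 0.8 else "passes the four-fifths rule")
```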
[00:28:16] Krishna Gade: Got it, got it. And what about other things?
[00:28:19] Krishna Gade: Like, for example, does the EU AI Act talk about the accuracy of models, like hallucinations, which is a very big topic these days with generative AI? And have you seen people track metrics around those things?
[00:28:35] Kevin Schawinski: So I see people tracking a lot of the metrics that go into a confusion matrix, if it's a classifier.
[00:28:41] Kevin Schawinski: I see a lot of people working with LLM guardrails; this is a very hot topic right now. But then we step back and think about the process: what are you trying to achieve? You're trying to build a GenAI application, a chatbot, so you think about what your priorities are, what your risks are, right?
[00:28:59] Kevin Schawinski: That my chatbot is toxic, that my chatbot leaks PII, that my chatbot accidentally sells the company for a dollar. And so by listing those risks and saying, okay, if the chatbot sold the company for a dollar, that would be really bad, let's prevent that, you're led to setting up the guardrail. Then you go back and say, okay, now we've mitigated, not eliminated, the risk of the chatbot doing something we don't want.
[00:29:27] Kevin Schawinski: And that's the process that ISO 42001, but also the AI Act and other standards, want you to follow. Think about what could go wrong here, what technical means we have to mitigate it, and how well that works.
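A toy sketch of that risk-to-guardrail mapping for a chatbot. The regex checks are deliberately naive stand-ins for real PII and policy classifiers; everything here is illustrative.

```python
# Toy sketch: mapping chatbot risks to guardrails. The regex checks are
# deliberately naive stand-ins for real PII and policy classifiers.
import re

def pii_check(text: str) -> bool:
    """Very rough PII heuristic: flags email addresses only."""
    return bool(re.search(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", text))

def commitment_check(text: str) -> bool:
    """Flags language that could bind the company, e.g. agreeing to a sale."""
    return bool(re.search(r"\b(i agree to sell|we will pay|offer accepted)\b", text, re.I))

GUARDRAILS = [("pii_leak", pii_check), ("unauthorized_commitment", commitment_check)]

def guard(response: str) -> str:
    """Block a draft response if any listed risk check fires."""
    triggered = [name for name, check in GUARDRAILS if check(response)]
    return f"[blocked: {', '.join(triggered)}]" if triggered else response

print(guard("Sure, I agree to sell the company for one dollar."))
```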
[00:29:42] Krishna Gade: Got it, got it. So I guess, given that there's no specific set of metrics the authorities want you to monitor, it comes down to the process, right?
[00:29:54] Krishna Gade: So is there an expected process outlined in this regulation? This is one of the audience questions as well: is there an expected process one needs to follow to be regulation-proof?
[00:30:08] Kevin Schawinski: Very much so, and at a high level it looks like this. Think about the risks your system poses to something, let's say equality.
[00:30:18] Kevin Schawinski: You list those risks, and you assess them by how serious they are. How likely is it? How high is the impact? What's the impact on the business if it happens? And once you've assessed, okay, we have a risk here from our system to discrimination, and we need to do something about it,
[00:30:42] Kevin Schawinski: you think about strategies for addressing it. Can we have a simple solution here? Do we need to use a totally different model? You assess the solutions and then you implement them. And you see how you do: if you've mitigated the risks effectively, you say, I've succeeded, and then you monitor it, right?
[00:31:02] Kevin Schawinski: Because we all know models start misbehaving after some time, so maybe after a while the risk comes back. Then you start the process again: what do I do on a technical level to get the model back within parameters? That's the process at a high level.
[00:31:18] Krishna Gade: Yeah, it's very similar to SR 11-7, the model risk management guideline that the banks we work with in the United States follow, which you probably also know.
[00:31:25] Krishna Gade: And they have the same sort of questions: do you have a process to test and verify your models before you deploy? Are you monitoring them? Do you have periodic reports? And then the sophistication of the process more or less earns the company the regulatory blessing. We've often heard from fintechs and banks that the more sophisticated the tooling and processes you can show a regulator, the more satisfied they'll be, in some ways giving you the blessing that you're okay on these regulations.
[00:32:02] Krishna Gade: Cool. So I guess, when it comes to practically getting one of these certifications, you mentioned Modulos does that. Could you talk about how you go about it? What are some of the assessments you do in the process of helping a company get certified?
[00:32:26] Kevin Schawinski: So we give you an out-of-the-box AI management system platform. It has all the control frameworks, it has the risk management system, and it has a way to integrate into whatever monitoring stack you're using. And we give you guidance on starting from the documents you may already have and setting up the system.
[00:32:51] Kevin Schawinski: We're a software company, so many of our customers also engage a consulting partner with experience in this to help them set it up. And then there's a process that's a little similar to other audits: after you've set up the system, you do a phase one audit, then you operate the system for a while, then you can get your phase two audit done, and then you have your certification.
[00:33:21] Kevin Schawinski: And then, of course, you need to maintain it. Nobody's done this for 42001 yet, but presumably every year you'll have to be reviewed by the auditor to maintain the certification. So we provide the infrastructure and the automation for that, and we work in concert with consulting partners and auditors to help customers get there.
[00:33:46] Krishna Gade: Awesome. That's great. So then, switching gears: a lot of people think responsible AI governance is a large-company concern, that you only have to worry about it if you're a very large enterprise. But how do you think the EU AI Act will impact startups?
[00:34:08] Krishna Gade: There are so many new startups coming up with generative applications. What are the opportunities and challenges that both startups and these larger enterprises face in getting ready for this regulation?
[00:34:25] Kevin Schawinski: I think startups and large enterprises have slightly different pain points or things that they're going to find tough.
[00:34:31] Kevin Schawinski: I talk to a lot of startups and they're sort of scared. They think: this is going to cost so much money, so much effort, so much time. How can we do this? Maybe, for the American ones, we should just avoid the EU market for now. And I would actually suggest they look at it the other way around.
[00:34:51] Kevin Schawinski: Having responsible AI and becoming certified is no different from doing it for information security or for data protection. These are quality signals to your buyers and your customers. These are promises to your customers that you are reliable and trustworthy. So instead of seeing it as a blocker, I think the sooner you do it, the more of an edge you have over competitors who won't.
[00:35:19] Kevin Schawinski: And on the other side, you have the enterprise organizations. The first thing they'll do, when they're buying AI from now on, is ask: where is your 42001? Until now, they've asked: where is your SOC 2? And I think
[00:35:35] Krishna Gade: So, just to clarify: if I'm a generative AI startup selling my SaaS software to a large enterprise,
[00:35:42] Krishna Gade: you're telling me that the large enterprise might ask me to show a 42001 certification, just like we get asked for SOC 2 compliance and things like that.
[00:35:53] Kevin Schawinski: This is already happening. There are several large tech companies I'm aware of that have notified their vendors to produce the 42001 ASAP or be dropped.
[00:36:06] Kevin Schawinski: Yeah, and this is the other side, right? As an enterprise in this new era of regulated AI, you want to be sure you know what it is you're buying, and you want to be satisfied that the people behind it, the company behind it, are trustworthy and don't just do it once, but can continue to maintain AI that isn't going to get you in trouble.
[00:36:32] Krishna Gade: Yeah, makes sense. So what that means is it's not just a checkbox you want to complete; it's actually an enabler for your business, right? Whether you're a small company or a large enterprise, this is going to be an enabler for you in the future. So maybe, relatedly: how does the EU AI Act differ in its impact across industries?
[00:36:52] Krishna Gade: Everyone is now impacted by AI: retail, finance, healthcare. Are there specific provisions tailored to each industry, or how does it impact these different verticals?
[00:37:04] Kevin Schawinski: The AI Act is funny here. It has this famous Annex III, which says what's particularly high risk, and it's a strange mixture of things that are highly specific and things that are so generic we'll just have to find out what they mean.
[00:37:19] Kevin Schawinski: So, for example, anything to do with HR, or education, or the administration of justice, or public services, anything the government does, is considered high risk. But then you have provisions like "critical digital infrastructure," and we could have a long discussion about whether something is critical digital infrastructure or not. The way I read it, as long as there could be real-world harm if the infrastructure fails, it could be considered critical.
[00:37:52] Kevin Schawinski: But we don't really know, ultimately, where that line will be. So yes, some verticals are directly name-checked, but these broader categories could mean almost anyone. By the way, this is different from the approach in the United States and also the approach by ISO, where risk tiering by vertical doesn't apply at all.
[00:38:19] Kevin Schawinski: In the ISO standard, you're supposed to figure out yourself how risky you are, and the same goes for the NIST AI Risk Management Framework: you go through it and you see what your risks are. There's no guidance saying, oh, if you're in healthcare, then you need to do this.
[00:38:35] Krishna Gade: Yeah. So then, there's a related question from the audience on determining the risk classification, right?
[00:38:41] Krishna Gade: There are so many conditions, and as you mentioned, the regulations themselves differ in how they approach it. So, for example, even within a vertical: if I'm a healthcare company, I may have models ranging from clinical diagnosis to customer support, right?
[00:39:01] Krishna Gade: So, with those varying risks, how do I go about this risk classification? And how do I get ready for AI regulation in general?
[00:39:11] Kevin Schawinski: Okay, this is perfect as a joint question. I think in general you should have some AI governance in place, and you should be ready to substantiate, for whatever AI system you use, that you understand its risks and that it meets certain quality levels.
[00:39:33] Kevin Schawinski: The legal question of whether you're considered high risk in the EU is of course very consequential, because if you are high risk, you have to go through this thing called conformity assessment, which is basically an audit process; you have to register with the EU, and you have to do a lot of bureaucracy.
[00:39:49] Kevin Schawinski: And of course, if you get that wrong, that could be very bad. The way to think about it is: treat all your applications with some level of care, so that if it turns out you should be in the high-risk category, it's much less work to turn that around than to have your product taken off the market for six months or a year.
[00:40:15] Krishna Gade: Yeah, makes sense. Cool. So I guess: how do you see the EU AI Act influencing global AI standards beyond Europe? It seems like everyone is copying this stuff. What's your take?
[00:40:32] Kevin Schawinski: I used to joke that even the Pope now is calling for AI regulation, because he really did.
[00:40:39] Kevin Schawinski: What I find interesting about the process is that these two assumptions, that we regulate the product and not the technology, and that regulation is risk-based, were copied by basically everyone. There are few places where AI regulation really diverges from that. Actually, one of those places is China, though I know less about the Chinese AI regulations.
[00:41:00] Kevin Schawinski: So the structure is going to be similar in most markets, most countries around the world; the details will differ. I find it interesting how, in some countries, you notice they more or less cut and paste from the AI Act. I would caution against that, because that language is heavily loaded with Brussels insider terminology that is very meaningful in context, and if you cut and paste it into another country's culture and language, it suddenly makes no sense anymore.
[00:41:33] Kevin Schawinski: So I caution against that. One trend we're seeing, particularly in the US, thinking forward and thinking more about GenAI, is a return to more technology-specific regulation, particularly saying: this is how we want you to use LLMs, these are best practices for LLMs. That is, I think, the biggest divergence we're seeing right now between the EU approach and the US approach.
[00:42:00] Krishna Gade: Yeah, so it seems like most people are taking the common parts of the EU AI Act and rolling them out in their own AI acts, so it's becoming kind of the standard across the world. So there's a related question: do you expect EU regulation to adjust for businesses that want to operate in Europe, so they don't have this potentially painful experience?
[00:42:28] Krishna Gade: Or do you feel this is now a baseline that they're going to keep developing the regulation from?
[00:42:36] Kevin Schawinski: I think, and of course predictions about the future are always dangerous, but I think the AI Act more or less gives us a baseline. And I think it will happen similarly to what happened with GDPR.
[00:42:48] Kevin Schawinski: GDPR set the standard for privacy. Some countries then came up with their own proposals, but they are variations on the theme: the CCPA is a variation on GDPR, and I think we'll see something similar with the AI Act. That said, for the keen observer of EU politics: the commissioner who was responsible for passing the Act resigned recently, and in the new von der Leyen Commission, the responsibility for AI has actually been split up between different commissioners.
[00:43:26] Kevin Schawinski: And there's a new phrase going around Brussels that should make our ears perk up, and that's tech sovereignty. I don't know what's coming there, but those of us who are interested in AI and are using AI should pay close attention to what happens.
[00:43:41] Krishna Gade: Awesome. So then, I think the most important question to end this conversation on: what is the timeline for complying with the AI Act, and how should companies prioritize their compliance efforts in the short and long term?
[00:43:55] Kevin Schawinski: So if we're going straight by the EU AI Act, you have until the end of January next year to get rid of prohibited systems and to train your workforce; you have to demonstrate that your workforce is trained to use AI as required for their jobs. Then, in ten months, we have the deadline for the LLM providers to tell us what's in their models.
[00:44:17] Kevin Schawinski: And we have twenty-two months for all high-risk systems to have completed the process of being certified for the EU market. That is, I think, the hardest deadline. From the market perspective, though, I see much shorter deadlines, with these tech companies now saying: we're not going to buy from you unless you're certified.
[00:44:42] Kevin Schawinski: So they're not waiting two years. They want to make sure their supply chain is in order much, much sooner than that.
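For reference, the deadlines Kevin cites line up with the commonly cited milestones counted from the Act's entry into force on 1 August 2024. The snippet below restates them, assuming an October 2024 recording date (consistent with his 22-month figure):

```python
# Sketch: the AI Act milestones Kevin cites, counted from entry into force
# (1 August 2024). Dates follow the published Act; the recording month is an
# assumption, chosen to be consistent with the 22-month figure above.
from datetime import date

RECORDING = date(2024, 10, 1)  # assumed recording month
MILESTONES = {
    "prohibitions apply (plus AI literacy duties)": date(2025, 2, 2),
    "GPAI / LLM provider disclosure obligations": date(2025, 8, 2),
    "high-risk systems must complete conformity": date(2026, 8, 2),
}

for name, deadline in MILESTONES.items():
    months_left = (deadline.year - RECORDING.year) * 12 + deadline.month - RECORDING.month
    print(f"{name}: {deadline.isoformat()} (~{months_left} months away)")
```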
[00:44:50] Krishna Gade: Got it, got it. So essentially, the time to act is now. Cool. I guess that's our session for today. Kevin, if you have any closing comments to wrap up this conversation, please go ahead.
[00:45:07] Krishna Gade: And thank you so much for joining us and giving us these illuminating insights.
[00:45:14] Kevin Schawinski: It's been a pleasure to be here and to have discussed this fascinating topic with you.
[00:45:21] Krishna Gade: Awesome. Thank you so much. And thank you, everybody, for joining this conversation. As Kevin says, it's time to get ready: start getting your AI in order and putting your governance practices in place.
[00:45:35] Krishna Gade: Please reach out to us if you have any further questions; Kevin and I are available by email. Thank you.