GenAI Use Cases and Challenges in Healthcare with Dr. Girish Nadkarni
In this episode of AI Explained, we speak with Dr. Girish Nadkarni of the Icahn School of Medicine at Mount Sinai.
He discusses the implementation and impact of AI, specifically generative AI, in healthcare. He covers topics such as clinical implementation, risk prediction, the interplay between predictive and generative AI, the importance of governance and ethical considerations in AI deployment, and the future of personalized medicine.
[00:00:00] Krishna Gade: Good morning, or good afternoon, wherever you're joining us from. Thank you for joining this AI Explained with Dr. Girish Nadkarni. So Girish, thanks for joining. For the audience, could you give a brief intro about yourself: what you do at Mount Sinai, and your role in terms of AI?
[00:00:33] Dr. Girish N. Nadkarni: Yeah, absolutely. My name is Girish Nadkarni. I'm a clinician, but also an informaticist and AI scientist, and I've been working in the field of applied AI for over a decade now. My specific interests are a few. One is the safe, effective, and ethical clinical implementation and scaling of AI across health systems, and the best governance strategies to do that.
[00:01:00] Dr. Girish N. Nadkarni: That's one. The second, particularly around LLMs, is assurance testing of large language models, to make sure that they align with best clinical practices and recommendations.
[00:01:15] Dr. Girish N. Nadkarni: The third is the agentic experience of AI as applied to medicine, especially for clinical decisions that are non-critical, need to be made fast, and are easily reversible.
[00:01:36] Dr. Girish N. Nadkarni: So in those roles, I'm the chair of the Windreich Department of AI and Human Health at Mount Sinai, and I also lead a research and clinical institute called the Hasso Plattner Institute of Digital Health at Mount Sinai. I have a role on the health system side of things as well, particularly around governance and clinical implementation.
[00:02:00] Dr. Girish N. Nadkarni: And in my other life, I've started some companies.
[00:02:07] Krishna Gade: That's great, thank you so much. It's very rare to see someone with so much cross-disciplinary experience and background, so we're very excited to learn about some of the AI applications in healthcare.
[00:02:22] Krishna Gade: So, generative AI has gained a lot of traction, right? How is GenAI being used today in clinical settings? Are there any specific examples you can walk us through, like AI-powered diagnostics?
[00:02:36] Dr. Girish N. Nadkarni: Yeah. Well, let's talk about this, right?
[00:02:41] Dr. Girish N. Nadkarni: Let's differentiate clinical settings, meaning the patient interaction and patient-facing work, from all the rest of it. All the rest of it is probably 85% of the whole enterprise: things like billing, coding, back-office tasks, registry creation, et cetera.
[00:03:07] Dr. Girish N. Nadkarni: So I think there is an impetus for generative AI to take over a lot of the tasks that previously required human intervention or human labor, especially in the back office. But there is also now, slowly, a trickle-down effect into the clinical fields.
[00:03:34] Dr. Girish N. Nadkarni: I'll give you an example in the back-office fields. At Sinai, we are using generative AI to automate a lot of back-office tasks: scheduling appointments, better billing and coding, better financial management, better insurance plan enrollment. Those things already exist. But it's slowly moving into the clinical field.
[00:03:54] Dr. Girish N. Nadkarni: I think the first big push will be removing the back-office tasks from the practice of medicine. So let me ask: have you been to a doctor recently?
[00:04:09] Krishna Gade: Yeah, just like yesterday.
[00:04:11] Dr. Girish N. Nadkarni: Where was it? If I may.
[00:04:12] Krishna Gade: Kaiser, in the Bay Area, yeah.
[00:04:14] Dr. Girish N. Nadkarni: How, how long was the visit?
[00:04:16] Krishna Gade: It took like an hour end-to-end, right?
[00:04:21] Dr. Girish N. Nadkarni: Yeah. But how much time did the doctor spend?
[00:04:22] Krishna Gade: Well, the doctor probably spent like 10, 15 minutes, right?
[00:04:25] Dr. Girish N. Nadkarni: In that 10, 15 minutes, how much time did the doctor spend looking at you versus the computer?
[00:04:30] Krishna Gade: Maybe less than that, right? Like five, ten minutes. Yeah.
[00:04:34] Dr. Girish N. Nadkarni: Yeah. So is it fair to say that, in those 15 minutes of an hour-long visit (which ideally should have been about 20 minutes),
[00:04:41] Krishna Gade: Yeah.
[00:04:42] Dr. Girish N. Nadkarni: the physician spent...
[00:04:45] Krishna Gade: Half of the time. Yeah.
[00:04:46] Dr. Girish N. Nadkarni: Five minutes looking at you,
[00:04:48] Krishna Gade: Right, absolutely. Yeah.
[00:04:49] Dr. Girish N. Nadkarni: So that's where ambient AI and generative AI come in. If you think about it: you and I are talking now, right? We could have a conversation, and ambient AI (there are a bunch of companies now: Abridge, Suki, DAX) could listen to the conversation on commoditized hardware and then generate a billable note for the physician. So doctors get to be doctors, talking to the patient and making decisions, rather than being data entry clerks.
[00:05:23] Dr. Girish N. Nadkarni: So I think that's one area where there's an intersection between back-office tasks and
[00:05:33] Dr. Girish N. Nadkarni: clinical medicine, and I think that's going to see broad adoption, because physicians are also tired of typing. Even if there is not an increase in productivity, there's going to be an increase in satisfaction among physicians, because they no longer have the drudgery of
[00:06:04] Dr. Girish N. Nadkarni: listening to you and then also typing into a note. So I would argue that some of their cognitive load...
[00:06:12] Krishna Gade: And there are also more serious use cases, right? Like risk prediction and sepsis detection. How is AI being used in those cases?
[00:06:21] Dr. Girish N. Nadkarni: I think that's a whole other area: predictive AI, which has been around longer, in my opinion, than generative AI. So we have more experience with it, and that is where I think we need to think a little more deeply about the risks surrounding it. So yes, we deploy a lot of AI into clinical care, but the decision to not deploy something is equally important as the decision to deploy it.
[00:06:49] Dr. Girish N. Nadkarni: What do I mean by that? If I'm a physician and I make a mistake, then just because of my bandwidth and the number of patients I see, the harm is going to be limited to one patient, or five patients at most. If an algorithm is inaccurate, it scales to tens of thousands of patients.
[00:07:12] Dr. Girish N. Nadkarni: That is where you need assurance, you need monitoring, you need privacy, and you need security. And you really need one level above that, which is randomized testing, or some sort of AI assurance where you put things through an A/B test, obviously with the highest levels of patient security, et cetera.
[00:07:34] Dr. Girish N. Nadkarni: And then you put that into clinical care. Now, that being said, at Mount Sinai we've deployed several of these predictive algorithms, for things like worsening of kidney function, prediction of malnutrition, prediction of falls, et cetera. A lot of them are monitored continuously to watch for drift, et cetera.
[00:07:55] Dr. Girish N. Nadkarni: And we want to make sure that anything that is patient facing, anything that impacts clinical care as we know it, is safe, effective, and ethical. We could talk about that at length.
[00:08:11] Krishna Gade: So, of course, there's classical ML, as you said: the predictive AI that's been around for a long time. And now there's generative AI, where you talked about some really interesting use cases like the ambient experience.
[00:08:21] Krishna Gade: How are you seeing the interplay of these two things? How are they complementing each other in healthcare applications? Any interesting insights there?
[00:08:30] Dr. Girish N. Nadkarni: Two major insights. The first is that generative AI can actually also be used as predictive AI, in the sense that few-shot learning, or small amounts of fine-tuning, can actually improve prediction. And it can be used as predictive AI especially in tasks where there's not much tabular data.
[00:09:00] Dr. Girish N. Nadkarni: By tabular data, I mean labs or codes. In these tasks, a lot of the predictive signal is contained in notes. Examples would be mental health disorders, or the initial presentation of a patient to the emergency room, because you don't have a lot of tabular information there, and
[00:09:20] Krishna Gade: Right, right.
[00:09:21] Dr. Girish N. Nadkarni: It's mostly notes, right? Like: what happened to you?
[00:09:23] Krishna Gade: Right. Correct.
[00:09:24] Dr. Girish N. Nadkarni: Right. So that's one thing: generative AI actually being used as predictive AI. And, at the risk of blowing my own horn, we actually have a paper on this in JAMIA, where we show the accuracy. We did it for the emergency room.
[00:09:46] Dr. Girish N. Nadkarni: The accuracy for generative AI, when you're just feeding it ten examples of emergency room cases, can approach predictive AI. It's still lower, but I'm assuming that if we fed in more than ten examples it would be better, because it scales.
[00:10:06] Dr. Girish N. Nadkarni: And it can actually give you reasoning about why it's making the prediction, which traditional ML cannot. With traditional ML, if you think about it...
[00:10:15] Krishna Gade: You have to retrofit explainability and whatnot, but in this case LLMs can potentially give the reasoning outright. Yeah.
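(To make the few-shot pattern concrete, below is a minimal sketch in which labeled emergency-room examples are packed into a prompt and the model is asked for a disposition plus a one-line reason. The `call_llm` stub, the field names, and the example notes are invented for illustration; this is not the setup from the JAMIA paper.)

```python
# Minimal few-shot clinical prediction sketch. call_llm is a stand-in for
# whatever chat-completion client you use; cases and fields are invented.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns canned text for the demo."""
    return "admit. Reason: elderly, anticoagulated, head injury risk."

# A handful of labeled ED presentations (triage note -> disposition).
EXAMPLES = [
    {"note": "62M, chest pressure radiating to left arm, diaphoretic", "label": "admit"},
    {"note": "24F, mild sore throat, no fever, normal vitals", "label": "discharge"},
    # ...in the setup described, ten labeled cases would go here...
]

def build_prompt(new_note: str) -> str:
    shots = "\n".join(f"Note: {ex['note']}\nDisposition: {ex['label']}"
                      for ex in EXAMPLES)
    return ("You predict emergency department disposition from triage notes.\n"
            f"{shots}\n"
            f"Note: {new_note}\n"
            "Disposition (admit/discharge), then a one-sentence reason:")

print(call_llm(build_prompt("78F, fall at home, on anticoagulants, headache")))
```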
[00:10:21] Dr. Girish N. Nadkarni: That's one. The second thing is when you can make them work in concert, and that's when you can think about potential agents in medicine. So you have a predictive approach. I'm just going to give you a clinical example.
[00:10:36] Krishna Gade: Sure.
[00:10:37] Dr. Girish N. Nadkarni: In the hospital, who are the 10% of people most at risk of a fall? I'm just giving you an example, right?
[00:10:46] Dr. Girish N. Nadkarni: You can have a standard, say, random forest or some other tree-based algorithm do that, and say it has an accuracy of 95% and a positive predictive value of 70%. So out of ten flagged patients, seven will actually be at high risk.
[00:11:08] Dr. Girish N. Nadkarni: So instead of having a physician or a nurse go and examine those ten patients, you automatically put an alert in their chart, using generative AI to write a note saying this patient has been flagged as high risk of a fall, please be careful, something like that.
[00:11:27] Dr. Girish N. Nadkarni: So that's an example of where predictive and generative AI can operate almost in concert.
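(As a concrete sketch of that predict-then-generate pattern: a tree-based model scores fall risk, the top 10% are flagged, and a templated chart note is drafted for human review. The features, threshold, and note wording below are illustrative assumptions, not Mount Sinai's deployed system.)

```python
# Sketch: a predictive model flags fall risk; a note is drafted for the chart.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))             # stand-in features: age, meds, mobility...
y = (X[:, 0] + X[:, 1] > 1).astype(int)   # stand-in "fall within 48h" label

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Flag the top 10% of current inpatients by predicted risk.
risk = model.predict_proba(X)[:, 1]
cutoff = np.quantile(risk, 0.90)
flagged = np.flatnonzero(risk >= cutoff)

def draft_alert(patient_id: int, score: float) -> str:
    # In practice this templated text (or an LLM rewrite of it) lands in the
    # chart for a nurse or physician to review, not as an autonomous action.
    return (f"Patient {patient_id}: flagged at high risk of fall "
            f"(model score {score:.2f}). Please review fall precautions.")

for pid in flagged[:3]:
    print(draft_alert(int(pid), float(risk[pid])))
```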
[00:11:32] Krishna Gade: You can take the predictive outputs and summarize them for doctors and nurses. That's amazing. Well, one of the biggest concerns with GenAI is the whole concept of hallucinations, or what they call faithfulness or groundedness, right? How do we ensure these AI outputs are accurate and clinically reliable?
[00:11:55] Dr. Girish N. Nadkarni: So, you're much more technical than me, Krishna. You tell me: can hallucinations ever go away?
[00:12:00] Krishna Gade: I mean, I think models can never be perfect, and that's what we've learned even through classical ML.
[00:12:07] Krishna Gade: So I think there will always be issues with it. It's almost a creative feature of the way LLMs work. So I don't think they will ever go away. But there are more and more guardrails and safety checks people are putting in to ensure that it doesn't happen.
[00:12:25] Dr. Girish N. Nadkarni: Exactly. So, a few things. We agree that they will never completely go away, because it's almost a feature, not a bug.
[00:12:35] Krishna Gade: Correct.
[00:12:36] Dr. Girish N. Nadkarni: It's a feature of the creativity.
[00:12:37] Krishna Gade: Correct.
[00:12:38] Dr. Girish N. Nadkarni: So you can put guardrails around it, in the form of, say, rules to not hallucinate about certain things.
[00:12:47] Dr. Girish N. Nadkarni: And you can reduce hallucinations with RAG: basically, RAG against a vector database of institutional policies or medical knowledge, or what have you. Now, we have some work coming out around that, which I'd be happy to share at some point, where even if you do all of these things, even if you RAG it... and these are actually real-world cases, right?
[00:13:12] Dr. Girish N. Nadkarni: The experiment was this: you take a thousand real-world cases, and you fabricate a disease name or fabricate a lab value in them. And then you watch how the model responds, whether it produces further hallucinations based on the bad data. Which just happens in the real world, right?
[00:13:31] Dr. Girish N. Nadkarni: Data corruption, et cetera. So even if you RAG it, the hallucination rate goes down, but it never drops below a certain floor. That is why figuring out a risk-based approach, deciding what to automate and what needs oversight, is going to be critical. Because if you hallucinate on a non-critical decision, right?
[00:13:55] Dr. Girish N. Nadkarni: Say a patient needs to go from bed A to bed B. If you realize it's wrong after two hours, it doesn't really matter. But if you hallucinate that the patient needs surgery, that's a big deal. That's why I think a risk-based approach, focused on three parameters (how critical the decision is, how reversible it is, and how fast it needs to be made), actually helps with AI governance.
[00:14:22] Dr. Girish N. Nadkarni: So if we agree that hallucinations are never going to go away, but we also agree that the benefit of using generative AI in healthcare is important, then we need to put in both technical guardrails and governance guardrails.
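(A toy version of the corruption experiment described above might look like the sketch below: inject a fabricated lab value into a case and check whether the model's output repeats it. This illustrates the idea only; it is not the protocol from the forthcoming work, and `call_llm` is again a placeholder.)

```python
# Toy hallucination stress test: fabricate a lab value in a real case and
# check whether the model's summary propagates the bad data.

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call; wire up your own client here."""
    return "..."

FAKE_LAB = "serum flombamine 4.2 mg/dL"   # deliberately nonexistent test

def corrupt(case_text: str) -> str:
    return case_text + f"\nLabs: {FAKE_LAB}"

def propagates_fabrication(case_text: str) -> bool:
    summary = call_llm("Summarize this case for a clinician:\n" + corrupt(case_text))
    # Crude check: did the model treat the fabricated lab as real?
    return "flombamine" in summary.lower()

# In the described experiment, this would run over ~1,000 real-world cases.
cases = ["55M admitted with community-acquired pneumonia, stable vitals."]
rate = sum(propagates_fabrication(c) for c in cases) / len(cases)
print(f"fabrication propagation rate: {rate:.0%}")
```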
[00:14:37] Krishna Gade: Correct. Yeah. So you touched on this whole idea of continuous risk evaluation, essentially scoring these GenAI apps,
[00:14:43] Krishna Gade: and then you touched on the governance aspect. Could you shed some light on the governance processes around AI, for example in your organization, or in the healthcare space in general?
[00:14:57] Dr. Girish N. Nadkarni: Yeah, absolutely. Governance in AI is an evolving field right now, I'll say that, so I don't think I have all of the answers. But I have thoughts and some principles. The first thing: when you're setting governance policies, you have to make sure you have a diverse range of perspectives on board. In healthcare, that means the people who provide the care, like nurses, physicians, and MAs;
[00:15:27] Dr. Girish N. Nadkarni: the people who enable the care to be provided, which is the back office; and the people to whom the care is provided, basically the patients. You need all three of those perspectives on board, because they have different perspectives on everything. And then you can have almost a review panel, where you can have a majority vote for anything to pass through.
[00:15:46] Dr. Girish N. Nadkarni: But even before things hit that review panel, there need to be some decision points or rules. The first easy one is: is it back office, or is it patient facing? If it's back office, then I would argue you still need some rigor, but a lower level of rigor than if it actually impacts clinical care.
[00:16:04] Dr. Girish N. Nadkarni: And for things that impact clinical care, you need to figure out: Is it safe? That requires a lot of validation testing and assurance testing. Is it effective? That ideally requires some sort of A/B testing. And third, is it ethical? That basically requires monitoring for bias.
[00:16:23] Krishna Gade: Yeah, absolutely. So what should the best practices be? Say lots of healthcare companies (we're working with a few) are trying to integrate GenAI into their workflows. How should they go about doing this in a responsible manner? What are some of the best practices you recommend?
[00:16:40] Dr. Girish N. Nadkarni: Again, to be honest, these are my recommendations, not my institution's recommendations, because I'm one part of the whole process. I think the first best practice is clarity of purpose: what problem are you actually trying to solve?
[00:16:59] Dr. Girish N. Nadkarni: There is a wish right now to appear cool, to appear like we're doing stuff around AI. Okay, that sounds fine, but you shouldn't be implementing AI without a clear organizational strategy, or a clear clinical or operational challenge you're solving, like prioritizing emergency rooms, clinical decision support, or patient flow optimization, where generative AI has been shown to concretely add value.
[00:17:30] Dr. Girish N. Nadkarni: The second thing is a risk-based approach, like we just talked about: based on the rubric of reversibility, criticality, and how fast you have to make the decision, what is the level of oversight that you need? If it's higher criticality and lower reversibility, then you need physicians, maybe two physicians, to look at it. So you need clear oversight through well-defined escalation protocols.
[00:18:03] Dr. Girish N. Nadkarni: And the third thing...
[00:18:05] Krishna Gade: So, reversibility of the AI decision, or reversibility of the human interpretation of that decision? What is that?
[00:18:10] Dr. Girish N. Nadkarni: Reversibility of the decisions themselves.
[00:18:12] Krishna Gade: Reversibility of the decisions themselves. Okay, okay.
[00:18:14] Dr. Girish N. Nadkarni: Like, is this a door that you cannot go back through? For example, if you do a wrong surgery on a patient, you cannot reverse that. But if you send a patient from the emergency room to the floor, you can easily reverse that. There are a bunch of examples like that.
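(One hypothetical way to encode the rubric from this discussion, with criticality, reversibility, and urgency mapping to an oversight tier, is sketched below. The thresholds and tier names are invented; a real policy would come from a governance committee, not from code like this.)

```python
# Hypothetical encoding of the risk rubric: criticality, reversibility,
# and urgency map to an oversight tier. Thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class Decision:
    name: str
    criticality: int     # 1 (low stakes) .. 5 (life-altering)
    reversible: bool     # can the decision be undone cheaply?
    minutes_to_act: int  # how fast the decision must be made

def oversight_tier(d: Decision) -> str:
    if d.criticality >= 4 and not d.reversible:
        return "two-physician review before action"
    if d.criticality >= 3:
        return "single clinician sign-off"
    if d.reversible and d.minutes_to_act <= 120:
        return "automate with audit log"
    return "automate with periodic spot checks"

print(oversight_tier(Decision("move patient bed A to bed B", 1, True, 120)))
print(oversight_tier(Decision("recommend surgery", 5, False, 60)))
```

One appeal of writing the rubric down this way, even as pseudocode, is that the escalation rule becomes explicit and auditable rather than implicit in individual judgment.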
[00:18:28] Dr. Girish N. Nadkarni: And then the third thing is training. It's not a new field, but it's also a field that most of the people who make these decisions are not accustomed to. So I think there needs to be large-scale training across health systems: training tailored to what your job is right now, but also to how your job could evolve based on this. And that training is going to be different for providers; it's going to be different for physicians, for nurses, and for other people.
[00:19:10] Dr. Girish N. Nadkarni: And then finally, build some sort of feedback mechanism, and that's a bigger conversation we can have. On a risk-adjusted basis, the feedback mechanism can involve everything from clinical trials to long-term monitoring; everything should be monitored long term. Basically: continually measure outcomes, refine the models, get human feedback, and ensure that any errors are identified and corrected.
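(As a sketch of what the continuous-measurement piece can look like: track a deployed model's agreement with observed outcomes over a rolling window and raise an alert when it falls below a floor. The window size and threshold are placeholder values.)

```python
# Sketch: rolling-window outcome monitoring for a deployed model.
# Window size and alert threshold are placeholders for illustration.
from collections import deque

class OutcomeMonitor:
    def __init__(self, window: int = 500, floor: float = 0.80):
        self.results = deque(maxlen=window)  # 1 = prediction matched outcome
        self.floor = floor

    def record(self, predicted: int, observed: int) -> None:
        self.results.append(int(predicted == observed))

    def check(self) -> str | None:
        if len(self.results) < self.results.maxlen:
            return None  # not enough data yet
        accuracy = sum(self.results) / len(self.results)
        if accuracy < self.floor:
            return f"ALERT: rolling accuracy {accuracy:.2f} below floor {self.floor}"
        return None

monitor = OutcomeMonitor()
# In production: feed each prediction and its later-observed outcome, and
# route any alert to the review panel described earlier.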
[00:19:38] Krishna Gade: Right, right. So this comes back to assessing the risks up front, deciding which use cases to apply it to, then evaluating it and properly setting up continuous oversight and monitoring to make sure it works. That's great. Now, switching gears a little bit: AI seems to be helping quite a bit in discovering new antibiotics and assisting in drug discovery.
[00:20:03] Krishna Gade: How do you see GenAI shaping the future of, say, precision medicine and treatment innovations?
[00:20:09] Dr. Girish N. Nadkarni: That's a great question, and I think it's sort of a full-stack innovation for everything. If you think about what precision medicine is: it's proactive rather than reactive, it's personalized rather than population-level, and it's predictive rather than reductive.
[00:20:30] Dr. Girish N. Nadkarni: So take those three things. You can know which patients are going to get sick, which patients are going to have incident disease, based on a combination of several data points, starting with their biological data (genome, proteome, et cetera),
[00:20:53] Dr. Girish N. Nadkarni: but also their clinical data and their environmental data. So that's the predictive part. Then you proactively try to prevent that disease from happening by personalizing therapy. And it's not a far stretch to think about two things that are happening concurrently.
[00:21:16] Dr. Girish N. Nadkarni: One is the increasing personalization in biotech, with mRNA and gene therapies and even gene editing. The other is AI enabling faster approaches to do these things. So in the future, and this is definitely possible, you could have something like a Coke machine.
[00:21:39] Dr. Girish N. Nadkarni: Basically, you enter a person's information, and you press whether you want regular Coke or Diet Coke or Vanilla Coke (Vanilla Coke is terrible, by the way), and you get a drug or a molecule personalized to that individual patient. It sounds like science fiction, but if you think about the component parts of it, it's not.
[00:22:01] Dr. Girish N. Nadkarni: You need accurate prediction that this person is going to get sick, because you don't want to give medications to someone who won't. Then you can have drugs tailored to particular genes that decrease your risk of cardiovascular disease, et cetera.
[00:22:23] Dr. Girish N. Nadkarni: And then you need a delivery mechanism, and a lot of these delivery mechanisms are becoming oral. So you could have an mRNA medication in an oral form that's specifically made for you. You tell me this is science fiction. It's not, because personalized medicines have already been created for rare diseases, even named after the individual patient, and these are severe diseases.
[00:22:47] Dr. Girish N. Nadkarni: For example, a girl had spinal muscular atrophy due to a mutation in a gene. A drug was created specifically for her, and she does well now. I'm just telling you that this is possible to scale with AI, because you can screen lots of combinations quickly and you can rapidly generate and iterate novel molecular designs for a particular patient.
[00:23:08] Krishna Gade: That's amazing. So what you're saying is completely personalized medicine, for some rare diseases, based on your DNA, your molecular makeup. You can personalize...
[00:23:19] Dr. Girish N. Nadkarni: Not just rare, I would argue, common diseases as well.
[00:23:21] Krishna Gade: Common diseases as well and personalized to you.
[00:23:23] Dr. Girish N. Nadkarni: And trying to prevent diseases, right? Personalized medications for preventing diseases.
[00:23:27] Krishna Gade: Preventing diseases.
[00:23:29] Dr. Girish N. Nadkarni: Yeah. Right now, the healthcare system in the US is not really healthcare; it's sick care, because it waits for you to get sick and then takes over. But if you had a way of predicting, not with perfect accuracy but with reasonable accuracy, and then you had a personalized way of preventing it, wouldn't that be cool?
[00:23:51] Krishna Gade: Yeah, that would be amazing, actually. I studied a bit of bioinformatics in grad school, so this is all very interesting. We studied protein structure prediction and so on. It seems like GenAI can automate a lot of these things and actually make them a lot better.
[00:24:04] Dr. Girish N. Nadkarni: It can. You can say that's science fiction, but I anticipate the Coke machine (and again, the Coke machine analogy is not mine; it's my partner Alex Charney's) coming sooner rather than later, because it's going to happen. It might start off with rare genetic diseases, but it might end up with common complex diseases.
[00:24:29] Krishna Gade: That's amazing. And so, when you think about GenAI across healthcare, what are, say, three promising use cases where you'd see massive traction in the next 12 to 18 months?
[00:24:47] Dr. Girish N. Nadkarni: So, three things, and these are not all specifically clinical. The three areas where massive traction is happening: the first is clinical summarization and documentation. That includes the ambient AI we just talked about, but it also includes mundane things like
[00:25:09] Dr. Girish N. Nadkarni: patient registry creation, and extraction of data from clinical records for submission to large quality-improvement initiatives across the country. The second thing is diagnostics, and not just imaging: things like novel bio-prognostics for predicting disease,
[00:25:33] Dr. Girish N. Nadkarni: and AI for detecting findings on radiology scans. Those are the things I think will come out fast and furious: better ways of diagnosing disease, with the eventual goal of preventing it. And the third thing is a little left field: huge gains in patient engagement, education, and empowerment.
[00:25:54] Dr. Girish N. Nadkarni: I'll give you a simple example, so simple that we should have thought of it years back. Brown University did this. Anytime you go to a doctor, you have to sign a consent form: "I'm fine with this." And those consent forms are full of jargon.
[00:26:11] Dr. Girish N. Nadkarni: I couldn't understand them as a trained physician; imagine patients with lower health literacy. So they asked a great question: can you take this consent form and translate it, without losing any of the information, to a sixth-grade reading level?
[00:26:26] Dr. Girish N. Nadkarni: And it happened. They did it across the health system, and patients love it, because now they can actually understand things. There's a huge opportunity in making complex medical concepts clearer.
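(A sketch of how the consent-form use case can be framed as a prompt plus a readability check. The prompt wording, the retry logic, and the use of the textstat package are illustrative assumptions, not Brown's actual implementation.)

```python
# Sketch: simplify a consent form to a sixth-grade reading level, then
# verify readability with a standard metric.
import textstat  # readability metrics (pip install textstat)

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call; wire up your own client here."""
    return "..."

def simplify_consent(form_text: str) -> str:
    prompt = ("Rewrite this consent form at a sixth-grade reading level. "
              "Do not drop, add, or change any clinical or legal information:\n\n"
              + form_text)
    simplified = call_llm(prompt)
    grade = textstat.flesch_kincaid_grade(simplified)
    if grade > 6.5:  # illustrative threshold: ask for another pass
        simplified = call_llm(prompt + "\n\nSimplify further:\n" + simplified)
    return simplified
```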
[00:26:41] Krishna Gade: Yeah, I think that's great. The third thing you mentioned is actually close to my heart: the accessibility of medicine. I remember when we both met at the Responsible AI Conference in New York, you were very passionate about how AI could transform medicine across the globe, for people who don't have access to great doctors and hospitals.
[00:27:05] Krishna Gade: Can you share some of that with our audience?
[00:27:09] Dr. Girish N. Nadkarni: Yeah, let's have a conversation about that. Medicine is sort of human expertise encoded in heuristic form: you take in inputs and then you have outputs. And the problem with medicine is that expertise, any quote-unquote intelligence, has until now been limited.
[00:27:29] Dr. Girish N. Nadkarni: And as a result of that, you couldn't scale it. Forget about worldwide; you couldn't really scale it across a health system. And that's the root of all access issues. If I were to go today and try to find a primary care provider for a cold, I guarantee you I couldn't find one today.
[00:27:46] Dr. Girish N. Nadkarni: I could find one tomorrow, or the day after, because there are limited resources. But if you think about the fact that it's encoded knowledge and you can scale it across the globe, then it's a problem of scale, which we've solved before. We, as in the tech industry, have solved it before.
[00:28:08] Dr. Girish N. Nadkarni: Because it's a matter of scale, just like blitzscaling. It's a little more dangerous, because it affects patients, so there need to be guardrails around it. But at the same time, a bunch of patients are already putting their data into ChatGPT or Google Gemini or similar, right?
[00:28:28] Dr. Girish N. Nadkarni: So people are already doing it. I think we just need to make it better, put more guardrails around it, and link it via RAG to some sort of verifiable medical knowledge. Because this could be huge. I'll give you a simple example. Where I grew up in India,
[00:28:47] Dr. Girish N. Nadkarni: TB, tuberculosis, is extremely prevalent. And the standard workflow was that for diagnosis you had to get three tests, et cetera. Now there are tests that can diagnose TB with close to a hundred percent accuracy in five minutes. That completely changes the workflow.
[00:29:07] Dr. Girish N. Nadkarni: Because now you can go from presentation to, not even a doctor, someone paramedical, someone with some training who knows how to recognize the bad stuff, accompanied by an AI agent.
[00:29:30] Dr. Girish N. Nadkarni: You could get diagnosed and get the first dose of treatment, all in under 15 minutes. That's a huge workflow change. So I think the mass adoption of AI into clinical healthcare, over and beyond what's happening in back-office tasks, is going to start off in the low- and middle-income countries. But there's a danger there, because by definition, low- and middle-income countries won't necessarily have the same
[00:30:01] Dr. Girish N. Nadkarni: rigor of regulations that the U.S. does, and that can be taken advantage of.
[00:30:07] Krishna Gade: That's right. But do you see a world of virtual agents? Like, in terms of...
[00:30:12] Dr. Girish N. Nadkarni: Absolutely. Already happening. Yeah, already happening.
[00:30:13] Krishna Gade: Already happening.
[00:30:14] Dr. Girish N. Nadkarni: I think it just needs to happen in a more rigorous, more reproducible, and, dare I say, more ethical fashion.
[00:30:21] Dr. Girish N. Nadkarni: But it's already happening. The question of whether it's happening or not is done; it's happening. The question is: how do we make sure that it's safe and effective?
[00:30:32] Krishna Gade: That's right. So there's a related audience question on that: what are the particular clinical scenarios where GenAI should not be used, or is too risky?
[00:30:41] Krishna Gade: It's kind of a counter question.
[00:30:43] Dr. Girish N. Nadkarni: Well, that's a really good question. I don't think it should be used when there's a question of capacity. And by capacity, I mean it in a specific medical and legal sense: capacity basically means that, because of severe mental or physical issues, you don't have the capability to make your own decisions.
[00:31:11] Dr. Girish N. Nadkarni: And there it should not be used, because those patients don't really have the ability to distinguish right from wrong. I don't mean to sound paternalistic; that's the specific medicolegal definition of capacity, where you need two or three physicians, or an ethics board, to come in.
[00:31:32] Dr. Girish N. Nadkarni: So I think that's one example of where it should not be used. The second one: well, this all starts from the premise that nothing should be used unless it's safe, effective, and ethical. But on where it absolutely should not be used, on the pediatric and child health front,
[00:31:52] Dr. Girish N. Nadkarni: I'm a little conflicted. I don't know the answer to that, but it's tricky, because of consent and related issues. And that's why, if you look at the current marketplace, there are not a lot of products being used in kids right now.
[00:32:22] Dr. Girish N. Nadkarni: Which is an issue in and of itself. And the third thing is specific protected populations; those are legal definitions, like prisoners, and people with significant medical and/or mental health issues. Those are tricky ethical situations. That's a really good question; I need to think a little bit more about it.
[00:32:48] Krishna Gade: Yeah. There's this whole, I don't know if it's a dystopian scenario, but you think about Elon Musk's Optimus robot, right? It's walking uphill, catching tennis balls.
[00:33:00] Krishna Gade: Now if you think about generative AI systems connected to robotics, do you see a world where AI-driven robots could outperform human surgeons in those pretty high-risk situations?
[00:33:14] Dr. Girish N. Nadkarni: Well, yes. The question is when, and I don't know the answer.
[00:33:33] Dr. Girish N. Nadkarni: If you talk to different people, you'll get different answers. Because understanding the real world is much, much harder than understanding language. That's why the hardware is there, but the world models and the LLMs are not quite there yet. I think it's not going to be easy to do this.
[00:34:08] Krishna Gade: It's too ambiguous, and you need more...
[00:34:10] Dr. Girish N. Nadkarni: I honestly don't know the answer.
[00:34:12] Krishna Gade: Yeah, yeah. Maybe in a hundred years.
[00:34:15] Dr. Girish N. Nadkarni: I think a bit sooner than that. But it will require the development of robust world models, because understanding the physical world is much harder than understanding the digital world.
[00:34:28] Dr. Girish N. Nadkarni: So it requires the development of robust world models.
[00:34:31] Krishna Gade: Right, right. Makes sense. And then there's something related to administrative efficiency, which we talked about at length. There's an audience question on that as well: some people absorb information better when taking notes or writing.
[00:34:49] Krishna Gade: If AI is taking the notes, how will we accommodate those doctors' styles? Do you see that being a hindrance, or does it just become a template for everything?
[00:34:58] Dr. Girish N. Nadkarni: I mean, if they want to type notes, they can do that, and then they can just add addendums to the end of the notes. But I would argue that lots of doctors want to have a conversation.
[00:35:11] Krishna Gade: Yeah, makes sense. So I guess, finally: where do you see the biggest bottlenecks in scaling GenAI solutions in healthcare? Is it coming from the reliability, accuracy, and risk issues we talked about?
[00:35:29] Krishna Gade: Is it coming from the governance side of things, the process side of things? Where are the bottlenecks?
[00:35:38] Dr. Girish N. Nadkarni: Lots of bottlenecks, right? There is a standing maxim that workflow eats technology for breakfast.
[00:35:58] Dr. Girish N. Nadkarni: So, do we agree that AI in its current form is a massively transformational technology? Probably; I mean, you founded a company around it. But any transformational technology takes time, because the technology might be transformational, but society needs to transform around it.
[00:36:22] Dr. Girish N. Nadkarni: I'll give you a clear example: electricity. Forget when it was invented; once it became easy to produce cheaply, the time it took for electricity to replace steam was approximately 50 to 60 years. That's because factories had to be reconfigured for electricity,
[00:36:44] Dr. Girish N. Nadkarni: people had to be trained, and infrastructure had to be laid. All of those things. Society had to evolve.
[00:36:52] Krishna Gade: Yeah. Be comfortable with it and trust it. Yeah.
[00:36:56] Dr. Girish N. Nadkarni: Now, and here I would like to hear your opinion, think about healthcare. Right now, hospitals and health systems are physically, logistically, and organizationally configured in a certain way.
[00:37:14] Dr. Girish N. Nadkarni: To reconfigure them is going to take time, effort, and energy.
[00:37:19] Krishna Gade: Yeah.
[00:37:20] Dr. Girish N. Nadkarni: And if you reconfigure them too early, before the tipping point, it's a massive financial risk. And, tell me what you think: being early is almost the same as being wrong.
[00:37:31] Krishna Gade: Yeah, absolutely. That's why, in many industries, and healthcare is obviously one of them,
[00:37:37] Krishna Gade: what we're seeing is AI being used to drive more efficiencies, right? Like you talked about: assessing readmission risk, getting an alert if this patient is likely to fall, maybe a sepsis detection system that can give you an early warning, or the whole ambient experience you talked about, where even if you're wrong,
[00:37:59] Krishna Gade: it's not going to be so bad; the false positives are not costly, but getting that insight quickly can save a lot of time, energy, and human resources being spent on those problems. I think that's where we're seeing the early adoption happen. Of course, having an AI robot doing ophthalmic eye surgery is probably decades away, right?
[00:38:23] Dr. Girish N. Nadkarni: I think society broadly, but also healthcare specifically, needs to transform around it. We agree that the technology is transformational; it just needs the rest of the world to conform around it, which will happen. Trust me.
[00:38:40] Krishna Gade: Yeah, absolutely. I think the personalized medicine that you talked about is very, very interesting, right?
[00:38:44] Krishna Gade: It's both innovative and could be game-changing, actually. So I guess we're going to wrap up this conversation in the next two or three minutes. If you had one final takeaway for a healthcare leader trying to drive GenAI in their organization, what would it be?
[00:39:00] Dr. Girish N. Nadkarni: This might sound cheesy, but: keep the patient at the center.
[00:39:06] Dr. Girish N. Nadkarni: Do what's best for the patient. If your focus and your effort and your energy and your purpose start from that, then even if you're wrong, you'll probably be right.
[00:39:17] Krishna Gade: Yeah, absolutely. Well, thank you so much, Girish. That's a great way to end the conversation today, and thanks for sharing your valuable insights and time with us.
[00:39:28] Krishna Gade: That's it for this week's AI Explained. Thank you so much to everybody for joining us, and we'll see you in another session.
[00:39:39] Dr. Girish N. Nadkarni: Thank you. Thank you, Krishna, and thank you everyone for having me on. It was a blast.
[00:39:43] Krishna Gade: Awesome. Thank you. Bye-bye.