Keisuke Nakagawa, MD, is director of innovation at UC Davis Health and executive director of the UC Davis Health Cloud Innovation Center. His job is to look to the future and identify technologies and trends UCD should be learning about and piloting to better understand how they might impact patient care and be integrated into the entire health system over the next three to 10 years.
Dr. Nakagawa's team is currently located in Rancho Cordova but will soon move to the new Aggie Square under construction across from UCD's hospital. The team is also charged with developing new technologies from scratch; its digital innovation group, which includes engineers and designers, works to fill the gaps left by technology not available from vendors.
Dr. Nakagawa, 42, was looking at a career in cardiothoracic surgery but, while still in medical school, started a company called whitekoat that brought medical education into the cloud. He said his company's software mapped the learning curve of every medical condition to help patients, who at the time had only Google and WebMD as references, educate themselves about their conditions. He eventually earned his MD at UC Davis in 2018.
We talked to him about the promises and pitfalls of artificial intelligence in medicine, as well as how it may impact the everyday practices of physicians. This interview has been edited for brevity and clarity.
I would say the big picture is that AI is here to stay. AI is not going to replace clinicians, but clinicians who don't use AI will be replaced by clinicians who do.
And I think the opportunity is to leverage AI so that we can enhance the human touch and the human connection in medicine, because I think we lost a lot of that with the advent of electronic medical records, where you are constantly staring at a computer screen even when the patient is in the room with you.
I think if you look at what AI is really designed to do, it's basically pattern recognition and prediction. That's what it's extremely good at: it ingests a ton of data, finds the patterns, and can be really good at predicting based on the data and the patterns it sees.
If we can let the technology focus on the prediction, the data gathering, and the analysis, it relieves a lot of our cognitive load so we can focus on judgment. I think human judgment is where the magic of patient care is.
I do. The prediction is only going to get better. If you think about it, we're a tenth of a generation into AI. Look at the medical field: medical school is four years, residencies are anywhere from three to five years, with fellowships on top of that, and it probably takes at least five or 10 years of practice to really master. So each generation of clinicians has roughly a 15-year cycle to reach fluency.
AI has only had maybe four or five years, and there's going to be another generation, like the incoming medical students right now, who are starting their knowledge from scratch.
I think the ability to predict accurately is only going to get better and better. For humans, it's a refresh every time. I don't think we need to be scared of something being better at predicting than we are, because we already accept that with things like weather. No one tries to predict the weather from their own observation or their own study of weather patterns. We rely on massive amounts of data to give us predictions about whether it's going to rain.
I see that as a cultural shift in medicine. Obviously, we care a lot more about making the right decision around a human life than about whether it's going to rain or shine tomorrow. But it's really the same concept: something that can store and process more data will often have a significant edge in predicting better.
I think the nuance is that a lot of the predictions still may not be the best decisions to act on. That is where the human role is going to be even more important.
Well, one, I hope that we are bold enough to experiment with the medical curriculum, because we have to. It's going to be really challenging, to my earlier point: AI is not going to replace physicians, but physicians who don't use AI will be replaced by physicians who do. If we don't infuse more data science, more statistics, and responsible AI and bioethics right into the curriculum, the next generation of physicians will not be prepared to integrate AI into their daily practice.
That's an interesting question, because we all want to think that we are treating the patient, but I would argue we're increasingly relying on numbers to treat the patient. I don't think it's binary; it's a spectrum, and we're moving more and more toward treating the numbers as much as we're treating the patient. So I do think it has to be a hybrid model; the numbers are going to help us eliminate a lot of things we don't even need to be considering.
About seven years ago, Google came up with an AI algorithm that can take a retina image and determine whether the patient is male or female, just from the retina image, something a human can't do. At the time we all looked at it and thought, what's the point of that, that's useless! You can just look at the patient and you know the gender. I love that example because, on one hand, engineers can get a little too deep into the technology and sometimes lose sight of the obvious.
But I also think it's fascinating that you can determine gender from a retina when a human can't. We have to recognize that there are signals we are not able to perceive that a point-of-care ultrasound or a retinal scan can pick up. We need to accept that and incorporate it into our clinical practice, because it can enhance the care we deliver. I think we're transitioning from the Osler model of using our hands and senses in the physical exam to get the data we need. Now we're starting to have technology and tools where percussing is not as accurate as a point-of-care ultrasound.
So how do we start to let go of some of the practices that were necessary before the technologies we have today, and embrace the technologies that help us get even more accurate in our diagnoses? Medical schools are probably going to have to reassess the philosophy of teaching, down to maybe the Hippocratic Oath. That's probably here to stay, but what are the values, what kind of medical students are we accepting, and what are the skills we need to emphasize? All of that needs to be revisited with the advent of AI. I think it's just going to really advance our practice.
I think every technology has its own unique pitfalls. I would say that for medicine, you hear a lot about hallucinations with AI and people are freaking out about it. I think that is definitely an important pitfall to consider.
Thinking about equity in health care, these algorithms are quite fluid in how they learn, so they can drift in what they’re predicting based on the data that they’re getting fed.
I think hallucination is fundamentally not a bug but a feature of AI. We are designing AI to be creative, to generate net new content from the patterns in the data it processes. So hallucinating is built into the design.
So I think we have to really ask: is that a pitfall? It likely is in the context of medicine, but it's not necessarily a pitfall in the context of AI. That's something we need really strong governance around.
We in medicine are used to upgrading our knowledge in discrete phases. Look at clinical guidelines coming out of ACOG or the USPSTF: they release guidelines every year or every couple of years. AI doesn't work that way. Models improve their predictions almost continuously as more data is fed to them. That's a completely different paradigm from what we're used to.
If something is helping us make clinical decisions, it’s no longer necessarily a human committee reviewing all the publications from the last three years and then coming up with updates. It is a completely different governance system that we need to create to work with the technology and that’s something that we’re not really prepared to do.
Yeah, I think the liability issue is super fascinating, and it may not necessarily be a bad thing for physicians, because we are the ones who hold a lot of the liability and the insurance. That liability may be passed on to a different organization that isn't the physician anymore.
One recent story involved a passenger on a Canadian airline who asked a chatbot whether they could get a refund or a significant discount for bereavement when canceling a flight. The chatbot gave incorrect information that differed from what was on the website. It went through a whole lawsuit, and in the end the airline was at fault and had to compensate the passenger for not abiding by the policy that was falsely reported by the chatbot it used. So I think that is already an example of how the organization using the generative AI will likely be liable and responsible for whatever it produces.
Now, the liability could also transfer over to Microsoft or OpenAI if that's what I'm using. But I do think that, in any circumstance, the physician will be more off the hook than we are now.
It's just a matter of the technology catching up and the businesses being incentivized, or forced, to make it more interoperable. That's what's been holding us back. Once the AI can see all the data, that's going to be really game-changing. At some point, the knowledge a single human brain holds is going to be tiny relative to a master system that has access to generations of data and can process it all at once.
We really are going to have to rely on, and trust, AI systems a lot more. I think responsible AI and trust are going to be among the biggest themes of this upcoming year, maybe for many more years. What do those even mean? When it's "responsible" AI, who's defining what responsible is? Whose definition of trustworthy are we going to use for it to be trustworthy AI?
I think everybody wants those two things as part of the AI picture, but we still don't really understand what it even means to act on that.
Physicians have the insights and expertise to define concrete use cases and the problems we should be solving, and to do it through a very patient-centered lens. These are things that clinicians and patients know. The engineers building these technologies want more physicians to work with. But I think that, like you're saying, a lot of physicians have no idea how to plug themselves in, and a lot of these committees have no idea which clinicians want to be involved. So there's a fundamental market mismatch, and ultimately physicians aren't really at the table.
SSVMS is a great group to do more articles on AI and to educate clinicians so the technology doesn't feel unreachable. It's more like: how is it going to impact your profession today and this upcoming year, versus in three to five years, and here are specific startups or organizations looking for physicians to help them out. I think those are the best ways for physicians to get involved, even today.
The wrong move is to be overly scared and not be at the table. You know, it's okay to be a skeptic in the room. But it's so important that physicians and clinicians are proactively at the table when these policies are being made, at the institutional level as well as at the state, local, and federal levels. I don't see enough physicians involved in those conversations. There are many different factors: physicians are really busy with their daily practices, and physicians don't understand the technology.
There are so many reasons why physicians are often a minority in these types of conversations. We really do need to be equally represented in these settings, because it's going to be a fast-paced evolution and we need to find the right folks to represent that clinical lens. I think we need to be more interdisciplinary as well, and physicians are often the best advocates for making sure that patients are at the table.
Dr. Nakagawa was interviewed by SSV Medicine Managing Editor Ken Smith.