Broadcast Retirement Network’s Jeffrey Snyder discusses how artificial intelligence is changing the relationship with your physician with bioethicist and pediatrician John Lantos, MD.
Jeffrey Snyder, Broadcast Retirement Network
Joining me now, Dr. John Lantos. Dr. Lantos, so great to see you. Thanks for joining us this morning.
Pleasure to be here. I am really excited to talk to you about this. And as I was saying in the virtual green room, AI has crept its way into everything.
I’m not saying it’s good or bad. I’m sure you may have some opinions. But my first question to you is, how has AI been integrated in healthcare?
And has it gotten in the way of the relationship between the primary caregiver, the doctor, and their patient?
John Lantos, MD, Bioethicist and Pediatrician
There are really two distinct generations of AI in healthcare. One came before the more recent large language models, ChatGPT, Claude, all those, when a more primitive version of AI was being used to read cardiograms, interpret radiographs, help with electronic health records, give doctors reminders to follow clinical practice guidelines, and that sort of thing. So in that sense, AI of one sort or another has been around for 30 or 40 years.
But these last couple of years have seen the exponential growth of the more interactive, human-like large language models. And the only accurate answer to that question is nobody knows how much it has permeated medicine because it’s available to everybody on their phones. So patients are clearly using it.
Doctors are clearly using it. Some doctors are advocating for more use. There’s no regulation.
It’s not an FDA-approved product. So we just don’t know.
Jeffrey Snyder, Broadcast Retirement Network
You mentioned that, just to paraphrase, AI in some way, shape or form has been around for 30 or 40 years, assisting doctors. Google has been around for a long time. A lot of patients go into Google and kind of do the same thing and say, what is this?
And I’m sure doctors probably don’t like that as much, because they want to do the diagnosis. How important is the relationship between the human doctor and their patient? I would argue they’re the ones who went through medical school, which is very rigorous, and they’ve been working for years and years.
It just seems like it’s a pretty important relationship to maintain.
John Lantos, MD, Bioethicist and Pediatrician
So one of the fascinating things to me about what the large language models are doing is they’re allowing us to ask that question in a way that we never could before. You’re right. For the last 25 years, doctors have disparaged patients who would go to Dr. Google and come in and say, I think this mole is a melanoma. I pulled up an article and it said blah, blah, blah. That world is gone. The other way that people use social media, digital media, rather than AI in particular, was through connecting with each other in groups of people with unique problems or particularly rare diseases.
So in my experience, even 20 years ago, people with rare genetic syndromes would find each other around the world and create Facebook groups or wikis or all sorts of things. The big difference now, though, and the reason your question is pertinent now, is that it’s no longer a one-way tapping of information. When you do a Google search, you put in a query and it generates some stuff that may or may not be relevant.
You can’t question it. You can’t have a conversation. You can’t push the issue.
Whereas now you can get an answer and you can say, wait, I don’t understand that. Can you explain the part about my electrocardiogram a little better? Explain it the way you would to an eighth grader.
Ah, what does that mean? And the kinds of conversations that used to be unique to a human interaction in the examining room now can take place online.
Jeffrey Snyder, Broadcast Retirement Network
You mentioned that maybe some doctors are advocates for this type of technology. Others are not. Let’s talk about the American Medical Association.
Do they have, and I’m not saying you’re a representative, but do they have a position? Do most doctors at least agree that we need guardrails?
Sure, use the technology, but we need to come up with some uniform standards across the industry. Much like any industry, manufacturing, financial services, you’ve got to have some guardrails to protect the doctors, but also the consumers.
John Lantos, MD, Bioethicist and Pediatrician
So to say that we should have guardrails is, I think, a widely held, almost universally held opinion. But it masks disagreements. It raises the question: which guardrails?
What should the regulations say? Things like privacy are the low-hanging fruit. Of course, these tools should not be publicly disclosing personal health information.
But the more tricky problem is figuring out a way to evaluate the accuracy of a particular large language model or generative artificial intelligence. Because the nature of the product is that it’s constantly changing and learning with every bit of new information that comes along. So you can evaluate yesterday’s large language model, but it won’t give you the same answer as today’s.
So thinking about regulation in the traditional way, asking whether this product is safe and effective, just isn’t going to be applicable here.
Jeffrey Snyder, Broadcast Retirement Network
Yeah, I mean, I think you make a lot of sense. As you were talking, I was thinking, do you have AI to police AI? But then you get into this whole thing of, how can you trust the AI that’s evaluating the AI? I don’t want to be all doomsday.
I don’t think that’s helpful. Certainly the technology is helpful, but maybe the answer is making the consumer and the doctor better arbiters of what they should and shouldn’t think about, especially when it comes to medical decisions.
John Lantos, MD, Bioethicist and Pediatrician
I mean, what the AMA is proposing and what I think a lot of leaders in the field are talking about now is developing new kinds of hybrid methods of delivering care where doctors and patients both collaborate with AI in a way that might resemble the way that we would collaborate with online searching before. If you were my patient and you came to me and said, look, I’m having these four symptoms and I talked to ChatGPT or Claude or whatever about it. And it said, this might be eczema or it might be psoriatic arthritis.
And I’ll say, what other symptoms are you having? And maybe type it into my large language model. We’ll pull up pictures together of something that looks like your rash and say, does it look like this?
Do you have these other symptoms? And the large language models will be like a 24/7, always-available consultant that both of us can use, which will hopefully lead to better communication and better treatment.
Jeffrey Snyder, Broadcast Retirement Network
Listen, I’ve got about a minute left. I guess my last question is, do you ever envision a point where we have, like on the starship, what was it, Voyager? Yeah, Star Trek: Voyager, where they had the doctor that was really an avatar. And I forget the actor.
Oh, it was Robert Picardo, I think his name is. He would portray the doctor, and the doctor could treat almost any malady.
Are doctors going to go away, or is what we’re talking about here really more just integration?
John Lantos, MD, Bioethicist and Pediatrician
Hard to say at this point. Well, let me give you an example from outside of medicine.
Jack Clark, who’s the co-founder of Anthropic, was recently interviewed by Ezra Klein on a New York Times podcast about how large language models might affect job creation or job growth. Clark said Anthropic is no longer hiring coders, who used to be a big chunk of their workforce.
They don’t need them anymore because AI does it better. So all the basic coding is now done by machines. They are still hiring, though.
They’re hiring people who can look at the product of the machines and evaluate, A, whether it’s of high enough quality and, B, what it could be most useful for or how it could be combined. Something similar, I think, is likely to happen in medicine, where a lot of the tasks that doctors were once most valued for will be done better by machines. And when that happens, the only thing that would keep doctors in the loop would be the political power that we command to say, you’ve got to pay us for the work the machine is doing.
But that makes no sense in terms of quality or outcomes or access.
Jeffrey Snyder, Broadcast Retirement Network
But there is one thing I rely on, Doctor. I do have to close out, so maybe 30 seconds on this, and maybe it’s not a 30-second answer, so we’ll have to bring you back.
But there’s ethics. A doctor swears the Hippocratic Oath, or at least they used to; I don’t know if they still do. A machine doesn’t.
It can have parameters, but it lacks that qualitative ability to navigate things like ethics. So wouldn’t the doctor still be there from an ethical point of view?
John Lantos, MD, Bioethicist and Pediatrician
As with all your other questions, AI enables us to ask questions that we never used to be able to ask before. So to whom, or to what, is a large language model committed ethically? Is it to the patient’s wellbeing?
Is it to the doctor’s revenue? Or is it to the revenue of the company that built the AI? Or in the science fiction versions, does it strike out to find its own freedom?
We don’t know.
Jeffrey Snyder, Broadcast Retirement Network
Well, it’s certainly TBD, to be determined. And like I said earlier, we’re going to have to bring you back to talk more about it. Dr. John Lantos, great to see you. Thanks for joining us. Great article. And we look forward to having you back on the program again very soon, sir.
Thank you. I’d love to be back.