Bob Wachter's Viral Tweet and Thoughts on AI in Medicine

Eric J. Topol, MD; Abraham Verghese, MD; Robert M. Wachter, MD


July 25, 2023

This transcript has been edited for clarity.

Eric J Topol, MD: Hello. I'm Eric Topol, and I'm with my co-host, Abraham Verghese. We have the real privilege today to have a conversation with Bob Wachter, chair of the Department of Medicine at UCSF and a friend of both of ours. We were going to talk about AI, but something else has happened. I'll turn it over to you, Bob, to tell us a bit about the viral tweet about your recent experience with COVID.

Robert M. Wachter, MD: I was just trying to see if anybody was still on Twitter, and I guess the answer is yes. Three and a half years into this pandemic, I was still a NOVID, which I thought was a combination of vaccination, being fairly careful, and dumb luck.

I had a feeling that at some point the gig would be up. And the gig was up for me 8 or 9 days ago. I developed a cough and then a sore throat and had a pretty miserable night where I was very febrile and sweating profusely. I got up in the morning and felt like crud and decided to take a shower, which seemed like a reasonable thing to do.

The next thing I knew, I woke up on the floor of my bathroom in a puddle of blood, looked at the garbage can next to the toilet, and it had a head-shaped impression in it, where obviously my head had struck. I called my future son-in-law, who is an intern in medicine in my program. I said, "Joe, I think I probably need to go to the hospital."

I thought I just needed stitches, and it turned out it was significant head trauma. By the time it was done, I had a subdural hemorrhage, a C3 vertebral fracture (luckily, nondisplaced), and I needed about 30 stitches. And, of course, I had a diagnosis of syncope, which I think was vasovagal. As I've told many friends, only I could figure out a way of finally having COVID and having it be number five on my ER problem list.

Abraham Verghese, MD: I'm so glad that you got away with as little trauma as you did. It could have been a lot worse. I must say, I was taken again with your optimism. Katie, your wife, tweeted that your first remark to her was, "Don't tell the dog," which I thought is so typical of you.

Wachter: I've always had the attitude that if you can't laugh about it, it's not even worth talking about. People have been lovely. When I put it on Twitter, it was partly because I, like the three of us, am always looking for teachable moments.

And I thought this teachable moment was that COVID is still around; people are still going to get it. They can still get pretty darn sick even if they've been vaccinated. I thought that maybe a few hundred people would notice my tweet. And then it started going viral. CNN called and said, "We'd like to run a story." And then the next day, it appears on their homepage just below Wimbledon. Kind of pissed me off; I thought it was a better story than the Wimbledon final.

There was the usual foolishness on Twitter: "You deserve this; this is what you get for not being careful," etc. But for the most part, people have been magnificent. It's been gratifying in a way that I would have preferred avoiding.

Topol: You are healing quickly. It's gratifying to see your optimism, your ability to joke, and your willingness to share, as you say, an extraordinary teachable moment. At the same time, yesterday in The New York Times, David Leonhardt [wrote that] "the pandemic really is over." And unfortunately, a lot of people think that this virus isn't even out there anymore.

Wachter: David's article was good. The statistics are the statistics. The death rates are no longer different from the usual death rates at this time of year. That is a remarkable achievement and says something about the state of the pandemic and the state of immunity, either from vaccines or from infection or both. And it's worth celebrating. It's worth going back to something that feels a little bit closer to normal than we've lived for the last 3 or 4 years. But you have to do it with your eyes open.

When I decided a few months ago that I was now going to be willing to eat indoors, for example, I did it with my eyes completely open. And I said there's a moderate chance that at some point I'll get COVID. But the cost of not doing that seemed too high. And I also had this feeling, which I continue to have, that it's probably not going to be any better than it is now 3 years from now, 5 years from now, and it probably could be a little bit worse.

This is the new normal, and COVID will now be baked into the list of day-to-day risks that we all have. And all of us have to come to some sense of clarity of how we are going to live our lives in a way that's fulfilling and maximizes joy. For me, I decided to be a little less careful. I still have been masking on airplanes, knowing that at some point I was probably going to get COVID. I didn't think I'd get it quite like this. But it just shows that it can do bad things to people. I'm not a spring chicken anymore, but I'm not that old and really don't have any significant medical comorbidities, and it still nearly killed me.

Verghese: Talking about airplanes, Bob, as you guys know, I've been on a book tour for about 10 weeks now. I started out masking the whole time on planes and being very careful about social distancing. Gradually, it just wore me down. I was the only one doing it, and I stopped. I completely forgot about it. I think I've been incredibly lucky to still be a NOVID.

Topol: I want to get back to your important point, which I agree with about the progress we made on reducing fatality from COVID. There's no question, even if you look at excess mortality. The issue, of course, for us NOVIDs, as you were very recently, is that we're highly susceptible no matter how many shots and boosters we get because this virus is insatiable for finding new hosts, and particularly those who've never been exposed to the full virus.

So the chance of staying NOVID over more time is going to be lower. And just a footnote: I wonder how many people have a problem like you had but never even got tested because testing now has gone down to really low levels.

Wachter: I was on clinical service, so I was particularly attuned to testing because, obviously, I didn't want to bring the virus into the hospital. On the morning when I woke up and had felt really crummy for at least 12-24 hours, my first rapid test was negative. There was no question I was viremic, however. So the test isn't perfect. Six hours later in the ER I was floridly positive, so it was right at the moment it was turning. In the same way people stop wearing masks and throw caution to the wind, once they've run out of their home tests, are they going to go to Walgreens and spend $30 to buy some more? I'm guessing they're not.

Increasingly, this is just going to be folded into the set of respiratory viruses that people are going to be exposed to. It'll spread as it spreads. The job of staying a NOVID forever is going to become tougher and tougher to pull off unless you're being super-careful probably forever, which is a hard thing to do.

Topol: On this Medicine and the Machine podcast, we have been trying to get into what leaders in medicine and in computer science think about AI. With the large language models — ChatGPT, GPT-4, Bard, and others — there is a big question about whether this is just a very smart or stochastic parrot, or whether it's a higher level of intelligence than we've ever seen. What do you think about this?

Wachter: I struggle with it, and you probably need to bring on some philosophers to answer that question. If intelligence is the ability to learn from experience and to problem-solve complex problems, I don't have much doubt in my mind that these machines now can do that and in many ways can do it equal to or better than humans.

We're only looking at a single point in time. I think about GPT-4, which I use all the time now, and compare it to the state of AI that was available to folks like me a year or two ago, and there's no good reason to believe that 2 years from now, it won't be that much better.

It's hard to come up with a version of the Turing test that's not super-biased in favor of humans. That doesn't say that it's meeting all of the criteria that we think of as innate intelligence. As you say, it's not actually intelligent, it's not actually a person, it does not have real empathy. But it can express it in all the appropriate circumstances and in the right way. It can solve really complex problems that are far more multidimensional and nuanced than the ones we've ever had. It can have conversations at least as articulately as I can. So I think at some point we have to say, if that's not intelligence, what does that word mean? And recognize that we're all kind of rooting for the humans because that's our team. But that can give us a bias that leads us to wrong answers.

Verghese: Talking about bias, it strikes me that we've been sitting on all this data with patients — going back many years for some patients — and yet even though it's digital, I don't think we've ever systematically tried to digest that. Now we're at this point where AI can do that for all patients. It would seem to me necessary that we cross the threshold and quickly find out what we can learn. How do you see that?

Wachter: I completely agree. Even if you're just asking how to support the physicians and other clinicians trying to do their work, the work has become impossible because of the amount of data that we're collecting and are responsible for looking at and analyzing. I've got 1000 doctors working in my department, and a lot of them are pretty grumpy. Part of the grumpiness is the amount of sifting they have to do through records, the amount of time they have to spend doing documentation, and the amount of time they have to spend answering electronic inbox messages from patients — messages that could, at least theoretically, be answered or triaged by an intelligent automated system.

We have an obligation to figure out how to use these technologies in ways that help us do what we're here to do. And we're here to make care better and safer and more equitable and more accessible and less expensive. There's not that much doubt in my mind that the technologies will help us do that.

If you look at the last 10 years of digital transformation in healthcare, you would say that the dominant theme is unanticipated consequences. It's harder than it looks. Even when the technologies look like they're going to just be massive problem-solvers, the interface between the technologies and the systems and the politics and the money and the sociology and the culture is really tricky.

I don't see any reason why this won't be even trickier because in some ways, it brings up more ethical questions than the electronic health record did. But I think we have an obligation to get it right.

At UCSF, we have 20 or 30 people who work for our health system whose job it is to take our data, analyze it using the best tools they can find to deliver clinical and business intelligence to people like me trying to take care of patients. My older son works for an organization and he analyzes their data. They also have 20 or 30 people who do that for a living. His organization is called the Atlanta Braves. And he has 20 or 30 people in his baseball analytics department, but the Atlanta Braves have a total of 400 employees. We have 20 or 30 people at UCSF, and we have 30,000 employees.

So the resources that we have put into using these technologies to get better at what we do are inadequate. Now, that will become easier as the tools get better, but we're still going to have to understand that it's going to take an institutional investment to figure out how to leverage these tools to actually improve care.

Topol: There are many different proposed applications of large language models in healthcare. The first one that you've touched on is diminishing the documentation burden, the administrative aspects of being a physician. Is UCSF doing any pilots with the companies that are launching ambient voice conversion of clinic visits into notes and all the other things that come from notes? That one seems to be potentially a near-term, at least compared to some of the other areas that you aptly point out are challenging. What do you think about that prospect?

Wachter: I have a little pride of ownership when it comes to scribes because the first article that brought scribes to national attention was written by my wife, who writes for The New York Times. It happened because I came home and I said, "Honey, you know how every other organization, every other industry, computerizes and immediately starts laying people off? Only in healthcare could we figure out a way of computerizing, and now we have to hire a person to come into the exam room to feed the computer." And she said, "That's a pretty good story." What I've said to people from the beginning is that the need for scribes is tremendous because you do not want your most expensive employee acting as a data-entry clerk.

But I wouldn't invest in a scribe company, because that has always seemed to me to be a tractable problem: if a human scribe — a pre-med student at $30 an hour — can do the job, then at some point, once you've collected enough data, it should be something you can automate.

I've looked at a bunch of the programs — Nuance and Augmedix and others. They clearly have mastered the basic visit. When I go to see my neurosurgeon in a week for a stereotypical visit for a neck fracture, that could be done easily. It's a little tougher for an internal medicine visit where there are eight different problems that are kind of woven together. But it does feel like if we haven't mastered it this year, it'll be in the next year or two.

That will become a normal thing. You go in to see your doctor and have a conversation. There's some little Alexa-like device sitting on the table, and the note just appears. It will be a great time-saver and will just be the start of the automation of the process, because it really isn't very much of a leap from it creating your note to then having it suggest differential diagnoses and maybe even suggest the right tests and all that sort of stuff.

I've been struck in clinical digital transformation by how extraordinarily little clinical decision support we get from our digital tools. It's partly what bothers doctors so much. We spend so much time feeding the computer and get so little useful intelligence out of it. But I think the digital scribes are sort of the lowest-hanging fruit of what will ultimately be a huge amount of decision support for clinicians, patients, and families.

Verghese: What's going to happen to medical education? We're already seeing sort of an inversion of how we teach students from the way you and I learned, where the student would go to the ER, interview the patient, gather the data, and do the physical. Now, they arrive in the ER and everything's already been done. So, what exactly is going to be the role of the medical student with a new patient? I'm trying to do a mind exercise on what this will look like with the third-year medical student, and with the residents, when so much has already been done for them. The diagnosis is done more accurately by AI; the chart is populated. So, what do they do?

Wachter: I'd love to hear your thoughts on this because when I think about this issue, you're the person I would turn to. Your article in The New England Journal about how the earlier stages of digitization were changing the dynamics of medical education was just masterful. I'll never forget the line where you wrote that the patient almost doesn't matter anymore. Patients are there to keep their medical records alive in the electronic health record. It was just brilliant.

I worry a lot about it. And I think you know that some of the studies on empathy are a little bit concerning because we all said, "Well, okay, the computer can make a diagnosis and the computer can data-gather, but you're going to have to have a human to do the empathy part." And the answer may be that the computer can do some of that too.

I am a little skeptical of the usual party-line answer, which is that computers and people are better together than either one alone. If you were the computer industry, that is what you would say to not piss off the doctors. But at the end of the day, you would love to replace the physicians. I think that's going to take a while.

At least for the foreseeable future, clinicians will need to be well trained using these electronic tools. They need to be able to vet the answers, which most of the time will be correct but some of the time will be nonsense. It is one of my biggest worries about computerization and training. When you look at other industries that are far ahead of us when it comes to digital transformation, you see this inevitable problem of digital complacency.

You see this problem where people begin turning their brains off because they become quite dependent on the computer doing their work for them. That's not an irrational response. Humans like to preserve their neurons. And if the computer's doing something really well, why should you bother? And then the problem is that when the computer doesn't do its thing correctly, will the humans be able to? Will they have the skills anymore? Are they awake to grab the metaphorical wheel?

There have been some high-profile accidents in aviation where the root cause was a computer malfunction and the pilots no longer knew how to fly the plane when the computer wasn't helping them. And medicine's a lot more complicated than aviation. So we will have a very funky decade or so ahead of us where our trainees will get more and more dependent on the computer, and they will get a little sloppier and a little lazier in terms of their rigor in data collection and even in thinking things through, and the computer will be wrong a fair amount of time. That's challenge number one for medical educators like us.

I was just asked to chair the Macy Foundation on how medical education is going to focus on this. And we're going to be tackling this over the next year: AI meets medical education. I think it's going to be fascinating. I'd love to hear your thoughts on where you think this is going.

Verghese: I'm not sure, but I continue to believe that we'll put a lot of premium on the phenotypic findings of the body. Until some AI robot can come and do the exam for us, my sense is that that's going to be pretty important because if we rely entirely on what was said by somebody and what the labs show, and nobody has picked up the café au lait spot, the shingles rash, the xanthelasma, then we're going to be missing something.

So that's the piece that we can do, and we can train students to do better. But it won't happen unless we have high-stakes testing because students study to the test. If there is no testing at the bedside, then they're not going to bother with it.

Wachter: A point you've always made is that the physical exam really is doing two things: It's not only an evidence-gathering activity; it's a bonding activity. And there's something quasi-religious about the laying on of hands. If the computer can look in the back of your eyeball and detect whether you have hypertension or you're at risk for 73 different diseases, how much laying on of hands will there be? All of those skills are eroding very quickly even now, and that's when the digital tools are really pretty unimpressive. As they get more impressive, it'll be very hard to hold on to them.

Topol: That goes back to my aspiration that the gift of time could ultimately be restored, because part of the problem of not doing an adequate — or any — physical exam is WNL: "we never looked" instead of "within normal limits." It's partly because of the squeeze of time. With the big business of medicine, clinicians are spending less time with patients than ever.

But that gets me to the patient side of this because physicians tend to think more about our own community rather than the patients. And already I'm seeing patients in clinic who are using GPT-4. They're not using Google search anymore. They're getting a lot of different information from what they would have gotten from a routine Google search. There are no FDA rules about patients doing a Google search. I wonder what your thoughts are about the fact that patients are having access now to a synthesis of data that's surprisingly so up-to-date, particularly with GPT-4 that wasn't frozen back in the middle of 2021. I don't know what to call it; it's not leveling the playing field of knowledge, but it's doing something we've not seen before. What are your thoughts about it?

Wachter: It's net pretty exciting. I have always believed that democratizing care is a good thing. And you, Eric, have been a real leader in making that point and making the point that technology can facilitate that. Where democratizing care gets scary is when you've democratized it but people don't have the tools to get accurate information.

They are trying to be their own doctors, but they're dealing with BS and misinformation and all that. So, if we could guarantee that the output they get from GPT-4 is going to be accurate and a reasonable synthesis of the literature, and make clear what you can handle yourself and what you actually need to see a credentialed professional for, I think that makes the world a better place. It improves access to care. It makes patients more informed. It is interesting; until you said that, I hadn't really thought about the Google search, because it just gives you a bunch of links. It doesn't really synthesize for you. You may hit five different links, and now you're clicking on each of them. This one's way too scientific for you as a patient, and that one is written in plain language, but it's not up-to-date.

There's something interesting about flipping that switch to something that's basically a synthetic CliffsNotes of the state of the literature. GPT-4 does it magically well, by and large. And the early hit on hallucinations was real and still is real, but it's less of a problem in the newer versions than the old ones.

But the fact that it's synthesizing the literature is an act that we have to approach with a little bit of skepticism. I have a C3 fracture; when should I take my neck brace off? I know when I want to take it off, which is like, now, because it's horrible. I've spoken to several different neurosurgeons and they all have different opinions. If I put it into GPT-4, which I did, it has an opinion about when I should take it off, which is a synthesis of a whole bunch of different things. But it turns out, there's no right answer. The literature does not exactly lead you to the right answer. It gives the illusion that there is one answer, but the same thing is true when you go see a doctor. The doctor is going to tell you what he or she thinks about a given situation.

It's an interesting flip in conversational tone and it's not sending you to the original links, which is both potentially useful but also potentially confusing for patients. It's giving you an answer in plain English about your situation. I think it's really exciting. I think it's a net good, but like all things that we democratize, it leads to unanticipated consequences that we're going to have to deal with.

And there are going to be more and more businesses that pop up whose basic rationale is that the healthcare system is too expensive, and you can manage yourself or we can help you manage yourself without all those expensive RNs and MDs around by using some combination of AI and GPT and some other hocus-pocus. And there'll be some people who get in trouble with that as well.

Verghese: What about regulation? I'd like to ask both of you. How do you see regulatory bodies policing the access to data? Should the data be regulated in terms of quality? Should they be paying for the data that they access that belongs to a writer, for example? Do I really want ChatGPT to go through all my novels and synthesize something from there without paying me a fee?

Wachter: The number of really thorny problems that this is going to present at every level of our society — the more you think about it, the more your head hurts. How is this going to work, how is that going to work? The issue for authors or other creatives right now is that these models can go in and sift through your five or six books and somebody could say, "Write me another book about a new topic in the style of Abraham Verghese," and it would probably do a half-decent job and have a royalty fee a little bit lower than yours. It's going to be very tricky.

In the healthcare world, the correct answer is, I don't have any idea. The idea that all the algorithms are going to be somehow vetted and regulated when, in a big healthcare organization like ours, there are going to be literally hundreds of algorithms every day, helping to manage everything from assessing fetal heart rate monitors in labor and delivery to predicting bad outcomes for patients in the OR...

And those algorithms are going to be changing all the time based on the data that's being fed into them in real time. How do you create a regulatory framework to protect against bad outcomes and bad data? I don't have any idea. I can't imagine that it's at the level of each individual algorithm the way we think about the approval of a drug by the FDA. It has to be more about the process and the way the system that's producing the algorithms has the "Good Housekeeping" seal of appropriate expertise and appropriate guardrails against biases and appropriate updating, and all that kind of stuff. But all these problems are absolutely daunting.

It can't be that the right solution is to create such a straitjacket of regulatory control that any of our three organizations can't take advantage of the data in our electronic health records and the medical literature and try to come up with algorithms that allow us to provide better, safer, and cheaper care. That can't be the answer. The answer also can't be completely unfettered Wild West. Everything in between is fair game at this point. But you need smarter people than me to try to figure this one out.

Topol: It's interesting, Abraham, that you brought that up, because we now have over 500 AI medical algorithms that are cleared or approved by the FDA, and almost none are in daily practice. So the regulatory part wasn't so much the determinant; rather, implementation has lagged because there are no publications for most of these. There is no data transparency. So the willingness of health systems to fork out funds to buy these tools is limited, no less the medical community's willingness to buy into using them.

Bertalan Meskó and I just published a paper on large language models and regulatory issues in Nature Digital Medicine. As Bob aptly points out, there's a fine line. You have to catch the right balance between stifling the progress and not letting this go into jungloid activity. But perhaps beyond regulatory, it's really the lack of transparency that's a serious issue, because these companies that are creating these algorithms are rarely publishing their data. And their algorithms are typically proprietary. They don't even want to let anyone know the secret sauce. The digital transformation has lots of bumps in the road, and this is just another one of them.

There's a future beyond where we are today — not just patient questions at the front door, or fulfilling documentation requirements, but the more far-reaching aspects of hospital at home. That is, we have multimodal data on any given person. Why have them in a hospital? That's your frontier, the hospitalist. I'm not talking about ICUs, but the regular hospital rooms. Could you see randomized trials conducted with the appropriate sensors and all the other data fields, whereby patients would be safer and less expensively managed and treated in their home bedroom rather than in a regular hospital room?

Wachter: Easily. I don't think you need more randomized trials. For hospital-at-home, I have always contended that what you need is this: If you walk into the ER at any of our three hospitals today, and there's a patient sitting there who, let's say, looks like me last week, and you feel like the patient could be cared for just as easily at home vs 10 floors up, and to get that patient home you have to make one phone call, admit to hospital-at-home, the same phone call that you make to say "Admit to the floor," I think then it would take off.

But today, you can't make that one phone call. It's going to be the nursing, the physical therapist, the at-home IV, the oxygen, the logistics. How are we going to get an x-ray? How are we going to get a blood draw? And the only way that is going to happen is if companies emerge to basically do the supply chain to support hospital-at-home. But no company is going to emerge as long as there's no business there.

You have this circular problem until the regulatory environment becomes more favorable, the payment environment becomes more favorable, and also enough hospitals close so that the remaining hospitals are running 100% full and really have an incentive not to admit the patient. So the economics and the regulatory environment have to be such that venture capital will invest and these companies emerge.

It's sort of happening now. I have no doubt that with more advanced sensors and AI, 10%, 20%, or even 30% of hospital patients could be cared for safely at home. The literature is very supportive of that. But it's a fragile ecosystem. There were some regulatory concessions toward it during COVID, and Medicare has just said, all right, we're not going to re-erect the barriers, but we'll only give it two more years and then take another look — so it's a little too uncertain for the entire ecosystem to come around and develop the infrastructure that I think is necessary. Once that's developed, I have no doubt that it will be better. We will need fewer hospitals, and the hospitals we have will be mostly ICUs for really sick people. Most people will prefer to stay at home if you can figure out how to do it safely and cleanly. And "cleanly," really, means that you can make one phone call and all of the logistics are taken care of. And that's nontrivial.

Topol: Just to wrap things up, it's pretty clear that the recent head trauma and COVID you sustained have not interfered with your brilliance and wisdom. You've passed the cognitive test — a super Turing test that you have transcended. We really enjoyed this discussion with you, Bob. We touched on so many things, including two of our favorite topics: COVID, which is going to be with us for many years ahead, and the AI future, which has its challenges. But hopefully the net effect will be one that improves medicine and improves outcomes for patients. Thanks so much for being with us.

This podcast is intended for US healthcare professionals only.
