How Much Time Do I Have Left? Go With Gestalt or Data?

F. Perry Wilson, MD, MSCE


June 01, 2022

This transcript has been edited for clarity.

Welcome to Impact Factor, your weekly dose of commentary on a new medical study. I'm Dr F. Perry Wilson of the Yale School of Medicine.

This is Cassandra.

You remember Cassandra. Daughter of King Priam of Troy, she had the gift of prophecy — everything she foresaw would come true — but was cursed in that no one would believe her. She warned her brother, Paris, that the abduction of Helen would bring about the destruction of Troy. But did he listen? Well, the rest is history.

We live in an age of digital Cassandras. With advancements in artificial intelligence and machine learning, our ability to predict outcomes in our patients is getting scarily good. Will we listen to those predictions and change our ways? We may want to, because according to a new study in JAMA Network Open, the machines are now better at this than we are.

Metastatic cancer is a devastating diagnosis that comes with a lot of difficult decisions. Should we continue to the next line of chemotherapy? Go for a clinical trial? Aim for palliation? Informing all those decisions is a simple prediction based on a question that every doctor has heard before: "How much time do I have left?"

This is not a conversation that any of us like to have with patients, but it is an important one. It is a conversation where honesty and accuracy are key. But by and large, doctors are overly optimistic when they try to predict the life expectancy of patients with advanced cancer and other serious diseases. I don't think this is due to sugarcoating, really — more like wishful thinking. But the fact is that when we are discussing these issues with patients, we may not be accurate.

In the face of that inaccuracy, many of us default to agnosticism — the old "I don't have a crystal ball" trope. My impression is that patients don't like this very much.

But what if you could have a crystal ball? Or perhaps a silicon one. This is exactly what this paper promises.

In brief, researchers led by Dr Finly Zachariah at City of Hope National Medical Center used data from nearly 30,000 patients with metastatic cancer to build a machine-learning model that took in all sorts of clinical variables, from lab values to ICD-9 codes, to predict 3-month mortality.


They then built this model into the electronic health record, allowing it to make predictions in real time, at the same moment that clinical oncologists answered a simple question: "Would you be surprised if this patient were to die within the next 3 months?"

This question creates a simple binary analysis: "Yes, I would be surprised" or "No, I wouldn't." The oncologists said "not surprised" about 13% of the time. As you can see, the patients in that not-surprised group have substantially higher mortality than those in the surprised group.


But note that by 3 months, more than 10% of people in the surprised group had died. Because the surprised group is way bigger than the not-surprised group, it turns out that 70% of all the deaths that occur in this patient population in the first 3 months are surprises.
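The arithmetic behind that counterintuitive result is worth making explicit. Here is a minimal sketch with illustrative numbers (a hypothetical cohort of 10,000 and assumed group-level mortality rates chosen to roughly match the percentages in the article; the study's exact figures differ):

```python
# Illustrative arithmetic: a small "not surprised" group with high mortality
# can still account for a minority of all deaths when the "surprised" group
# is much larger. All numbers below are assumptions for illustration.

cohort = 10_000
not_surprised = int(cohort * 0.13)   # ~13% of patients flagged by oncologists
surprised = cohort - not_surprised   # everyone else

# Assumed 3-month mortality rates for each group (illustrative only)
deaths_not_surprised = int(not_surprised * 0.30)  # high-risk group
deaths_surprised = int(surprised * 0.105)         # >10% still die: "surprises"

total_deaths = deaths_not_surprised + deaths_surprised
share_surprises = deaths_surprised / total_deaths
print(f"{share_surprises:.0%} of all deaths occur in the 'surprised' group")
```

Under these assumed rates, roughly 70% of deaths land in the surprised group, even though that group's per-patient mortality is far lower — sheer group size dominates.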

Put bluntly, an oncologist saying that they would not be surprised if a patient were to die within 3 months is a poor prognostic sign. But their saying they would be surprised is not terribly reassuring.

Oncologists are optimistic. They correctly identified only about 30% of the patients who died within that 3-month period.

But what about the cold, unfeeling machine? Well, one wrinkle is that the machine doesn't give a binary prediction. Surprised or not, it gives a number. Think of it like a percent chance of death — from zero to 100.

To level the playing field, the researchers picked a prediction cut-off that, like the oncologists, identified only 30% of the people who died within the 3-month period. But what's interesting is that at that cut-off, the people flagged as high risk were much more likely to die. In other words, there are far fewer false positives.


Putting it together, if an oncologist says they would not be surprised if a patient were to die within 3 months, it's a poor prognostic sign. If the computer says so, it is an exceptionally poor prognostic sign.

The computer prediction was artificially set so that it would miss 70% of these deaths, just like the oncologists, but that constraint isn't necessary. Depending on where you set the cut-off, you could capture more of these deaths at the cost of more false positives, or fewer of these deaths with fewer false positives — a relationship captured in this precision-recall curve.
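The trade-off behind that curve can be sketched in a few lines. This toy example (synthetic risk scores and outcomes, not the study's data) shows how moving the cut-off trades recall — the share of deaths captured — against precision, the share of flagged patients who actually die:

```python
# Toy sketch of the precision-recall trade-off behind a risk cut-off.
# Synthetic data: model risk scores (0-1), outcome 1 = died within 3 months.
scores   = [0.95, 0.90, 0.80, 0.70, 0.60, 0.50, 0.40, 0.30, 0.20, 0.10]
outcomes = [1,    1,    1,    0,    1,    0,    1,    0,    0,    0]

def precision_recall(threshold):
    """Flag everyone at or above the threshold; compare flags to outcomes."""
    flagged = [o for s, o in zip(scores, outcomes) if s >= threshold]
    tp = sum(flagged)                    # flagged patients who actually died
    deaths = sum(outcomes)               # all deaths in the cohort
    precision = tp / len(flagged) if flagged else 1.0
    recall = tp / deaths
    return precision, recall

# Strict cut-off: few patients flagged, nearly all correct
p_strict, r_strict = precision_recall(0.85)    # precision 1.0, recall 0.4
# Lenient cut-off: all deaths captured, but with false alarms
p_lenient, r_lenient = precision_recall(0.35)  # precision ~0.71, recall 1.0
```

Sweeping the threshold across all scores and plotting precision against recall at each point traces exactly the kind of precision-recall curve the study reports.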


And of course, with the computer model, you don't have to set a cut-off point at all. You could, in theory, tell a patient that they have a 65% chance of dying within 3 months — although I think that the stark granularity of a prediction like that might be particularly distressing.

In any case, the authors have shown fairly convincingly that the model outperforms physician gestalt; at the pace artificial intelligence is improving, the models will only get better. So now what? What do we do with Cassandra's prediction?

Multiple studies have shown that early referral to hospice actually extends the life of patients with metastatic cancer. Should models like this be used to target hospice services? Or perhaps to flag patients who might do better with more therapy? Will physicians allow these models to override their own judgment? These questions represent the next frontier of artificial intelligence in medicine. The predictions, at this point, are good. It's time to figure out whether we'll listen to them.

F. Perry Wilson, MD, MSCE, is an associate professor of medicine and director of Yale's Clinical and Translational Research Accelerator. His science communication work can be found in the Huffington Post, on NPR, and here on Medscape. He tweets @fperrywilson and hosts a repository of his communication work at


