Most Americans Think They Drive Better Than Average: Why They Are Right and What This Means for Medicine

Andrew J. Vickers, PhD

May 27, 2009

The idea that most of us could be at below-average risk sounds like a joke from Lake Wobegon. As it happens, though, it is true: If we know any factor associated with risk, then -- almost inevitably -- most people will be at below-average risk. In the case of driving, each year in the United States about 20,000 drivers are killed in traffic crashes. Because there are about 200 million US drivers, this gives each driver a yearly risk of about 0.01% of being killed while driving. If my neighbor were to ask me her risk, is that the number I should give? Consider that about one third of traffic fatalities are alcohol-related, but that only about 1 in 12 Americans drives drunk in a given year. Do some simple math, and you get a yearly risk of about 0.04% for those who drive drunk and about 0.007% for the rest of us. Because the great majority of drivers fall in the second group, whose risk is below the 0.01% average, we can conclude that most Americans do indeed drive better than average.
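For readers who want to check the arithmetic, here is a minimal sketch in Python using the round numbers cited above; all inputs are the article's approximations, not precise statistics.

```python
# The article's round numbers, not precise statistics.
deaths_per_year = 20_000       # US drivers killed in traffic crashes yearly
drivers = 200_000_000          # approximate number of US drivers
frac_deaths_alcohol = 1 / 3    # share of fatalities that are alcohol-related
frac_drivers_drunk = 1 / 12    # share of drivers who drive drunk in a year

average_risk = deaths_per_year / drivers
drunk_risk = (deaths_per_year * frac_deaths_alcohol) / (drivers * frac_drivers_drunk)
sober_risk = (deaths_per_year * (1 - frac_deaths_alcohol)) / (drivers * (1 - frac_drivers_drunk))

print(f"average: {average_risk:.3%}")  # 0.010%
print(f"drunk:   {drunk_risk:.3%}")    # 0.040%
print(f"sober:   {sober_risk:.4%}")    # 0.0073%
# 11 of every 12 drivers are in the sober group, so most drivers' risk
# (about 0.007%) sits below the overall average (0.01%).
```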

The implications for medicine should be obvious: Risk and risk reduction are ubiquitous in medical decisions about treatment, screening, and prevention. As a simple example from my own field, we might tell a man with prostate cancer that he has a 20% probability of recurrence after surgery. Proponents of "individualized medicine" claim that by using information about the individual patient, such as the stage and grade of the tumor, or perhaps even a genomic analysis, we can give a more "accurate" estimate of individual risk. However, the driving example lets us go further and say that individualized risk prediction will most commonly lower risk estimates. In the case of prostate cancer, having a very high-grade tumor (about 10% of patients) or one that has spread outside the prostate (about 30% of patients) dramatically increases the risk for recurrence, to about 65% and 40%, respectively. If I build a statistical model using stage and grade to predict recurrence, and use this model to give a cohort of patients individualized risk predictions, it turns out that about 70% of patients have individualized risks below the 20% average. Just as in the driving example, most are at below-average risk.
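To see the mechanics, here is a minimal sketch in Python. The three mutually exclusive risk groups are my simplification of the figures above, not the author's actual model, and the 2.5% low-risk value is back-calculated so that the cohort average equals the stated 20%.

```python
# A simplified three-group version of the article's figures.
# The 2.5% low-risk value is derived so the overall mean is 20%.
groups = [
    (0.10, 0.65),   # very high-grade tumor: 10% of patients, 65% risk
    (0.30, 0.40),   # spread outside the prostate: 30% of patients, 40% risk
    (0.60, 0.025),  # everyone else: 60% of patients, 2.5% risk (derived)
]

mean_risk = sum(share * risk for share, risk in groups)
below_avg = sum(share for share, risk in groups if risk < mean_risk)

print(f"average risk:  {mean_risk:.1%}")  # 20.0%
print(f"below average: {below_avg:.0%}")  # 60%
# Even this coarse grouping puts a clear majority below the average;
# a continuous stage-and-grade model spreads predictions further and
# puts the figure nearer the 70% reported in the article.
```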

Some of the best examples of this effect come from cardiovascular medicine. A man who recently had a heart attack is at increased risk for death from a subsequent heart attack; we might therefore advise percutaneous coronary intervention (PCI) in addition to thrombolytic therapy as a risk-reduction strategy. We would typically use the results of a randomized trial to estimate risk with and without PCI, such as citing overall mortality rates of 4% vs 6% to suggest that PCI reduces deaths by 2% in absolute terms. Our driving and prostate cancer examples suggest that most patients will be at lower than average risk, and will therefore derive less than the average 2% benefit from adding PCI to thrombolytics. Indeed, this has been shown empirically by David Kent and colleagues at Tufts Medical Center, Boston, Massachusetts: Although some high-risk patients do derive substantial benefit from PCI, most patients gain only trivial reductions in the risk for mortality.[1]
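The same mechanics can be sketched in code. Assuming, purely for illustration, a constant relative risk reduction of one third (the reduction implied by 6% vs 4% mortality) and an invented right-skewed distribution of baseline risks -- not the data from the Kent study -- most patients end up with less than the average absolute benefit:

```python
import numpy as np

rng = np.random.default_rng(0)
rrr = 1 / 3  # relative risk reduction implied by 6% vs 4% mortality

# Invented right-skewed baseline risks with mean ~6%: a few very
# high-risk patients, many low-risk ones.
baseline = rng.gamma(shape=1.0, scale=0.06, size=100_000).clip(max=1.0)

absolute_benefit = baseline * rrr  # each patient's absolute risk reduction
avg_benefit = absolute_benefit.mean()

print(f"average absolute benefit: {avg_benefit:.1%}")  # ~2%
print(f"patients below the average benefit: "
      f"{(absolute_benefit < avg_benefit).mean():.0%}")  # ~63%
# When baseline risk is right-skewed and the relative effect is constant,
# the absolute benefit is right-skewed too, so most patients gain less
# than the trial's average 2%.
```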

In brief, using averages in medicine leads to systematic overestimation of risk, because most patients are at below-average risk. Overestimation of risk leads in turn to overscreening, overdiagnosis, and overtreatment. This simple insight should motivate greater research on, and clinical use of, statistical prediction models for risk stratification.
