COMMENTARY

Huge Databases and Meta-Analyses: Can We Trust Them to Treat?

Henry R. Black, MD; Andrew J. Vickers, DPhil

June 10, 2011

Henry R. Black, MD: Hi. I'm Dr. Henry Black, Clinical Professor of Internal Medicine at the New York University School of Medicine and immediate past President of the American Society of Hypertension. I'm here with my friend and colleague, Dr. Andrew Vickers from Sloan-Kettering, talking about biostatistics in the 21st century.

I want to ask you about large databases. They can't replace clinical trials, for the very reasons we think clinical trials are better: we can control bias, and we know something about the individuals. Yet large databases with millions of individuals are grinding out more and more information. Do you think they have a place in clinical decision-making?

Andrew J. Vickers, DPhil: Absolutely. One of the best examples is the data we are seeing on volume and outcomes. For example, if you have cancer and you go to a physician who treats a lot of cancer patients, you are far more likely to be cured and far less likely to have side effects than if you went to a physician who treats cancer alongside a number of other diseases and has a low cancer volume. This is exactly the sort of data you can get out of these large administrative databases, and it is just not something you can get from clinical trials. It seems unlikely we would ever study that question in a randomized trial.

Dr. Black: One of the things that I talked about here a year or two ago was the big DA [Disease Analyzer] database. The question was, "Do angiotensin receptor blockers (ARBs) cause angioedema the way ACE [angiotensin-converting enzyme] inhibitors do?" The investigators apologized: they couldn't answer that question because only 99,000 people had received ARBs, whereas a million and a half people had taken ACE inhibitors. Even 99,000 patients weren't enough to give an answer. Still, it's probably not the same as doing an appropriately sized trial in which we give our cohort of volunteers one drug or the other and come to a conclusion.

Dr. Vickers: Right. With randomization, you never have to worry about selection bias; that's the purpose of randomization. With these administrative databases, you always worry about selection. The volume-outcome studies were criticized on exactly those grounds for years: people said, "The high-volume physicians select the better patients, and that's why they have the better results." Had these been randomized trials, we would never have had that worry. But that criticism turned out to be wrong; in the volume-outcome work this year, selection was shown not to be very important.

Dr. Black: I was thinking it would be the other way around, that the high-volume surgeons would get the worst cases.

Dr. Vickers: Exactly, and it was not really seen as a big problem either way. But now look at something like surgery or radiotherapy, and ask: is that a good treatment for early-stage prostate cancer? Maybe we can just download the Medicare data. We know all the men who get prostate cancer. We know what physicians are billing for, so we know what treatments the men received. We can see whether they died of prostate cancer or not. How about using that instead of doing a clinical trial? We keep trying to do the clinical trials in prostate cancer, and we can't do them. But here we are much more worried about bias, because clearly men are self-selecting different treatments, and urologists are selecting who gets surgery and who gets other management.

Dr. Black: I was part of SELECT [Selenium and Vitamin E Cancer Prevention Trial], which tested selenium and vitamin E against prostate cancer in a 2 × 2 factorial design.

Dr. Vickers: A randomized trial.

Dr. Black: A randomized trial, very large, showing no benefit from either treatment, alone or in combination, compared with placebo. Is that more compelling to you?

Dr. Vickers: When you have a randomized trial, you are always going to believe those results more than the results of an observational study. But with these databases, you can do studies that you otherwise couldn't, either because they would be unethical or because they would be too expensive. One thing we might see is the detection of rare adverse events. I have recently seen some proposals for using administrative databases to look for rare vaccine side effects. We are inoculating millions of people, looking for events of very, very low prevalence. You are just not going to be able to do that in a randomized trial.
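
A quick calculation shows the scale Dr. Vickers is describing. The Python sketch below uses a purely hypothetical event rate of 1 in 100,000 (not a figure from this discussion) to estimate how likely a study of a given size is to observe even a single such event.

```python
# A minimal sketch of why rare adverse events need database scale:
# the chance of seeing at least one event of hypothetical prevalence
# 1 in 100,000 in studies of different sizes.

def prob_at_least_one_event(n, rate):
    """Probability of observing >= 1 event among n people, assuming
    independent events at the given per-person rate."""
    return 1 - (1 - rate) ** n

rate = 1e-5  # hypothetical: 1 adverse event per 100,000 people

for n in (10_000, 100_000, 5_000_000):
    print(f"n = {n:>9,}: P(at least 1 event) = "
          f"{prob_at_least_one_event(n, rate):.3f}")

# A 10,000-person trial usually sees zero events (the chance of
# seeing any is about 0.10), while a 5-million-person database
# almost surely captures dozens.
```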

Dr. Black: This brings us to another thing that is used to address the same question. If I can't mount a trial big enough, what about meta-analyses? Can you tell me your criteria for a good meta-analysis and a not-so-good meta-analysis?

Dr. Vickers: I wish it were that simple. If I asked you for criteria for a good doctor or a bad doctor, you would put a bunch of things on the list. The main thing to understand about a meta-analysis is that you are taking the published literature and combining and reviewing it in some systematic way. Just like any other study, it should follow a protocol. If you're doing a trial, you say, "Here are the patients we are going to include. Here are the data we are going to collect from these patients. Here are the analyses that we're going to do." It's exactly the same in a meta-analysis. You say, "Here are the papers we're going to include, and here are our criteria for including them. Here are the data we're going to extract from these papers. And here are the analyses that we are going to do."

Dr. Black: So they start with 6000 papers and end up talking about 15?

Dr. Vickers: Right. That's very common. I have been involved in meta-analyses, and that's what you do, because you would rather read through 6000 papers and find that almost all of them are inappropriate than miss a relevant one.

Dr. Black: The concern that we're not publishing negative data is a little bit less than it used to be because you have to register your trial when you start. If it isn't registered, you can't get it published. So I think we've dealt with that. That was a concern.

Dr. Vickers: The issue of publication is still a worry. What most good meta-analyses will do is a special test that allows you to look for publication bias. For example, if a trial is big, it's going to be published whatever the results are. If it's small, chances are that if there is an amazing positive result, it's going to get published. If it's a negative result, the study authors will say, "Oh, it's kind of small. We're not going to publish."

Dr. Black: But if the study is small, then it's probably not going to have much impact.

Dr. Vickers: Yes. So I'm not going to publish it. What you do as a statistician is say, "Is there an important difference between the results of the large trials and the small trials?" If you see that difference, it suggests that there are some small trials that are not being published, and that will be evidence of publication bias.
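
One widely used version of that comparison is an Egger-style regression test for funnel plot asymmetry. The Python sketch below illustrates the idea on invented numbers: two large trials with modest effects and four small trials with much larger effects, the pattern publication bias tends to produce.

```python
# A minimal sketch of an Egger-style test for publication bias.
# All numbers are invented: log odds ratios and standard errors
# from six hypothetical trials, the two largest listed first.
import numpy as np
import statsmodels.api as sm

effects = np.array([-0.10, -0.15, -0.55, -0.60, -0.70, -0.65])
ses     = np.array([ 0.08,  0.10,  0.30,  0.35,  0.40,  0.38])

# Regress the standardized effect (effect / SE) on precision (1 / SE).
# Without small-study effects the intercept should be near zero;
# a large intercept suggests publication bias.
standardized = effects / ses
precision = 1.0 / ses
fit = sm.OLS(standardized, sm.add_constant(precision)).fit()

print(f"Egger intercept = {fit.params[0]:.2f}, p = {fit.pvalues[0]:.3f}")
```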

Dr. Black: We have also seen the opposite approach. For example, a large prospective meta-analysis was done by the hypertension trialists group, in which the trials had to be entered before the answers were known (which is the way to do it).

Dr. Vickers: Absolutely. Data sharing is the only way to go. What you are essentially describing (there are a lot of these groups, and I have to give a shout-out to Oxford, which is where I did my doctorate) is individual patient data meta-analysis, which they helped start. One approach to meta-analysis (the most common approach) is to get the papers and say, "Well, let's see what Blogs said. Blogs said the ratio was 1.5, and Peterson said it was 1.6. So we'll combine them for an average of 1.55, and we'll report it that way."

The alternative is to say, "Send me the data from your trial. We will get all the individual patient data and then we will run an analysis on that ourselves." I have been involved in those meta-analyses and they are certainly the soundest.
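
For concreteness, here is what the first (aggregate-data) approach might look like in Python. It combines the two ratios from the example above on the log scale with inverse-variance weights, the standard fixed-effect combination, rather than taking a naive average; the standard errors are invented.

```python
# A minimal sketch of aggregate-data meta-analysis: pooling two
# published ratios (1.5 and 1.6, per the example above) on the log
# scale with inverse-variance weights. The SEs are hypothetical.
import math

studies = [(1.5, 0.20), (1.6, 0.25)]  # (ratio, SE of log ratio)

weights = [1 / se ** 2 for _, se in studies]
pooled_log = sum(w * math.log(r)
                 for (r, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled ratio = {math.exp(pooled_log):.2f} "
      f"(95% CI {math.exp(pooled_log - 1.96 * pooled_se):.2f} "
      f"to {math.exp(pooled_log + 1.96 * pooled_se):.2f})")
```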

Dr. Black: Yes, that approach is much better. The issue with our hypertension trialists group is that it's sometimes tough to get the individual data. There were 160,000 volunteers in the initial hypertension trialists study, and 40,000 came from ALLHAT [Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial]. There the answer is dominated by one large study, as opposed to the smaller studies, of which not much can be made on their own. What is the difference between a random-effects model and a fixed-effects model in a meta-analysis?

Dr. Vickers: I'm not biased, but I would say ignore the random effects because I'm a fixed effects kind of guy. Essentially, a fixed-effects model says, "Let's look at the results of the trials that we actually have in front of us and come to an average of those." The random-effects model says, "Let's imagine that there was a hypothetical universe of a hypothetical population of trials, and let's randomly sample from those." How that breaks down statistically is that random-effects models down-weight the effects of large trials. Fixed-effects models weight large trials more heavily.
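
A small numerical sketch makes the weighting difference concrete. The four trials below are invented: one large trial and three small ones with somewhat different results. The random-effects weights use the DerSimonian-Laird estimate of between-trial variance, one common choice.

```python
# A minimal sketch contrasting fixed-effect and random-effects
# (DerSimonian-Laird) weights on invented data: one large trial
# and three small ones, all effects on the log scale.
import numpy as np

effects = np.array([0.10, 0.80, 0.90, 0.70])    # hypothetical log ratios
variances = np.array([0.01, 0.20, 0.25, 0.22])  # large trial first

# Fixed effect: weight purely by precision (inverse variance).
w_fixed = 1 / variances

# DerSimonian-Laird: estimate between-trial variance tau^2 and add it
# to every trial's variance, which flattens the weights.
pooled = np.sum(w_fixed * effects) / np.sum(w_fixed)
q = np.sum(w_fixed * (effects - pooled) ** 2)
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - (len(effects) - 1)) / c)
w_random = 1 / (variances + tau2)

for name, w in (("fixed ", w_fixed), ("random", w_random)):
    print(name, "weight shares:", np.round(w / w.sum(), 2))
# The large trial dominates the fixed-effect weights (about 88%) but
# gets a much smaller share (about 47%) under random effects, as
# Dr. Vickers describes.
```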

Dr. Black: Okay. Dr. Vickers, thank you very much. I enjoyed this conversation and I hope you have as well.

Dr. Vickers: Yes. Thank you for having me.
