Results
Thirty publications describing 39 studies were included in the meta-analysis.[5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34] In the following text, each publication is referred to as one study, even if reporting both pre- and postadmission events.
All 30 publications included in the meta-analysis were located through our replicated search, and all met inclusion criteria. We identified 4 additional and apparently eligible citations not included in the meta-analysis.[35,36,37,38] Two[35,36] were reports from the Boston Collaborative Drug Surveillance Program and thus were linked to 2 studies included in the meta-analysis.[21,22] It is unclear why the remaining 2 studies[37,38] were excluded from the meta-analysis, because both report the number of patients with drug reactions. Five additional citations were published after the meta-analysis search cutoff date,[39,40,41,42,43] but because they fall outside the main objectives of this critique, they are not considered further.
Of the 30 publications included, 18 reported preadmission ADRs and 18 reported postadmission ADRs, as shown in Table 2. Our preadmission study count is composed somewhat differently from the 21 reported in the meta-analysis. In that report, the 21 preadmission studies included 5 hospital reports from one study.[22] We considered this publication as one multi-site report, reducing the preadmission total by 4, to 17. However, we also found that one study listed in the meta-analysis as reporting only postadmission results also contained preadmission results, bringing the total of preadmission studies to 18.[17]
Eight studies were conducted between 1960 and 1970, 7 between 1970 and 1980, 3 between 1980 and 1990, and 4 after 1990. In addition, 4 studies crossed 2 decades, and 4 studies (published in 1976, 1980, 1984, and 1994) did not report the specific time period studied.
Most studies were conducted in teaching hospitals (k=23), and 3 were conducted in nonteaching hospitals. Three other studies reported results from a multiple-site collaboration including both teaching and nonteaching hospitals, and one did not report the type of hospital in which it was conducted. Most studies monitored medical and surgical wards only; 4 studies monitored pediatric wards, 2 monitored psychiatric wards, and 9 monitored multiple units, including intensive care units, obstetrics units, or both.
The patients monitored were limited to elderly patients in 2 studies, adults in 4 studies, and pediatric patients in 4 studies; 19 studies monitored all admitted patients. One study did not report the type of patients monitored. Furthermore, patient information was sparsely reported. Although age was sometimes reported for cases with ADRs, age for all monitored patients was reported in only 6 studies. The average number of drugs per patient for all monitored patients was reported in 7 studies.
Surveillance techniques to identify ADRs varied widely: patient and family interviews; spontaneous reporting of incidents; chart reviews for predefined indicators; and reviews of discharge summaries, diagnosis codes, and medication sheets. Surveillance was performed by a variety of personnel, including house staff, nurses, and pharmacists. In many of the original studies, ADRs were not verified by a second reviewer, and when a second reviewer was employed, inter-rater reliability was rarely reported. Whether surveillance was performed blind to patient outcomes was rarely noted. In some studies, the primary objective was not to identify ADRs but to identify iatrogenic illness[16,26,28,32] or to compare one reporting system with another.[8,15,30]
In determining the incidence of ADRs for each source study, we observed numerous sources of heterogeneity. These sources of heterogeneity are described further below, are summarized in Table 3, and are listed by study in Tables 4 and 5.
Event definition. In the included study set, event definitions varied widely, from the WHO definition of adverse reactions in one study only,[5] to definitions that seemed to approximate but not completely overlap the WHO definition, to definitions that were quite different from WHO.[10,11,15,18,19,20,21,22,29,31] Nine studies did not provide an adverse event definition at all.[9,14,17,23,24,25,26,28,32] The remaining 7 studies provided other event definitions, such as ADE,[28] drug-induced illness,[12] drug-related admission (including ADRs and drug noncompliance),[13] iatrogenic disease,[16] and drug side effects.[27,33] ADRs, therefore, were not defined with sufficient consistency across studies to permit pooling of study results in a meta-analysis.
Event preventability. Many studies did not use an event definition that allowed assignment of preventability. We used clinical judgment to label some events as preventable (eg, medication errors, overdoses, or allergic reactions) and others as not preventable (eg, idiosyncratic reactions), but it is not clear how the original meta-analysts dealt with these ambiguities. Only 8 studies clearly specified nonpreventable adverse reactions.[5,6,7,8,9,16,30,34]
Derivation of numerators. The meta-analysis reported using ADR incidences that were calculated by dividing the number of patients with events (for all-severity, serious and fatal events) by the total number of admissions. Although the number of patients with all-severity events was reported in most papers, it was rarely reported for serious events or probable/definite events. In several studies, only the number of events, not the number of patients with events, was categorized as serious vs nonserious, or possible vs probable/definite.
In studies not reporting the number of patients with events, the meta-analysts derived the number of patients with events from the number of events. For example, one study[11] reported a total of 304 reactions occurring in 237 patients. This same study also reported that 46% of the 304 reactions were judged definitely or probably related to the drug and 10% were judged severe. The meta-analysts apparently imputed the number of patients with probable/definite ADRs by multiplying the number of patients with events by the proportion of probable/definite events (46% of 237). The number of patients with serious events was derived in a similar way.
Another study of 830 patients reported 405 ADRs (including errors in administration) in 291 patients.[10] Of these 405 events, 293 reactions were investigated and 69% (or 202) were found to be definitely or probably related to a drug. The meta-analysts imputed the number of patients with probable/definite events by applying 69% to the total number of patients with ADRs (291) and concluded that 200 patients had probable/definite events. The study also reported that 25% of the 293 investigated reactions were major ADRs. The meta-analysts seem to have derived the number of patients with major, probable/definite events by applying the proportion of major events to the number of patients with probable/definite events, which was already an imputed numerator.
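To make this compounding concrete, the following sketch reproduces the imputation arithmetic for these two studies. The figures come from the descriptions above, but the code is our illustration of the apparent derivation, not the meta-analysts' actual procedure.

```python
# Illustration of the numerator imputations described above; the
# percentages are event-level proportions applied to patient counts.

# Study [11]: 304 reactions in 237 patients; 46% of reactions judged
# probable/definite and 10% judged severe.
patients_with_events = 237
imputed_prob_definite = 0.46 * patients_with_events  # ~109 patients
imputed_severe = 0.10 * patients_with_events         # ~24 patients

# Study [10]: 405 ADRs in 291 patients; of 293 investigated reactions,
# 69% were probable/definite and 25% were major.
patients_with_adrs = 291
imputed_pd = 0.69 * patients_with_adrs  # 200.79, reported as 200
imputed_major_pd = 0.25 * imputed_pd    # ~50: an imputation applied to an imputation

print(imputed_prob_definite, imputed_severe, imputed_pd, imputed_major_pd)
```

The second study's final numerator thus rests on two stacked assumptions: that event-level proportions transfer to patient counts, and that the first imputed count is itself accurate.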
Therefore, we determined that the ADR incidence calculations in the meta-analysis were based on numerators that were, in some cases, imputed. It is difficult to know whether these derived numbers consistently overestimate or underestimate the rates of ADRs; they do, however, add considerable imprecision to the resulting analyses. For example, if serious ADRs are not distributed evenly across patients but instead cluster in a few patients, a derived number of patients with serious ADRs will overestimate the real number.
More important are the variability and lack of precision that imputation adds to any subsequent estimation procedure when estimated data are used in the calculations. In some cases, this imputation was done in several steps, with each additional step introducing a wider margin of variability.
Lastly, for studies not reporting the number of patients with events and in which data could not be derived, as described above, the meta-analysts apparently used the number of events as numerators for their calculations.[5,6,7] This again may lead to an overestimation of the incidence, because the number of events can be much greater than the number of patients experiencing these events.
Choice of denominators. All of the above concerns relate to the choice of numerator for incidence calculations. However, the chosen denominator is also arguably incorrect. Ideally, it should be the number of patients using prescription drugs, as opposed to the number of patients admitted to the hospital, because only patients exposed to drugs are at risk of an ADR.
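A hypothetical example (all counts invented) shows the direction of the resulting error: dividing by all admissions dilutes the incidence relative to the drug-exposed population actually at risk.

```python
# Hypothetical counts, for illustration only.
patients_with_adrs = 50
admissions = 1000   # denominator used in the meta-analysis
drug_exposed = 800  # assumed number of admitted patients actually receiving drugs

print(f"per admission:       {patients_with_adrs / admissions:.2%}")    # 5.00%
print(f"per exposed patient: {patients_with_adrs / drug_exposed:.2%}")  # 6.25%
```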
Drug relationship. According to the description of the methodology for the meta-analysis, possible events were removed from the incidence calculations. We found only 6 preadmission event studies[15,20,24,25,29,31] and 5 postadmission event studies[8,15,29,31,34] that provided the number of patients with "probable/definite ADRs" as distinct from possible events. The remaining studies either did not report whether drug relationship was assessed[6,7,9,12,13,14,17,18,19,21,22,23,26,27,28,30,32,33] or reported possible vs probable/definite relationship[5,10,11,16,26] only by number of events, not by patients.
Studies that did not mention whether possible events were included were nonetheless retained in the meta-analysis, and no sensitivity analyses evaluating the impact of these studies on the results were reported.
The identification of serious events was not as straightforward as initially expected. Some preadmission event studies[14,15,17] noted that, in some cases, the ADR was not the main reason for admission but rather an event coincident with the actual cause of admission. This observation belies the assumption that all preadmission ADRs were, by definition, serious because they resulted in hospitalization. In addition, only one postadmission study reported serious events.[8]
Similarly, the incidence of fatal events was not often reported. In total, 6 of 18 (33%) preadmission event studies[12,18,19,20,24,29] and 10 of 18 (56%) postadmission event studies reported the number of deaths, including 3 studies that specifically reported observing no ADR-related death.[5,6,7,15,19,21,28,29,31,34] No other studies reported deaths. One study reported the number of deaths caused by iatrogenic diseases (including procedure complications) but did not provide this number for drug-related events as a distinct category.[16]
An analysis of fatal event incidence that uses only the studies specifically reporting fatal ADRs, as the meta-analysts did, is likely to overestimate the death rate dramatically, because it does not take into account studies in which no deaths were reported. If we assume that all studies reporting events but not mentioning fatal events had no ADR-related deaths, and include these studies in the meta-analysis, the pooled incidence might more accurately reflect the real incidence of drug-related deaths among hospitalized patients.
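A crude sketch with invented counts illustrates the direction of this effect; simple fixed pooling is used here rather than the random-effects model of the meta-analysis.

```python
# Invented (deaths, admissions) pairs, for illustration only.
reporting_fatal = [(3, 5000), (2, 4000), (4, 6000)]  # studies reporting fatal ADRs
assumed_zero = [(0, 3000), (0, 2500), (0, 4500)]     # event-reporting studies silent on deaths

def crude_pooled_rate(studies):
    deaths = sum(d for d, _ in studies)
    admissions = sum(n for _, n in studies)
    return deaths / admissions

print(f"fatal-reporting studies only: {crude_pooled_rate(reporting_fatal):.3%}")                # 0.060%
print(f"with assumed-zero studies:    {crude_pooled_rate(reporting_fatal + assumed_zero):.3%}")  # 0.036%
```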
Furthermore, in the studies reporting deaths, the strength of relationship to the drug was assessed in only 3 preadmission[20,24,29] and 4 postadmission[15,29,31,34] event studies. Preventability of fatal events was not addressed in any preadmission event studies and was addressed in only 4 postadmission event studies.[5,6,7,34]
Our attempt to replicate the statistical methodologies in the meta-analysis was hindered by a lack of detail in the original report. More importantly, given the many problems in the source studies described above, we decided it was inappropriate to pool such data in a meta-analysis. The other issues described below further militated against a decision to proceed.
Bias. Several sources of potential bias may affect the meta-analysts' results and conclusions, including both ascertainment and publication bias. Ascertainment bias may derive from the preponderance of teaching hospitals in the study set. These hospitals may have higher ADR and mortality rates than primary care facilities because of the severity of disease in their patients and the patterns of patient referral. This is evident in Table 5 of the original meta-analysis publication, which shows a higher mean age, longer length of stay, and a greater proportion of males in the studies in the meta-analysis than reported in all US hospitals; these are all factors that may be associated with higher mortality. Publication bias may be present if nonteaching community hospitals are less likely to publish ADR surveillance results. Because these facilities may well have less severely ill patients, excluding their studies may lead to an overestimate of the incidence of events. The meta-analysts did state that they attempted to address publication bias through mailings to selected researchers, but no details of this methodology or its results were given.
Heterogeneity. The meta-analysts did not explore statistical heterogeneity in their results, except to note their use of a random-effects model and their reliance on confidence intervals. It is not clear how the meta-analysts estimated the variance for each study. No statistical assessment of the degree of heterogeneity present in the meta-analysis models is given, nor are any results from post hoc analyses used to adjust for the sources of heterogeneity examined. After fitting a random-effects model and a meta-regression analysis, sensitivity analyses of these results can identify specific studies and specific covariates that are important sources of heterogeneity. Once these are identified, additional meta-analyses could be fit, excluding studies associated with significant covariates and study heterogeneity. This analysis was not performed or discussed in the meta-analysis.
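As a sketch of the kind of assessment that was omitted, Cochran's Q and the I² statistic quantify heterogeneity across study-level incidences; the proportions and sample sizes below are hypothetical.

```python
import numpy as np

# Hypothetical per-study ADR incidences and numbers of monitored patients.
p = np.array([0.02, 0.06, 0.11, 0.04, 0.15])
n = np.array([800, 1200, 400, 2000, 300])

w = n / (p * (1 - p))                 # inverse-variance weights for proportions
p_pooled = np.sum(w * p) / np.sum(w)  # fixed-effect pooled incidence

Q = np.sum(w * (p - p_pooled) ** 2)   # Cochran's Q statistic
df = len(p) - 1
I2 = max(0.0, (Q - df) / Q)           # share of variation beyond chance

print(f"Q = {Q:.1f} on {df} df, I^2 = {I2:.0%}")
```

A large Q relative to its degrees of freedom, or an I² near 100%, would flag exactly the kind of heterogeneity that should prompt sensitivity analyses before any pooling.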
The meta-analysts do caution that methodologic variability of the source studies is a limitation of meta-analysis, yet they proceeded to pool the data anyway. When heterogeneity is as great as it is in the source studies here, we contend it is inappropriate to combine results in a meta-analysis.
Extrapolation with small numbers. With regard to fatal ADRs, the incidences are very low. The use of such small numbers in calculations and imputations is likely to introduce large errors. There will be a bias toward inflating these rates when the reported numbers are so small. Extrapolations based on these small numbers for the overall fatal ADR incidence using the total hospital admissions in the general population as the denominator can grossly overestimate the incidence of fatal ADRs.
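A hypothetical calculation (all numbers invented) shows how sensitive such an extrapolation is: a difference of a single death in the source data shifts the projected national total by thousands.

```python
# Hypothetical extrapolation of a fatal ADR rate to national admissions.
us_admissions = 33_000_000  # assumed annual US hospital admissions, illustrative
monitored = 10_000          # assumed patients monitored in the source studies

for deaths in (2, 3, 4):
    rate = deaths / monitored
    print(f"{deaths} deaths -> rate {rate:.3%} -> "
          f"{rate * us_admissions:,.0f} projected deaths")
```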
Another issue related to small numbers is the width of the reported confidence intervals. The usual method of estimating the standard deviation is based on the normal approximation to the binomial distribution and is not valid when there are fewer than five observations in a given category. Twelve of the 16 studies reporting any deaths reported fewer than five fatalities. A confidence interval based on the normal approximation will be narrower than the exact confidence interval. Results based on such small samples are inherently unstable, unreliable, and not meaningful.
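The contrast can be seen in a short sketch comparing the normal-approximation (Wald) interval with the exact Clopper-Pearson interval for a hypothetical count of 3 deaths among 2000 admissions.

```python
from scipy.stats import beta, norm

deaths, n = 3, 2000  # hypothetical small count
p = deaths / n

# Normal-approximation (Wald) interval -- the method criticized above.
z = norm.ppf(0.975)
se = (p * (1 - p) / n) ** 0.5
wald = (p - z * se, p + z * se)

# Exact Clopper-Pearson interval from the beta distribution.
exact = (beta.ppf(0.025, deaths, n - deaths + 1),
         beta.ppf(0.975, deaths + 1, n - deaths))

print(f"Wald:  ({wald[0]:.5f}, {wald[1]:.5f})")  # lower bound is negative here
print(f"Exact: ({exact[0]:.5f}, {exact[1]:.5f})")
```

With counts this small, the Wald interval's lower bound falls below zero and its upper bound sits well below the exact bound, understating the true uncertainty.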
Multiple comparisons. The meta-analysis makes no correction for correlated outcomes using multiple-comparison techniques. The meta-analysts did not address the hierarchical structure of the ADRs captured and reported in their results: fatal ADRs are a subset of serious ADRs, which, in turn, are a subset of all-severity ADRs. It is, therefore, not unexpected that the results for these three types of ADRs are similar. These analyses should be adjusted for the correlation among the ADR groups. Adjusting for these multiple comparisons will widen the confidence intervals for the incidence estimates, that is, reduce the precision of the estimates.
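A small simulation (with invented conditional probabilities) illustrates why estimates for the three nested categories move together rather than independently.

```python
import numpy as np

# Simulate nested counts per study: fatal within serious within all-severity.
rng = np.random.default_rng(0)
n_studies, n_patients = 500, 2000

all_adr = rng.binomial(n_patients, 0.06, n_studies)  # all-severity ADRs per study
serious = rng.binomial(all_adr, 0.30)                # drawn from the ADR subset
fatal = rng.binomial(serious, 0.05)                  # drawn from the serious subset

rates = np.vstack([all_adr, serious, fatal]) / n_patients
print(np.corrcoef(rates))  # positive correlations among the three incidence estimates
```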