Screening for West Nile Virus in Organ Transplantation: A Medical Decision Analysis

Bryce A. Kiberd; Kevin Forward


American Journal of Transplantation. 2004;4(8) 



A medical decision analysis was created comparing two strategies, 'screen' and 'no screen'. In the 'no screen' strategy, the model assumed that deceased donors had the same likelihood of being infective for West Nile virus (WNV) as blood donors. Because prevalence varies by region and season, a range of prevalence rates was examined. Patients receiving an organ from an infected donor had a baseline case fatality rate of 0.25 (range 0.10-0.75).[5] As few cases have been reported, a wide range of case fatality rates was examined. The outcome of interest was life years.

In the 'screen' strategy we assumed that a viral nucleic acid amplification test would be employed, using either reverse transcriptase polymerase chain reaction (RT-PCR) or nucleic acid sequence-based amplification (NASBA). We did not examine serologic testing, because the evidence to date shows that it is unreliable for detecting donors capable of transmitting infection (although it is an excellent test for diagnosing recent infection). In the cases of transmission through blood transfusion, serologic tests were negative whereas nucleic acid tests were positive.[6] Among donors who are serologically IgM positive, <10% are positive for virus.[7] The viremic phase may last only 2-15 days, whereas IgM positivity may persist for more than 90 days, with only several days of overlap.[3] We assumed that donors with nonreactive peripheral blood samples did not have infected organs; assuming otherwise would reduce the benefit of screening.

We also assumed that the screening test would be available even for the more time-sensitive transplants, such as heart and liver. Organs screening positive (including false positives) were assumed to be discarded. We estimated the test characteristics from the published and network literature ( Table 1 ). Organs with false-negative results were assumed to be transplanted and to transmit disease as in the 'no screen' strategy. The impact of a discarded organ was captured by assuming that the benefit of that transplant was lost and that the wait-listed patient remained on the list.
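The structure of the two strategies can be summarized as a simple expected-value calculation per offered organ. The sketch below is illustrative only: the function name and all parameter values are our own placeholders, not the model's actual inputs, and it collapses the paper's multi-state survival modeling into single life-year terms.

```python
# Simplified expected-value sketch of the 'screen' vs 'no screen' strategies.
# All names and numbers are illustrative placeholders, not the paper's inputs.

def expected_life_years(prev, sens, spec, cfr, ly_tx, ly_wait, screen):
    """Expected life years per offered organ.

    prev    - probability the donor is WNV-infective
    sens    - screening test sensitivity (ignored if screen=False)
    spec    - screening test specificity (ignored if screen=False)
    cfr     - case fatality rate of donor-transmitted WNV disease
    ly_tx   - life years after an uncomplicated transplant
    ly_wait - life years for a patient who remains on the wait list
    """
    if not screen:
        # Every organ is transplanted; infected organs transmit disease,
        # and death from WNV is approximated as zero further life years.
        return prev * (1 - cfr) * ly_tx + (1 - prev) * ly_tx
    # Screen: reactive organs (true and false positives) are discarded,
    # and the wait-listed patient remains on the list.
    infected = prev * (sens * ly_wait + (1 - sens) * (1 - cfr) * ly_tx)
    uninfected = (1 - prev) * ((1 - spec) * ly_wait + spec * ly_tx)
    return infected + uninfected
```

Comparing the two expected values shows the trade-off the model captures: screening averts WNV deaths among infected organs but, through false positives, returns some recipients of uninfected organs to the wait list.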

Baseline mortality probabilities while on the wait list and after transplantation for recipients of the various organs were abstracted from the United Network for Organ Sharing (UNOS) and the literature ( Table 2 ). We assumed a 25-year horizon with an annual 5% discount rate for life years. Kidney recipients whose allografts failed were returned to permanent dialysis, with an assumed higher mortality rate than that of dialysis patients active on the wait list. To examine the impact on transplantation in the US, we used the numbers transplanted in 2002 from UNOS. We assumed that the number of combined organ transplants (i.e. heart-kidney or kidney-liver) was negligible, and we did not include lung, small intestine, or pancreas-alone transplantation. The software used was Data 4.0 (TreeAge Software, Inc., Williamstown, MA). Figure 1 shows the decision tree for a heart transplant. We performed one-, two-, and three-way sensitivity analyses on test sensitivity and specificity, wait-list and post-transplant mortality, and the prevalence of infected donors.
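The 25-year horizon with 5% annual discounting can be made concrete with a short sketch. This is a simplification under an assumed constant annual mortality rate; the model itself uses the UNOS-derived, state-specific rates in Table 2.

```python
# Hedged sketch of discounted life years over a fixed horizon, assuming a
# constant annual mortality rate (the paper uses UNOS-derived rates).

def discounted_life_years(annual_mortality, horizon=25, discount=0.05):
    """Expected discounted life years for a cohort followed over `horizon`
    years, crediting each year survived at its present value."""
    alive = 1.0   # fraction of the cohort still alive
    total = 0.0
    for year in range(horizon):
        alive *= (1 - annual_mortality)                # survive this year
        total += alive / (1 + discount) ** (year + 1)  # discount to present
    return total
```

A one-way sensitivity analysis then amounts to recomputing the model's output while sweeping a single input (for example, `annual_mortality`) across its plausible range and holding the others at baseline.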

Figure 1. Medical decision tree for heart transplantation. Sens = sensitivity, spec = specificity, CHF = congestive heart failure health state, CHFamr = CHF annual mortality rate, Htransplant = first year after heart transplantation health state, Htransplant2 = after the first year with a heart transplant health state, Txmr1 = mortality within the first year after heart transplantation, Txmr = mortality after the first year after heart transplantation, CFR = case fatality rate of West Nile virus (WNV) disease.