Growth in Proportion and Disparities of HIV PrEP Use Among Key Populations Identified in the United States National Goals

Systematic Review and Meta-Analysis of Published Surveys

Emiko Kamitani, PhD; Wayne D. Johnson, PhD; Megan E. Wichser, MPH; Adebukola H. Adegbite, MPH; Mary M. Mullins, MSLS; Theresa Ann Sipe, PhD

J Acquir Immune Defic Syndr. 2020;84(4):379-386. 

Methods

We implemented a 2-step systematic literature search to identify PrEP-related citations in the CDC HIV/AIDS Prevention Research Synthesis (PRS) Project database. The process of creating a comprehensive systematic literature search strategy in MEDLINE, EMBASE, PsycINFO, and CINAHL for the PRS database has been previously published (see Appendix I, Step 1, Supplemental Digital Content, https://links.lww.com/QAI/B453).[13] The search for this review consisted of several queries of the PRS database (see Appendix I, Step 2, Supplemental Digital Content, https://links.lww.com/QAI/B453) and reference list checks of included citations, with the last search taking place in April 2018. We also searched for any newly published literature in PubMed using HIV and PrEP terms (last searched May 2019). All identified citations were uploaded to DistillerSR (Evidence Partners, Ottawa, Canada) for screening, data abstraction, and quality assessment.

Screening, Data Abstraction, and Quality Assessment

Inclusion criteria for this review were as follows: (1) primary studies reporting the number or proportion of PrEP users among study participants, (2) implementation in the United States, and (3) publication in English. A reviewer screened titles and abstracts to identify primary PrEP studies meeting these criteria; exclusions were validated by a second reviewer. Next, full texts of identified studies were screened again for eligibility. When abstracting data from eligible studies (N), we included only baseline data for prospective and intervention studies. Studies that used the same survey or data set were carefully screened, and only data from unique samples and subgroups (k) were included in this review. If a study spanned multiple years, the midpoint of the time span was used to represent the study year. For studies reporting proportions for both lifetime and current/recent use, we used the lifetime proportion. Full-text screening, data abstraction, and risk of bias assessment were conducted by 2 independent reviewers; conflicts were resolved through discussion. We contacted authors to obtain additional information for studies that did not report the data needed for this review.[14]
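As an illustration only, the following minimal Python sketch encodes two of the abstraction rules described above (midpoint year for multi-year studies, and preference for the lifetime proportion when both measures are reported). The function names are hypothetical; the authors' abstraction was performed in DistillerSR, not in code.

```python
# Illustrative sketch of two abstraction rules; names and helpers are
# hypothetical and do not reproduce the authors' tooling (DistillerSR).

def study_year(start_year, end_year):
    """Represent a multi-year study by the midpoint of its time span."""
    return (start_year + end_year) // 2

def prep_proportion(lifetime, current_or_recent):
    """Prefer the lifetime proportion when both measures are reported."""
    return lifetime if lifetime is not None else current_or_recent

print(study_year(2014, 2017))        # -> 2015
print(prep_proportion(0.06, 0.04))   # -> 0.06 (lifetime preferred)
```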

Risk of bias was assessed using the Newcastle–Ottawa Scale.[15] This scale, adapted for cross-sectional studies, assesses quality in 5 domains: selection of participants, sample size, comparability of respondents, ascertainment of PrEP uptake, and quality of descriptive statistics reporting. We further adapted the scale for this review. A strength of this scale is that its validity and reliability have been refined and established over time by several experts, whereas simpler risk of bias scales may not assess these other sources of bias.[16–19] Total scores were calculated by counting the number of "Yes" responses. The total possible value was 5 points (0 to 5), with 3 or more points considered "low risk of bias."[16–19]
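For concreteness, here is a minimal sketch of the scoring rule described above (one point per "Yes" across the 5 domains, with 3 or more points treated as low risk of bias). The domain keys and helper function are illustrative assumptions, not the authors' instrument.

```python
# Minimal sketch of the adapted 5-domain scoring rule; domain keys and the
# helper are illustrative and assume a simple Yes/No response per domain.
DOMAINS = [
    "selection_of_participants",
    "sample_size",
    "comparability_of_respondents",
    "ascertainment_of_prep_uptake",
    "quality_of_descriptive_statistics",
]

def risk_of_bias(responses):
    """Count 'Yes' responses and classify the study (>= 3 points = low risk)."""
    score = sum(1 for d in DOMAINS if responses.get(d) == "Yes")
    return score, ("low risk of bias" if score >= 3 else "high risk of bias")

# Example: "Yes" on 4 of the 5 domains -> (4, 'low risk of bias')
print(risk_of_bias({d: "Yes" for d in DOMAINS[:4]}))
```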

Data Synthesis

The review describes characteristics of included studies using narrative synthesis. First, studies were categorized as focusing on 1 or more of the 7 key populations. Studies combining MSM and transgender women and studies presumed to focus on MSM (eg, participants surveyed at gay pride events) were considered MSM studies. Next, because most studies focused on MSM, we analyzed MSM and non-MSM studies separately. For non-MSM studies, we created subgroups for each key population (eg, black non-MSM). We also created 2 time groups based on the CDC PrEP clinical guidelines: pre-guideline (in or before 2014) and post-guideline (after 2014).

We conducted a series of analyses. First, we estimated pooled proportions of participants reporting PrEP use across all studies and years using both fixed-effects and random-effects meta-analysis models because the proportions varied across studies. Random-effects models add a variance component, tau-squared (an estimate of the variance among the true effect sizes), to each study-specific variance.[20] Second, to assess trends in PrEP use, we estimated the pooled proportion of PrEP use for each study year. Third, we repeated this analysis for each key population and non-MSM subgroup in recent years (2015–2017). To assess differences in PrEP use, we compared 95% confidence intervals (CIs) for odds ratios (ORs). Fourth, we ran multivariable logistic regression models to estimate adjusted ORs for the overall MSM and non-MSM groups. Finally, mixed-effects logistic regression models estimated crude ORs for growth in PrEP use between the pre- and post-CDC PrEP clinical guideline eras, overall and for key populations and non-MSM subgroups.
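As a rough sketch of what the pooling step involves, the Python code below pools proportions on the logit scale with fixed-effects and random-effects (DerSimonian–Laird tau-squared) models, which are common defaults for pooling proportions. The authors' analyses were run in Comprehensive Meta-Analysis software, so this is an illustrative sketch rather than their implementation.

```python
# Illustrative pooling of study proportions on the logit scale with
# fixed-effects and DerSimonian-Laird random-effects models (assumptions:
# logit transformation and DL tau^2; not the authors' code).
import math

def pool_proportions(events, ns):
    """Return (fixed-effects proportion, random-effects proportion, tau^2)."""
    # Per-study logit event rates and variances; a 0.5 continuity correction
    # is applied throughout here for simplicity (the article applies it to
    # studies reporting zero PrEP use; see the sensitivity analyses below).
    y, v = [], []
    for e, n in zip(events, ns):
        a, b = e + 0.5, (n - e) + 0.5
        y.append(math.log(a / b))
        v.append(1.0 / a + 1.0 / b)

    # Fixed-effects (inverse-variance) pooled logit.
    w = [1.0 / vi for vi in v]
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

    # DerSimonian-Laird tau^2: excess of Cochran's Q over its degrees of freedom.
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)

    # Random-effects model: add tau^2 to each study-specific variance.
    w_re = [1.0 / (vi + tau2) for vi in v]
    random_ = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)

    # Back-transform pooled logits to proportions.
    def expit(x):
        return 1.0 / (1.0 + math.exp(-x))

    return expit(fixed), expit(random_), tau2

# Example with three hypothetical studies (PrEP users, participants).
print(pool_proportions([3, 12, 40], [200, 350, 500]))
```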

We assessed heterogeneity with the I² index, which quantifies the degree of true heterogeneity (ie, between-study variability) rather than variability due to sampling error within studies; heterogeneity of 75% or greater was considered high because it indicates that three-quarters of the total variability among effect sizes is caused by variation between studies, which could lead to biased interpretation of results.[20,21] To reduce conceptual heterogeneity caused by population and outcome variations, we grouped studies into 3 periods based on the number of studies as well as the FDA approval in 2012 and the clinical guideline release in 2014:[22] 2004–2012, 2013–2014, and 2015–2017; we then sorted them by key population and non-MSM subgroup.
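For reference, the I² index has the standard definition below in terms of Cochran's Q statistic (standard formulas, not reproduced from the article), where k is the number of studies, w_i are the inverse-variance weights, and the bar denotes the fixed-effects pooled estimate.

```latex
% Standard definition of the I^2 heterogeneity index (not from the article).
I^{2} = \max\!\left(0,\; \frac{Q - (k - 1)}{Q}\right) \times 100\%,
\qquad
Q = \sum_{i=1}^{k} w_i \left(y_i - \bar{y}_{\mathrm{FE}}\right)^{2}
```

Under this definition, I² of 75% or greater means that at least three-quarters of the observed variability among effect sizes reflects between-study differences rather than within-study sampling error.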

We conducted sensitivity analyses to assess bias. For studies reporting no PrEP use among participants, we applied a continuity correction, adding 0.5 (half of an individual) to the event and nonevent values to compute logit event rates.[23,24] To assess the bias introduced by this strategy, we compared the estimated overall proportion for all identified studies with the proportion excluding studies reporting zero PrEP use. To assess the stability of the results, we used fixed-effects models to present simple percentages in each category. We conducted multivariable logistic regression comparing proportions between the overall MSM and non-MSM subgroups in recent years, excluding surveys reporting use in the past 12 months (ie, including ever, current, and past 6-month use). To assess bias because of overlap between key populations (eg, black MSM), we also conducted sensitivity analyses comparing the estimated proportion for each key population stratum with the proportions for each subgroup stratum (eg, black stratum vs. black MSM substratum).
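The continuity-corrected logit event rate and its variance take the standard form below (standard formulas for e_i PrEP users among n_i participants in study i; not reproduced from the article); the pooled logit is then back-transformed to a proportion.

```latex
% Continuity-corrected logit event rate, its variance, and the back-transform
% (standard formulas; e_i = PrEP users, n_i = participants in study i).
y_i = \ln\!\left(\frac{e_i + 0.5}{n_i - e_i + 0.5}\right),
\qquad
\operatorname{Var}(y_i) = \frac{1}{e_i + 0.5} + \frac{1}{n_i - e_i + 0.5},
\qquad
\hat{p}_i = \frac{e^{y_i}}{1 + e^{y_i}}
```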

All meta-analyses were conducted using Comprehensive Meta-Analysis software, Versions 2 and 3 (Biostat, Englewood, NJ). P values less than 0.05 and nonoverlapping 95% CIs for ORs were considered statistically significant.
