This study used a mixed-methods concurrent triangulation design to investigate unobserved subgroups of staff who provide HIV testing in Florida and how staff characteristics affect engagement in PrEP implementation. In this study, PrEP implementation is defined as the degree to which PrEP is discussed and/or a referral to a prescriber occurs. The study was conducted in Florida, a state with both rural and urban designations as well as high rates of HIV incidence. Data were collected and analyzed concurrently and triangulated during data analysis and interpretation. The study was approved by the Institutional Review Board at the University of South Florida. Participants electronically provided informed consent.
Participants (ie, HIV testing staff) were recruited between February and May of 2018 through email to complete a 15- to 20-minute online assessment administered through Qualtrics. Contact information for publicly funded HIV testing sites in Florida is freely available on the Internet. Administrators at each community-based, publicly funded testing site were contacted through email with a request to share the survey with staff who perform HIV testing and counseling. Unsuccessful attempts and requests for no further contact were logged daily. Organizations were contacted up to 4 times (ie, a prenotice, followed by up to 3 additional contacts that included the survey link). At the end of the quantitative assessment, participants were asked whether they would like to enter a raffle for 1 of 3 $50 gift cards and whether they were willing to be contacted for an in-depth interview. Interview participants were selected from those who indicated interest using quota sampling to ensure inclusion of participants with a diverse range of PrEP implementation experiences. Participants who took part in the qualitative interview received a $20 gift card.
PrEP implementation group was determined using a latent class analysis (LCA) based on how participants answered a predetermined set of questions regarding multifaceted PrEP implementation: (1) Overall, how often do you talk to clients about PrEP when testing/counseling for HIV?; (2) I talk to clients about PrEP every time I test for HIV; (3) I talk to clients about PrEP when I think they might be eligible (meet the indications to start taking PrEP); (4) I give physical information about PrEP (such as pamphlets, flyers, and written contact information for PrEP-friendly providers) to clients during HIV testing/counseling; and (5) Overall, how often do you give clients physical information about PrEP (such as pamphlets, flyers, and written contact information for PrEP-friendly providers) during HIV testing/counseling? All items were categorical and measured on a 5-point scale (items 1 and 5 from "I never do this" to "every time"; items 2, 3, and 4 from "strongly disagree" to "strongly agree"). This study referred to PrEP as a once-daily pill, as emerging methods of PrEP use such as on-demand PrEP were still being investigated at the time of this study. Participant characteristics were assessed, including age, gender, sexual orientation, race, ethnicity, employment status, HIV status, and previous or current personal use of PrEP. The in-depth interview guide was based on the Consolidated Framework for Implementation Research.
A total of 150 HIV testing staff from 48 organizations were included in quantitative analysis. The qualitative sample size was based on saturation, the point at which no new themes emerge from the data. Saturation was reached at 22 participants.
Quantitative data were exported from Qualtrics into SPSS v.24. Data were cleaned and examined for suspicious and repeat responses. Forty-nine participants were excluded from analysis: 12 did not meet inclusion criteria and thus were unable to continue to the survey, an additional 18 did not proceed past the consent, and 19 completed only between 34% and 55% of the survey. These 19 participants had not yet completed the demographic questions, so it was not possible to compare their demographic information to that of those who completed the survey in its entirety. However, when compared on key variables (eg, existence/nonexistence of an organizational PrEP policy), these participants did not differ significantly from the analytic sample. In addition, 21 IP addresses were listed more than once. This was expected, as some organizations share IP addresses among employees. These responses were determined to be unique based on investigation of survey answers and demographic characteristics. Descriptive statistics were computed for the remaining analytic sample.
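The shared-IP check described above can be sketched in pure Python (the record structure and field name "ip" are hypothetical; the actual cleaning was done in SPSS). Addresses appearing on more than one record are flagged for manual comparison of survey answers and demographics:

```python
from collections import Counter

def flag_shared_ips(records):
    """Return IP addresses that appear on more than one survey record.

    Flagged responses would then be manually inspected to confirm they
    come from distinct respondents (eg, coworkers behind one org IP).
    """
    counts = Counter(r["ip"] for r in records)
    return {ip for ip, n in counts.items() if n > 1}

# Hypothetical example records
records = [
    {"ip": "10.0.0.1", "age": 29},
    {"ip": "10.0.0.1", "age": 44},  # shared organizational IP address
    {"ip": "10.0.0.2", "age": 35},
]
print(flag_shared_ips(records))  # {'10.0.0.1'}
```

Flagging rather than dropping matches the study's approach: duplicates were retained once confirmed unique.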
LCA[34,35] was used to determine PrEP implementation groups. The LCA was performed using MPlus v.8; all other analyses were performed using SPSS v.24. The LCA technique groups participants based on similarities in how they answer a predetermined set of questions. Participants are then categorized based on the likelihood that they belong to a given class. Five items were included in the LCA (see Table 3 under "Results"), asking participants to rate the degree to which they participated in various dissemination activities related to PrEP. The final LCA and corresponding latent classes were determined based on fit indices [eg, the Bayesian Information Criterion (BIC) and the Lo-Mendell-Rubin (LMR) test] and theoretical interpretation. In interpreting the BIC, the lower the score, the better the model fit. For the LMR test, it is suggested that researchers identify the model that produces a nonsignificant LMR value and select the solution with 1 less class (k − 1). Theoretical interpretation included examining how participant responses related to the existing literature.
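Because the LCA itself was fit in MPlus, the model-selection logic can only be sketched here. A minimal illustration of the BIC comparison (BIC = −2·logL + k·ln n, lower is better), using hypothetical log-likelihoods and parameter counts for 2-, 3-, and 4-class solutions:

```python
import math

def bic(log_likelihood, n_params, n_obs):
    # Bayesian Information Criterion: penalizes model complexity,
    # so adding classes must improve fit enough to justify the extra parameters.
    return -2.0 * log_likelihood + n_params * math.log(n_obs)

# Hypothetical (log-likelihood, free parameters) per class count, n = 150 staff;
# real values would come from the MPlus output for each candidate solution.
fits = {2: (-950.0, 11), 3: (-930.0, 17), 4: (-925.0, 23)}
scores = {k: bic(ll, p, 150) for k, (ll, p) in fits.items()}
best = min(scores, key=scores.get)
print(best)  # 3 — with these hypothetical values, the 3-class model wins
```

Here the 4-class model fits slightly better in raw likelihood, but the BIC penalty makes the 3-class solution preferable, which mirrors how the indices are interpreted in the text.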
To account for clustering (ie, the groups of participants working within the same organization), generalized linear mixed models with multinomial distribution, logit link, and robust variance estimator were used to estimate PrEP implementation as a function of key demographic characteristics.
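Under the multinomial logit specification described above, with staff member i nested in organization j, the model can be written as follows (a sketch; the choice of reference class and covariate set is assumed here, not specified in the text):

```latex
\log\frac{P(Y_{ij}=c)}{P(Y_{ij}=r)}
  = \beta_{0c} + \mathbf{x}_{ij}^{\top}\boldsymbol{\beta}_{c} + u_{jc},
\qquad u_{jc} \sim N(0,\sigma_{c}^{2})
```

where \(Y_{ij}\) is the PrEP implementation class, \(r\) the reference class, \(\mathbf{x}_{ij}\) the demographic covariates, and \(u_{jc}\) an organization-level random intercept capturing the within-organization clustering.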
Qualitative interviews were transcribed, verified for accuracy by the primary author, imported into MaxQDA, and analyzed thematically. The primary author segmented all transcripts based on topic. An initial codebook was created based on the Consolidated Framework for Implementation Research guidelines and emerging codes that arose while conducting the interviews and verifying transcription accuracy. Two researchers trained in qualitative data analysis coded the same transcript independently before discussing revisions and edits to the codebook. The same researchers coded an additional transcript to refine the codebook. After agreement on the codebook, 4 transcripts were independently coded to calculate inter-rater reliability (IRR). On this first attempt, the overall kappa was κ = 0.75. The same 2 coders reviewed these transcripts, discussed interpretation and clarification of codes, and again attempted IRR with 4 new transcripts. IRR was reached, with an overall κ = 0.86. The remaining transcripts (n = 12) were coded by the primary author. Trustworthiness of the qualitative data was examined using Guba's model of trustworthiness of qualitative research.[40–42]
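The IRR statistic above is Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. A minimal pure-Python sketch of the two-coder computation (the study's overall kappa came from MaxQDA and may aggregate across many codes, so this only illustrates the basic statistic):

```python
def cohen_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders applying categorical codes to the same segments."""
    n = len(codes_a)
    # Observed agreement: proportion of segments both coders labeled identically.
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected chance agreement from each coder's marginal code frequencies.
    cats = set(codes_a) | set(codes_b)
    p_e = sum((codes_a.count(c) / n) * (codes_b.count(c) / n) for c in cats)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by two coders to 6 transcript segments.
coder1 = ["A", "A", "B", "B", "A", "B"]
coder2 = ["A", "A", "B", "A", "A", "B"]
print(round(cohen_kappa(coder1, coder2), 3))  # 0.667
```

Here the coders agree on 5 of 6 segments (83% raw agreement), but kappa drops to about 0.67 once chance agreement is removed, illustrating why thresholds such as the study's κ = 0.86 are more demanding than percent agreement.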
Data integration was included in the study design, methods, and interpretation. During data collection and analysis, connecting was used to identify participants for the qualitative phase.[26,43] Data were also merged after data collection, a technique that involves combining the quantitative and qualitative data sets.[43,44] During data interpretation, a narrative approach and joint displays of data were used to integrate the data.
J Acquir Immune Defic Syndr. 2020;83(5):467-474. © 2020 Lippincott Williams & Wilkins