A Practice Facilitation and Academic Detailing Intervention Can Improve Cancer Screening Rates in Primary Care Safety Net Clinics

Emily M. Mader, MPH, MPP; Chester H. Fox, MD; John W. Epling, MD, MSEd; Gary J. Noronha, MD; Carlos M. Swanger, MD; Angela M. Wisniewski, PharmD; Karen Vitale, MSEd; Amanda L. Norton, MSW; Christopher P. Morley, PhD

J Am Board Fam Med. 2016;29(5):533-542. 

Methods

Practice Facilitation and Academic Detailing Intervention

Primary care practices were recruited for enrollment based on their capacity to reach a high proportion of patients from the following populations: racial/ethnic minorities, patients with low socioeconomic status, the uninsured, patients from geographically isolated/rural locations, and Medicaid-eligible populations. Twenty-seven practices were approached for enrollment, all of which had established relationships with 1 of the 3 participating regional practice-based research networks (PBRNs).

Physicians, nurses, and other care providers at each practice received a 1-hour, continuing medical education–accredited academic detailing session (ADS) presented by a primary care physician with expertise in cancer prevention recommendations. After completing the ADS, enrolled practices received a minimum of 6 months of practice facilitation services provided by 1 of 4 trained practice facilitators (PFs) from 2014 to 2015. The intervention period was limited to 6 months because of the 1-year funding cycle of the project. All PFs had formal training in QI coaching in the health care setting, with a minimum of 2 years' experience in the field. The practice facilitation intervention targeted evidence-based strategies to increase breast cancer, cervical cancer, and colorectal cancer (CRC) screening (identified through the Centers for Disease Control and Prevention's Community Guide to Preventive Services[21]), as well as improvements related to electronic health record (EHR) data.

PFs met with key personnel at each practice, including medical directors, practice managers, and other clinical staff, to review current office workflows and policies, as well as clinic performance in cancer screening. Each practice was afforded flexibility in determining their specific interventions to accommodate differences in practice size, location, administrative structure, and performance priorities. Selected interventions were required to be considered evidence-based, as determined by the benchmarks established in the Centers for Disease Control and Prevention's Community Guide to Preventive Services. The project team leadership and the program officer of the funder jointly reviewed all interventions to ensure each met evidence-based criteria.

Data Collection and Analysis

Changes in Screening Rates. At both the initiation and the conclusion of the 6-month practice facilitation period, practices reported the aggregated number of all current patients within the eligible screening population (denominator) and the number of current patients who had received appropriate screening for breast cancer, cervical cancer, or CRC (numerator), according to the most recent screening guidelines from the US Preventive Services Task Force (USPSTF) and/or the American Cancer Society. Each practice followed the guideline of its choice based on provider preferences and the reporting procedures in effect at the time of project initiation. These counts were used to calculate a practice-level proportion of patients who had received appropriate screening, as documented in the practice EHR, both before and after the intervention. The criteria used to define current patients generally consisted of patients seen at least once within the past 1 to 3 years, depending on practice protocols. Cancer screening rates before and after the intervention were compared using 1-way repeated measures analysis of variance (ANOVA).
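With only 2 measurement occasions (before and after the intervention), a 1-way repeated measures ANOVA is mathematically equivalent to a paired-samples t test (the ANOVA F statistic equals t squared). The sketch below, in Python with SciPy and entirely hypothetical practice-level proportions, illustrates the comparison described in the text; it is not the authors' analysis code.

```python
import numpy as np
from scipy import stats

# Hypothetical practice-level screening proportions (one value per practice),
# each computed as screened patients / eligible patients from EHR counts.
pre = np.array([0.42, 0.55, 0.38, 0.61, 0.47, 0.50])
post = np.array([0.48, 0.60, 0.45, 0.63, 0.52, 0.58])

# With two time points, the one-way repeated-measures ANOVA reduces to a
# paired-samples t test; F = t**2 and the p values coincide.
t_stat, p_value = stats.ttest_rel(post, pre)
f_stat = t_stat ** 2
print(f"t = {t_stat:.3f}, F = {f_stat:.3f}, p = {p_value:.4f}")
```

The equivalence means the paired t test shown here yields the same p value the repeated measures ANOVA would for these two time points.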

Changes in Practice Staff Attitudes and Experiences. Surveys were administered to clinical care team staff (physicians, nurses, care coordinators) and office administrative staff working at each practice; the PFs administered the surveys directly following the ADS and again at the close of the 6-month practice facilitation period. The surveys collected anonymous demographic information and responses to questions regarding respondent attitudes and experiences (using a 5-point Likert scale). The language and question items used in the surveys were adapted from the National Cancer Institute's Survey of Primary Care Physicians' Recommendations & Practice for Breast, Cervical, Colorectal, & Lung Cancer Screening[22] and National Surveys of Colorectal Cancer Screening Policies & Practices,[23] as well as surveys developed by Houser et al[24] and the Michigan Department of Community Health;[25] the survey language was altered to adapt questions to a 5-point Likert scale structure. Unique identifiers were used to link the survey information from before and after the intervention; mean scores on responses were compared between the 2 measurement periods through paired-samples t tests.
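The linkage of pre- and postintervention surveys by unique identifier, followed by a paired-samples t test on mean scores, can be sketched as follows (Python with pandas and SciPy; the identifiers and Likert scores are invented for illustration):

```python
import pandas as pd
from scipy import stats

# Hypothetical survey scores keyed by each respondent's unique identifier.
pre = pd.DataFrame({"id": ["a1", "a2", "a3", "a4", "a5"],
                    "score": [3.0, 2.5, 4.0, 3.5, 2.0]})
post = pd.DataFrame({"id": ["a3", "a1", "a5", "a2", "a4"],
                     "score": [4.2, 3.4, 2.6, 3.0, 3.8]})

# Link the two measurement periods on the identifier so each respondent's
# responses are paired; anyone missing either survey would drop out here.
paired = pre.merge(post, on="id", suffixes=("_pre", "_post"))
t_stat, p_value = stats.ttest_rel(paired["score_post"], paired["score_pre"])
```

Merging on the identifier, rather than assuming the two files are in the same order, is what makes the test genuinely paired.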

Focus groups were conducted at each of the participating practices to solicit feedback on barriers to cancer screening, experiences working with PFs in the intervention, and suggestions for intervention improvement; key informant interviews were conducted when focus groups could not be convened because of practice staff constraints. The participants targeted for inclusion were those identified by the PFs at each practice site as having been most directly involved in the implementation of the project, and they included both clinical providers and administrative staff. The focus groups were hosted at the practice offices, whereas interviews were conducted via a telephone conference call; all focus groups and interviews were conducted by a member of the project leadership team trained in qualitative interviewing techniques. PFs were excluded from any focus group and interview activities pertaining to their assigned practices to reduce bias in participant responses.

All focus groups and interviews were audio-recorded and transcribed verbatim for analysis; no names or other personally identifiable information were recorded in the transcripts. Two members of the project team jointly conducted a thematic content analysis of the transcripts, developing a set of codes based on a list of areas of interest determined a priori: barriers to increasing cancer screening, factors important to the working relationship with the PF, factors important for sustainable change, and feedback on project processes. Discrepancies between the coding schemes were resolved verbally, and the finalized themes and concepts were reviewed by the larger team.

Practice Readiness for Transformation. The TRANSLATE model was used to assess the intervention's impact on key elements of practice transformation. The TRANSLATE model is an assessment tool that measures elements of practice improvement and has been used by researchers conducting similar interventions focused on diabetes care and chronic kidney disease management.[26,27] TRANSLATE stands for "set your Target, use Registry and Reminder systems, get Administrative buy-in, Network information systems, Site coordination, Local Physician Champion, Audit and feedback, Team approach, and Education."[27] Each practice was rated twice by their assigned PF for each of these 9 elements—once after the initial visit and once at the conclusion of project activities. Each TRANSLATE category was rated on a scale of 1 to 4, with 1 signifying no accomplishment and 4 signifying the highest accomplishment.

PFs received training on the use of the TRANSLATE model before engaging with their practices, including an overview of definitions and a review of practice characteristics meriting each score level. The TRANSLATE assessments were used by PFs as a guide for the work completed with each practice and as a measurement tool for systems-level change within each practice at the conclusion of the project. Practice-level changes in TRANSLATE element scores before and after the intervention, and the influence of PF practice groupings, were evaluated through a 1-way mixed ANOVA. Spearman correlations were calculated to assess the relationship between postintervention cancer screening rates and postintervention TRANSLATE element scores.
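Because the TRANSLATE elements are ordinal (scored 1 to 4), a rank-based measure such as the Spearman correlation is appropriate. A minimal sketch using SciPy and hypothetical postintervention values:

```python
from scipy import stats

# Hypothetical postintervention values, one pair per practice (n = 6).
screening_rate = [0.48, 0.60, 0.45, 0.63, 0.52, 0.58]
translate_score = [2, 4, 2, 4, 3, 3]  # one TRANSLATE element, rated 1-4

# Spearman's rho correlates the ranks of the two variables, which tolerates
# the ordinal 1-4 scale and any monotonic (not necessarily linear) relation.
rho, p_value = stats.spearmanr(screening_rate, translate_score)
```

`spearmanr` handles tied TRANSLATE scores by assigning average ranks, so repeated 1–4 ratings across practices pose no problem.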

Human Subjects Protection

The Institutional Review Board of State University of New York Upstate Medical University determined that this QI project did not meet the definition of human subjects research. All individuals and practices participating in the project were provided information regarding the voluntary nature of participation, and no individually identifiable information was collected.
