Identifying High-Impact and Managing Low-Impact Assessment Practices

Kristin K. Janke, PhD; Katherine A. Kelley, PhD; Beth A. Martin, PhD; Mary E. Ray, PharmD; Burgunda V. Sweet, PharmD


Am J Pharm Educ. 2019;83(7) 

Abstract and Introduction


Those in pharmacy education who are tasked with assessment may be overwhelmed by deadlines, data collection, and reporting, leaving little time to pause and examine the effectiveness of their efforts. However, assessment practices must be evaluated for their impact, including their ability to answer important questions, use resources effectively, and contribute to meaningful educational change. Often an assessment is implemented, but attention is diverted to the next assessment before the data from the first can be fully interpreted or used. To maximize the impact of assessment practices, tough and uncomfortable decisions may need to be made. In this paper, we suggest an approach for examining and making decisions about assessment activities and provide guidance on building high-impact assessment practices, evolving or "sunsetting" low-impact assessment practices, and managing mandated assessment.


Since the 2009 American Association of Colleges of Pharmacy Curricular Summit, curricular transformation has been openly encouraged and pursued among colleges of pharmacy in the United States.[1] The release of the 2013 Center for Advancement of Pharmacy Education Outcomes[2] and the Accreditation Council for Pharmacy Education's Standards 2016[3] added to the fervor for curricular redesign and evolution. Amid this push for change, decisions should be based on evidence, and transformation in education should be guided by assessment.

This work is not easy. While resources have been developed to support formative assessment strategies[4] and guide the development of assessment leads,[5] colleges and schools may nonetheless struggle to develop their assessment operations. Ever-increasing mandates for evidence of program effectiveness have driven demands for more metrics and benchmarks. In addition to pharmacy accreditation requirements, universities have additional reporting obligations. Collectively, there is a seemingly endless list of required data that sometimes makes the work of assessment feel like little more than checking off boxes.

Pharmacy faculty members, committees, and administrators should integrate assessments responsibly, use continuous quality improvement processes, and work to establish a culture of assessment within pharmacy schools.[6] Yet, this is easier said than done. Intellectually, we know that gathering and managing reams of data does not create the culture we seek. And though technology may seem like an answer to managing the workload, it may simply make it easier to ask faculty members to supply ever-increasing levels of minutiae. As assessment enterprises grow and mature, there may be the appearance of progress because people are busy, analyses are conducted, and reports are produced. However, despite the flurry of activity, is it possible we are still not answering the important questions that help us drive evidence-based improvement and change? Has assessment practice become, as Gilbert suggests, "similar to surgeons patting themselves on the back for taking out tumors without checking to see if their interventions are affecting mortality rates"?[7]

As organizations, pharmacy schools are susceptible to getting caught up in the assessment routine, churning through endless cycles of data collection and reporting. Thoughtful decisions must be made to foster sustainable and impactful assessment and to aid the movement toward a culture of assessment. The academy needs well-intentioned and well-designed assessments, with attention given to implementation, fidelity, and quality improvement. We should be asking how we can refine and improve assessment processes. To aid in asking the important questions that ultimately drive positive educational change, we outline here an approach for examining and making decisions about assessment activities. Given a recent commentary in higher education assessment that calls for clarity in the language of assessment,[8] we also provide working definitions for the concepts introduced.