What Are We Measuring?

Evaluating Physician-Specific Satisfaction Scores Between Emergency Departments

Brian Sharp, MD; Jordan Johnson, MD; Azita G. Hamedani, MD, MPH, MBA; Emilia B. Hakes, MD; Brian W. Patterson, MD, MPH


Western J Emerg Med. 2019;20(3):454-459. 

Abstract and Introduction


Introduction: Most emergency departments (EDs) use patient experience surveys (e.g., Press Ganey) that include specific physician assessment fields. Our ED group currently staffs two EDs – one at a large, tertiary-care hospital and the other at a small, affiliated community site – both staffed by the same physicians. The goals of this study were to determine whether Press Ganey ED satisfaction scores for emergency physicians working at two different sites were consistent between sites, and to identify factors contributing to any variation.

Methods: We conducted a retrospective study of patients seen at either ED between September 2015 and March 2016 who returned a Press Ganey satisfaction survey. We compiled a database linking the patient visit with his or her responses on a 1–5 scale to questions that included "overall rating of emergency room care" and five physician-specific questions. Operational metrics including time to room, time to physician, overall length of stay, labs received, prescriptions received, demographic data, and the attending physician were also linked. We averaged scores for physicians staffing both EDs and compared them between sites using t-tests. Multiple logistic regression was used to determine the impact of visit-specific metrics on survey scores.
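The two analyses described above – a two-sample t-test comparing scores between sites and a logistic regression of satisfaction on visit metrics – can be sketched as follows. This is an illustrative sketch only: the data are randomly generated, and the variable names, sample sizes, score distributions, and the "top-box" (score of 5) outcome definition are assumptions, not the authors' actual dataset or model specification.

```python
# Illustrative sketch of the abstract's analysis plan on HYPOTHETICAL data:
# (1) Welch's two-sample t-test comparing satisfaction scores between sites;
# (2) a minimal logistic regression (gradient descent, stdlib only) of a
#     "top-box" outcome on a visit-level wait-time metric.
import math
import random
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of freedom."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

def fit_logistic(X, y, lr=0.1, steps=3000):
    """Plain gradient-descent logistic regression; returns (weights, intercept)."""
    w, b, n = [0.0] * len(X[0]), 0.0, len(X)
    for _ in range(steps):
        gw, gb = [0.0] * len(w), 0.0
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            for j, xj in enumerate(xi):
                gw[j] += (p - yi) * xj
            gb += p - yi
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

random.seed(0)
# Hypothetical per-visit scores (1-5); tertiary site skewed slightly lower.
site1 = [random.choice([3, 4, 4, 5, 5]) for _ in range(457)]  # tertiary ED
site2 = [random.choice([4, 4, 5, 5, 5]) for _ in range(555)]  # community ED
t, df = welch_t(site1, site2)

# Hypothetical wait times (minutes); longer waits lower the top-box probability.
wait_min = [random.uniform(5, 120) for _ in range(300)]
top_box = [1 if random.random() < 1.0 / (1.0 + math.exp(0.03 * (w - 45))) else 0
           for w in wait_min]
# Standardize the predictor so gradient descent behaves well.
weights, intercept = fit_logistic([[(w - 60.0) / 60.0] for w in wait_min], top_box)
print(f"Welch t = {t:.2f} (df = {df:.0f}); wait-time coefficient = {weights[0]:.2f}")
```

In this synthetic setup the t statistic comes out negative (site 1 scores lower than site 2) and the wait-time coefficient comes out negative (longer waits reduce the odds of a top score), mirroring the direction of the study's findings. A real analysis would use established implementations (e.g., SciPy's `ttest_ind` and a statsmodels `Logit` fit) rather than this hand-rolled version.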

Results: A total of 1,012 ED patients met the inclusion criteria (site 1=457; site 2=555). The overall rating-of-care metric was significantly lower at the tertiary-care hospital ED than at our lower-volume ED (4.30 vs 4.65). The same trend was observed when the five physician-specific metrics were summed (22.06 vs 23.32). Factors that correlated with higher scores included shorter arrival-to-first-attending time (p=0.013) and shorter arrival-to-ED-departure time (p=0.038); both intervals were longer at the tertiary-care hospital ED.

Conclusion: Press Ganey satisfaction scores for the same group of emergency physicians varied significantly between sites. This suggests that these scores reflect site-specific factors, such as wait times, more than they reflect the quality of care provided by the individual physician.


Under the Affordable Care Act, increasing emphasis has been placed on the delivery of healthcare that is both patient-centered and high quality, with the aim of incentivizing better value and outcomes.[1,2] While an improved patient experience likely contributes to improved quality of care and outcomes, this facet of quality is difficult to measure.[3,4] Currently, measurement typically involves patient survey scores assessing both the overall experience and specific aspects of the emergency department (ED) visit, including a physician-specific section. Increasingly, payers are using these scores to modify provider reimbursement.[5]

Numerous ED-based studies have identified the many factors that influence patients' satisfaction with their visits. While good communication and the attitude and interpersonal skills of ED staff are associated with higher patient satisfaction scores, factors such as wait time, patient demographics and acuity, and crowding also influence scores.[6–20] Some studies have even suggested that higher patient satisfaction scores are associated with more drug prescriptions and advanced imaging.[3,4,21]

Regarding physician-specific metrics, Bendesky et al. in 2016 showed that patient satisfaction scores differed for emergency physicians (EP) based on the setting in which they were practicing. Specifically, satisfaction scores were consistently lower in an ED setting than in an urgent care setting. This finding suggests that even metrics that attempt to narrowly assess the patient-provider relationship are subject to external factors.[22] Given that patients view urgent care centers favorably in terms of quality and value, further study is needed to control for site-specific effects on patient satisfaction.[23]

In August 2015, our health system opened a second ED at a university-affiliated site that is staffed by the same emergency medicine faculty group. There are some operational differences between the sites, including consultant availability and the level of involvement of residents and advanced practice providers (APPs) in care. However, the ancillary services offered are largely identical, including radiology studies (radiograph, computed tomography, ultrasound, magnetic resonance imaging) and laboratory services. This presents an ideal scenario for comparing physician-specific Press Ganey ratings. Our objective was to evaluate the consistency of physician-specific patient satisfaction scores between the two sites.