A randomized trial of peer comparisons to improve guideline-based clinical practice in primary care

Monday, June 24, 2019: 4:15 PM
Wilson A - Mezzanine Level (Marriott Wardman Park Hotel)

Presenter: Amol Navathe

Co-Authors: Ezekiel Emanuel; Kristin Linn; Kevin Volpp

Discussant: Daniella Meeker


Because of lackluster results from physician incentive programs, there has been a surge of interest in using behavioral science to guide the design of financial and non-financial interventions. One promising strategy is providing physicians with feedback on their performance relative to that of their peers, using social comparisons to invoke the behavioral principle of relative social ranking. Peer comparisons have been tested in narrow settings such as guideline-based medication prescribing (antibiotics and opioids), with more recent applications alongside payment changes. However, no studies have evaluated the effectiveness of peer comparisons combined with broad interventions such as payment system changes.


We conducted a cluster randomized trial with Blue Cross Blue Shield of Hawaii to examine the impact of providing peer comparison feedback to its primary care practitioners (PCPs) on the quality of care. The study included patients attributed to one of 86 PCPs, who were randomly assigned to either an intervention group receiving peer comparisons plus individual feedback or a control group receiving individual feedback alone. Feedback was provided on quality, cost, and utilization performance. All PCPs were simultaneously moved to a new population-based primary care payment system. The primary outcome was the probability of achieving guideline-based thresholds on thirteen primary care-focused quality metrics, including guideline-concordant cancer screening, prescribing for chronic conditions, and other preventive and chronic disease measures. We analyzed the primary outcome using a generalized linear model, adjusting for patient characteristics, PCP characteristics, the baseline proportion of measures achieved by the patient, and quality measure fixed effects, with standard errors clustered at the PCP level.


The RCT included 73,569 patients randomized across 86 physicians. Patient characteristics were well balanced across groups, with only small differences in demographics and risk score; PCP characteristics were similarly balanced, with small differences in specialty and panel size. In the primary analysis, patients in the peer comparisons intervention group had an absolute 2.4% higher probability of achieving an eligible quality measure (95% CI 0.3% to 4.6%, p=0.03). Secondary analysis of individual measures indicated that Breast Cancer Screening (+3.9%, 95% CI 0.2% to 6.0%, p<0.001), Cervical Cancer Screening (+2.4%, 95% CI 0.01% to 4.8%, p=0.05), Colorectal Cancer Screening (+2.8%, 95% CI 0.3% to 5.2%, p=0.03), Diabetes Care – Eye Exam (+5.6%, 95% CI 1.6% to 9.5%, p=0.006), Diabetes Care – Kidney Screening (+2.5%, 95% CI 0.4% to 4.6%, p=0.02), and Review of Chronic Conditions (+4.9%, 95% CI 0.0% to 9.8%, p=0.05) likely accounted for the increase in the overall composite score. Many other measures showed trends toward differential improvement, but the associations were not statistically significant. Cost and utilization did not differ between arms.


A peer comparisons intervention that displayed quality information in a real-time dashboard, delivered in the setting of a broad payment system change, improved quality scores by more than 2 percentage points. This highlights the ability of peer comparisons to influence clinician practice across broad endpoints and is reassuring in light of new Medicare payment programs that have begun sharing comparative feedback.