Luck of the Draw: the Role of Chance in the Assignment of Medicare Readmission Penalties

Monday, June 11, 2018: 4:10 PM
Basswood - Garden Level (Emory Conference Center Hotel)

Presenter: Andrew Wilcock

Co-Authors: Jose Escarce; Peter Huckfeldt; Neeraj Sood; Ioana Popescu; Teryl Nuckols

Discussant: Souvik Banerjee


Unplanned hospital readmissions in the Medicare fee-for-service program are expensive and common. To incentivize higher-quality care and lower the financial burden of hospitalizations, the Medicare Hospital Readmissions Reduction Program (HRRP) assigns a penalty to hospitals with “excess” readmissions for any of the program conditions over a prior three-year period. The penalty is generated in two steps: first, Medicare estimates condition-specific excess readmission ratios (ERRs) for all eligible hospitals; second, Medicare calculates the penalty as the proportion of the hospital’s total HRRP diagnosis-related group (DRG) payments attributable to “excessive” readmissions (i.e., those above the national average). Since the beginning of the program, there have been concerns that the penalty may not accurately distinguish high- from low-performing hospitals. And because so much of the variation in observed hospital readmission rates is unexplained by the case-mix adjustment used in the ERR calculations, the final penalties depend heavily on the particular patients a hospital treats over an evaluation period, a factor largely outside its control.
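The two-step calculation above can be sketched as follows. This is an illustrative simplification with hypothetical numbers: the actual CMS methodology estimates ERRs with hierarchical risk-adjustment models and applies additional rules (e.g., a cap on the payment reduction), none of which are reproduced here.

```python
def penalty_fraction(errs, drg_payments, total_drg_payments):
    """Step 2 of the penalty calculation: the share of a hospital's total base
    DRG payments attributable to excess readmissions. Step 1 supplies the
    condition-specific ERRs (risk-adjusted predicted / expected readmissions);
    only conditions with ERR > 1, i.e., above the national average, contribute."""
    excess = sum(
        drg_payments[cond] * (errs[cond] - 1.0)
        for cond in errs
        if errs[cond] > 1.0
    )
    return excess / total_drg_payments

# Hypothetical hospital: ERRs and DRG payments for three FY 2015 conditions
errs = {"AMI": 1.05, "HF": 0.98, "PN": 1.10}
payments = {"AMI": 2_000_000, "HF": 3_000_000, "PN": 1_500_000}
frac = penalty_fraction(errs, payments, total_drg_payments=40_000_000)  # 0.00625
```

Note that conditions with ERRs below 1 (here, heart failure) do not offset excess readmissions elsewhere; only above-average conditions enter the sum.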

This paper quantifies the role of chance (i.e., random patient draws) in the assignment of final HRRP hospital penalties. Our strategy was to compare the actual HRRP penalties to a set of counterfactual penalties based on plausibly different sets of patients, randomly sampled from hospitals’ own HRRP discharges over the same evaluation period. Using CMS MedPAR inpatient claims from 2010 through 2013, we created an analytical dataset of all discharges for the five conditions the HRRP evaluated in FY 2015. Adhering to program methodology, we simulated 1,000 counterfactual penalties for each HRRP hospital in our study sample by repeating a three-step routine (sample, model, calculate) 1,000 times. In step one, we randomly sampled, with replacement, an alternative combination of condition-specific discharges at each hospital, drawing each sample from the hospital’s actual set of condition-specific discharges from the 2015 evaluation period. In step two, we modeled counterfactual predicted and expected readmission rates for each hospital-condition pair. In the third and final step, we calculated counterfactual ERRs for each hospital-condition pair, and then calculated each hospital’s counterfactual penalty by combining the counterfactual ERRs with the hospital’s actual DRG weights from the 2015 evaluation period.
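The sample-model-calculate routine above can be sketched for a single hospital as below. All names are illustrative, and the modeling step is deliberately simplified: the real analysis refits the full CMS risk-adjustment models on each resampled draw, whereas this sketch stands in a resampled readmission rate against a fixed case-mix-expected rate.

```python
import random

def counterfactual_penalties(discharges, expected_rate, drg_weights,
                             n_reps=1_000, seed=0):
    """Bootstrap one hospital's counterfactual penalties.
    discharges:    {condition: list of 0/1 readmission flags, one per discharge}
    expected_rate: {condition: case-mix-expected readmission rate (stand-in for
                    the CMS risk-adjustment model)}
    drg_weights:   {condition: the hospital's actual DRG payment weights}
    """
    rng = random.Random(seed)
    penalties = []
    for _ in range(n_reps):
        penalty = 0.0
        for cond, flags in discharges.items():
            # Step 1 (sample): redraw this condition's discharges with replacement
            sample = [rng.choice(flags) for _ in flags]
            # Step 2 (model): counterfactual predicted rate from the resampled draw
            predicted = sum(sample) / len(sample)
            # Step 3 (calculate): counterfactual ERR, combined with actual weights;
            # only above-average (ERR > 1) conditions contribute to the penalty
            err = predicted / expected_rate[cond]
            if err > 1.0:
                penalty += drg_weights[cond] * (err - 1.0)
        penalties.append(penalty)
    return penalties

# Hypothetical hospital with one HRRP condition: 40 readmissions in 200 stays
pens = counterfactual_penalties({"HF": [1] * 40 + [0] * 160},
                                {"HF": 0.20}, {"HF": 0.10}, n_reps=1_000)
```

The spread of `pens` around the hospital's actual penalty is what quantifies the role of chance: the wider the spread, the more the assigned penalty reflects the particular patient draw rather than underlying performance.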

For our analysis, we divided hospitals into “profile” groups based on the size of their actual penalty in 2015 and tabulated how often the counterfactual penalty profiles matched the actual profile and how often they differed. Overall, we found that more often than not (53% of the time), a hospital would have received a different penalty profile than the one it actually got with a different patient draw. Correspondence varied by the actual 2015 penalty profile: the best- and worst-performing hospitals were more likely to receive the same profile across random patient draws (77% and 68%, respectively), while hospitals in the middle were likely to receive different profiles (64% of the time).
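The tabulation above can be sketched as follows. The profile cutpoints here are hypothetical placeholders, not the groupings used in the study.

```python
def profile(penalty, cutpoints=(0.0, 0.005, 0.01)):
    """Assign a penalty to a profile group (hypothetical bins):
    0 = no penalty, with higher groups indicating larger penalties."""
    return sum(penalty > c for c in cutpoints)

def pct_same_profile(actual_penalty, counterfactual_penalties):
    """Percent of counterfactual draws that land in the hospital's actual
    profile group; 100 minus this is the percent that differ."""
    actual = profile(actual_penalty)
    matches = sum(profile(p) == actual for p in counterfactual_penalties)
    return 100.0 * matches / len(counterfactual_penalties)
```

Averaging this percentage across hospitals, within each actual-profile group, yields correspondence figures of the kind reported above.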