This article in the Boston Globe lays out the concern emerging from the hospital community over the "safety" data recently released publicly by CMS (http://www.hospitalcompare.hhs.gov/), which is also planned for inclusion in the "value based purchasing" calculations that will penalize hospitals with low "quality scores".
Even those who fared well in the rankings note that this metric was not intended for the purpose for which it is being used. One can certainly glean some insights from the AHRQ indicators used to calculate the patient safety scores. Having reviewed the data for a particular healthcare institution, I did find it helpful and a reasonably accurate reflection of our patient care. However, because the billing data from which these measures are derived are not generally constructed with the delivery of quality clinical care in mind, the data cannot be presented as a highly reliable picture of the quality of care being delivered at an institution.
This isn't a question of using data, or comparing hospitals, or posting the data publicly - although each of these initiatives may independently inspire criticism as well. This isn't even about those institutions not faring well in a head-to-head competition crying foul. The central issue remains that one can't take major shortcuts in data gathering if one is trying to properly incentivize and motivate the system to improve. This doesn't work in an individual medical center or clinic - as one of the first tenets of performance improvement is to ensure that the data we share with our physicians and other clinicians is meaningful and reliable - nor does it work for the entire healthcare system.
The intention may be proper, but displaying such data publicly, expecting consumers to make healthcare decisions based upon it, and then penalizing hospitals for not performing better on the same scale risks coming across as a desperate maneuver to reduce costs in the guise of quality.