Figures often beguile me, particularly when I have the arranging of them myself; in which case the remark attributed to Disraeli would often apply with justice and force: “There are three kinds of lies: lies, damned lies and statistics.”
– Mark Twain’s Own Autobiography: The Chapters from the North American Review
In an effort to win your business, some consultants perform and present a detailed analysis of how much your reimbursement will likely increase if you use their services. This analysis is typically built from your historical billing data: it measures the percentage of DRGs in various groupings that carry a complication/comorbidity (“CC”) and the percentage that carry a major complication/comorbidity (“MCC”). These percentages are referred to as your CC and MCC “capture rates”. The consultant then compares your capture rates to the 80th percentile of a purported peer group, and the potential increase in reimbursement is calculated based on what might be realized if you were able to reach that 80th-percentile target.
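The arithmetic behind such a pitch can be sketched in a few lines. Everything here is hypothetical for illustration: the case counts, the 80th-percentile benchmark, and the per-case payment difference are invented numbers, not data from any real peer group.

```python
# Hypothetical sketch of the capture-rate math described above.
# All numbers are invented for illustration; a real analysis would
# use your hospital's billing data and actual DRG payment figures.

def capture_rates(mcc_cases, cc_cases, base_cases):
    """Share of cases in a DRG grouping coded with a CC or an MCC."""
    total = mcc_cases + cc_cases + base_cases
    return cc_cases / total, mcc_cases / total

# Example DRG grouping: 100 total cases
cc_rate, mcc_rate = capture_rates(mcc_cases=20, cc_cases=30, base_cases=50)

# "Opportunity" = cases that would shift if you reached an assumed
# 80th-percentile benchmark, times an assumed payment gap per case.
benchmark_mcc_rate = 0.30        # assumed peer 80th percentile
payment_gap_per_case = 4000.00   # assumed MCC-vs-base payment difference
total_cases = 100

shifted_cases = max(0.0, (benchmark_mcc_rate - mcc_rate) * total_cases)
projected_increase = shifted_cases * payment_gap_per_case

print(f"MCC capture rate: {mcc_rate:.0%}")          # → 20%
print(f"Projected increase: ${projected_increase:,.0f}")  # → $40,000
```

Note what the sketch makes plain: the “projected increase” depends entirely on the assumed benchmark and payment gap, which is exactly why the choice of peer group matters so much.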
This analysis is based on the assumption that a proper peer group has been selected for the data comparison. But is your hospital’s data actually comparable to the chosen peer group, and more importantly, what criteria were used to select that group?
Here are just a few of the criteria that might influence the accuracy of such a comparison; some are obvious, some less so, but each reflects a potential pitfall in applying anyone else’s data to your hospital’s expected performance:
And, not to be forgotten, there is the overall problem of trying to compare across years when the DRG weights and multipliers vary year to year. This also limits your ability to analyze your own data and make year-over-year comparisons.
So the approach of using CC and MCC capture rates to predict how adding a CDI program will affect your hospital’s reimbursement is fraught with uncertainty. It tells you the potential opportunity, but it is not an accurate prediction of your actual expected results!
A far better metric to analyze is actual performance. What has been the experience of this consultant, this product, or this approach at previous customer sites? Did those customers already have a CDI program, or was that the initial implementation of one?
Don’t rely on guesses. Get data on actual experience and real performance.
If you’d like more information, want to schedule a one-on-one demonstration, or just want to let us know what you think, please fill out the form below and we’ll contact you as soon as possible.