Data Aggregation - A Primer

Friday, February 12, 2016


Clayton Gillett
VP of Data Services and Integration
OCHIN


Healthcare is in the midst of a profound business model change. The old model of “fee-for-service” medicine is ending, and a new model is quickly emerging. On December 12, 2013, the Dartmouth Institute for Health Policy and Clinical Practice published the first real evidence that managing the patient across the continuum of care can bend the cost curve of our ever-aging population. The shared savings model that has since developed is designed to deliver seamless, high-quality care for patients, replacing the fragmented care we see under fee-for-service payment, where different providers receive different, disconnected payments. The new model maintains a patient-centered focus by developing processes to promote evidence-based medicine and patient engagement and to report on quality. To deliver on that promise, health systems need greatly enhanced data aggregation tools; however, these tools are difficult to evaluate, and many organizations do not understand how system needs translate into data requirements. This article presents some of those issues in context.


To qualify for the shared savings model, a coordinated care organization (CCO) in Oregon (or an accountable care organization (ACO) elsewhere) has to prove that it has documented plans to promote evidence-based care and patient engagement, report on quality and cost metrics, and provide coordinated care. This is a lofty goal even in a closed health care system where inpatient, outpatient, home health, and mental health services are controlled by the same infrastructure; even in these closed systems, it is technically challenging to integrate data across systems. Managing a population over diverse EHRs, claims, and other systems is neither a simple nor an inexpensive task. Many assume that data can be shared across systems with a fluidity that has only been demonstrated successfully in a few places. In fact, the HIMSS EMR Adoption Model defines Stage 7, the highest stage, by “…the implementation of an advanced data warehouse with continuity between inpatient and outpatient data within a single system.” This model has been useful in describing how effective an EHR implementation really is, and HIMSS Analytics reports that only 7.8% of EHR adoptions had achieved this level of integration as of Q3 2015. Integrating off-the-shelf products, enhancing current EHR products to meet the technical requirements, and then effectively integrating those products into delivery system workflows is the paramount initiative in most health care systems today.

 

Without advanced data aggregation tools, provider groups and health care systems cannot report when a patient goes to a hospital outside their system. It is true that the visit will eventually appear in the claims data reported back by the insurance company or third-party administrator, but that is long after the data is clinically relevant for any intervention. The detail and breadth of the data also matter, as it is becoming clear that data not traditionally classified as health care data is a good predictor of future costs. This raises two critical points about data aggregation systems: first, the data needs to come from the greatest breadth of sites possible; and second, the data needs to be timely enough to support intervening with patients in a clinical setting. Both factors are critical to applying the data in real-world situations.

Data aggregation is used for more than just real-time clinical intervention. If the purpose of a data aggregation system is to report yearly performance data to the federal government, then the 90-day lag inherent in claims data has no negative impact. However, if you want to improve quality on the meaningful use cervical cancer measure using a PDSA (“Plan, Do, Study, Act”) improvement methodology, a 90-day data lag will not work: that methodology requires week-to-week (or, ideally, day-to-day) data collection.
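The rule of thumb in this paragraph can be made concrete: a data source is only useful for a given improvement cycle if its refresh lag is shorter than the cycle itself. A minimal sketch in Python follows; the source names and lag values are hypothetical examples (only the 90-day claims lag comes from the article), not a reference to any real system.

```python
from datetime import timedelta

# Hypothetical data sources and their assumed refresh lags.
# Only the 90-day claims lag is taken from the article; the rest
# are illustrative assumptions.
SOURCE_LAG = {
    "claims_feed": timedelta(days=90),     # claims lag cited in the article
    "ehr_extract": timedelta(days=1),      # assumed nightly clinical extract
    "hl7_adt_feed": timedelta(minutes=5),  # assumed near-real-time ADT messages
}

def supports_cadence(source: str, cycle_length: timedelta) -> bool:
    """A source can drive an improvement cycle (e.g., PDSA) only if
    its data lag is shorter than the cycle length."""
    return SOURCE_LAG[source] < cycle_length

# Yearly federal reporting tolerates a 90-day claims lag...
print(supports_cadence("claims_feed", timedelta(days=365)))  # True
# ...but a weekly PDSA cycle does not.
print(supports_cadence("claims_feed", timedelta(weeks=1)))   # False
# A nightly EHR extract, by contrast, can support a weekly cycle.
print(supports_cadence("ehr_extract", timedelta(weeks=1)))   # True
```

The same check applies when evaluating a vendor: ask for the actual end-to-end lag of each feed, not just whether the source is "supported."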

 

If you want to intervene with a patient immediately after a mental health encounter, or after a change in homeless status, you need a way to get that key data into the system, as well as a mechanism to alert the appropriate providers. If you want to customize care to the needs and circumstances of a specific patient, you need to understand a number of data points: what services are available in his or her community, information about the patient's general social situation, or even how well a student is doing in school. This data is not classically seen as health care data; even so, some of it has been shown to be a good predictor of future health care costs. And in the shared savings model, predicting future health care costs is the holy grail of improved care and cost savings.

 

Given the multitude of uses and the complexity of the data, it may be useful to borrow from what we learned in the EHR (electronic health record) market. HIMSS Analytics put forward a seven-stage model for analyzing the complete implementation of all components of an EHR (HIMSS Analytics, 2016). In the advanced stages, the model focuses more on how the EHR system is implemented and integrated than on the functionality incorporated in the system, which suggests that the key to high-quality care lies in having the broadest possible data set. Currently, HIMSS Analytics suggests that only 60% of all EHR implementations have reached Stage 5 of this model, even after nearly $40 billion in incentives has been invested. A similar model may be useful for understanding data aggregation systems. Below is a model for evaluating the capability of a data aggregation system that focuses on three critical elements: the breadth of the data sources, the timeliness of the data, and the tools the aggregation system provides for analysis. Each of these elements contributes to how well the data aggregation system meets its purposes.
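The three elements above can be thought of as a simple weighted rubric, with the weights chosen to match the purpose the system must serve. The sketch below is purely illustrative and is not the article's model: the system names, 0-5 scales, scores, and weights are all hypothetical assumptions.

```python
from dataclasses import dataclass

# Illustrative scoring rubric for the three elements named in the text.
# All names, scales, and weights here are hypothetical.

@dataclass
class AggregationSystem:
    name: str
    breadth: int     # 0-5: range of source types integrated (EHR, claims, social, ...)
    timeliness: int  # 0-5: 0 = quarterly batch, 5 = near real time
    tools: int       # 0-5: analysis and reporting capability

def score(system: AggregationSystem, weights=(1.0, 1.0, 1.0)) -> float:
    """Weighted sum over the three elements; weights reflect the purpose."""
    wb, wt, wa = weights
    return wb * system.breadth + wt * system.timeliness + wa * system.tools

# Two hypothetical candidates with complementary strengths, mirroring the
# "good at claims OR clinical, rarely both" pattern described below.
candidates = [
    AggregationSystem("ClaimsWarehouseX", breadth=2, timeliness=1, tools=4),
    AggregationSystem("ClinicalHubY", breadth=4, timeliness=4, tools=2),
]

# Weight timeliness heavily if the purpose is real-time clinical intervention.
best = max(candidates, key=lambda s: score(s, weights=(1.0, 2.0, 1.0)))
print(best.name)
```

The useful part is not the arithmetic but the discipline: stating, before vendor demos, which element your purpose actually depends on.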

 

Data aggregation systems rarely perform well across all of these aspects, but they often perform well in one. The most common example is a system that manages clinical information or claims data well on its own but does not integrate the two sets of data in a cohesive manner.

 

These are complex systems that often require more integration work than your EHR did, and implementing a mature system generally takes months, not weeks. An initial implementation may provide some functionality; however, complete implementation may continue for a year or more if you do not already have mechanisms in place to consolidate data.

 

Selecting a data aggregation system is a complex task that should be approached carefully, using an RFP and/or outside consulting support. Make sure the RFP includes the implementation milestones required to meet your functionality requirements. Ask your vendor for references from organizations similar to yours, so you can understand how they are using the system, and for references from organizations that have integrated with the same systems you have in place. Also remember that data aggregation is still a relatively immature market, which makes for a diverse vendor community and many systems that show promise in one area or another. As with EHRs 10-15 years ago, it is not clear who will be the market leader, and many aspects of this emerging market are as yet unknown. Many of us recognize names like IDX, MedicaLogic, LastWord, Misys, Eclipsys, and A4 as major players in the EHR market from 10 years ago, yet none of these exists as an independent organization today; they became part of what we now know as GE and Allscripts. That story is typical in our space, and we should expect the same product-cycle evolution in the data aggregation market. Your selection process should therefore consider where vendors are likely to be in the future and whether they are likely to be purchased by another vendor. In 2015 several of the market leaders changed ownership, and that trend is likely to continue.

 

In this environment, purchasing a data aggregation system can be risky, but it is required for many systems to compete in the evolving health care marketplace. First, make sure you understand what functionality you need now and what you will need in three years. The model above was designed to help you evaluate your complex data needs for functionality. Mapping the reasons you want a data aggregation system back to the data needs and the timeliness of the data helps you understand what can be accomplished, and on what timeline. Be sure to connect with a client with a similar business plan for the tool, and confirm that the application works as expected in that environment before evaluating the rest of the criteria.

 

A detailed plan for what you want from your data aggregation system is the first step in understanding the gap between where you are and where you need to be. Mapping how a data aggregation system serves the purpose you are trying to achieve will help you select the right system and understand the timeline for reaching your goals.

 

NWRPCA welcomes and regularly publishes white papers and articles submitted by members, partners and associates with subject matter expertise. The appearance of any guest publication in our Health Center News database represents the views of the author and does not constitute endorsement by NWRPCA of the stated opinions or perspectives, nor does it suggest endorsement of the contributor's products or services.
