The things we can do with health data are expanding and maturing toward a value-based, precision medicine future. That was apparent at Health DataPalooza in Washington, DC, an annual event that drew about 2,000 consumers, patient advocates, healthcare providers, insurers and health plans, researchers, innovators, policymakers, and many more industry leaders for three days of networking and learning.
My takeaways from the event, along with many other recent health data conversations:
- “Data liberación”, the battle cry originally thrust into the healthcare vernacular by former HHS CTO Todd Park at a previous Health DataPalooza, and the revolution it embodies, have come to mean more than just opening government data stores. It’s about freeing our data to enable precision medicine.
- With precision medicine, more attention is being paid to flows of data and to getting it where it’s needed. The converse is “information blocking”: data flow being stifled, intentionally or, many patient advocates argue, even unintentionally. A new JAMIA article finally takes a stab at defining interoperability, that is, what data flow should look like.
- The quality of data has suffered under the fee-for-service model, as information about clinical encounters is stuffed into a framework meant to capture billing codes. Good news: this is starting to change.
Data Liberación gets personal
A revolution has been brewing around patient access to data since HIMSS in April, when it was announced that Meaningful Use Stage 2 would no longer require that 5% of patients be able to view, download, and transmit their health data. Under the proposed change, only one patient in a year is required for eligible providers to qualify for meaningful use incentives. Many patient advocates saw this as a betrayal of core principles of patient access and have started the #NoMUWithoutMe campaign, with a national day of action planned for July 4. Check out http://getmyhealthdata.org/ for updates. I highly recommend ePatient Dave’s response and comments on the MU changes.
Code for America and the National Partnership for Women and Families announced, with others, an effort to track the issues people have when trying to get their health records. Karen DeSalvo talked about people owning their data. Data liberación is becoming a personal revolution.
But what’s needed to do that is, of course, systemic: it is at least partially defined by system architecture and by understanding what’s doing the blocking, intentionally or unintentionally.
Architecture, #infoblocking, and Flow
APIs were at the center of several conversations, on stage and off, and there was genuine excitement about FHIR and the Argonaut Project. The CEO of Box talked at length about cloud solutions as well as the need for open APIs.
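Part of the excitement around FHIR is how simple it makes data access: resources like Patient are served as plain JSON over a REST API. A minimal sketch of what a client sees (the resource content and server URL here are illustrative, not from the event):

```python
import json

# A minimal FHIR "Patient" resource, roughly what a FHIR server would return
# from GET https://example-fhir-server/Patient/123 (URL and values are
# made up for illustration).
patient_json = """
{
  "resourceType": "Patient",
  "id": "123",
  "name": [{"family": "Doe", "given": ["Jane"]}],
  "birthDate": "1970-01-01"
}
"""

patient = json.loads(patient_json)

# FHIR resources are self-describing: resourceType tells a client what it holds.
assert patient["resourceType"] == "Patient"

# "name" is a list of HumanName structures; take the first for display.
name = patient["name"][0]
display = f"{' '.join(name['given'])} {name['family']}"
print(display)
print(patient["birthDate"])
```

No vendor SDK, no proprietary export format: any HTTP client and a JSON parser is enough, which is exactly what open-API advocates at the event were pushing for.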
Information blocking has become a hot topic. The ONC says “information blocking occurs when persons or entities knowingly and unreasonably interfere with the exchange or use of electronic health information.” That may be a tough thing to prove. I suspect most info-blocking is intentional but looks a lot like neglect. The undercurrent among these conversations and the ONC’s information blocking report is that patient data and true interoperability are being held hostage by a few vendors.
For all the talk of interoperability in Congress, at the ONC, and among patient advocates calling for better access, it’s amazing that nobody tried to define it in terms of architecture and ability until just last week. Fortunately, Dean Sittig and Adam Wright have come up with a way to measure interoperability. They call the framework EXTREME, for EXtract, TRansmit, Exchange, Move, Embed. In a recent JAMIA article, they provide a list of use cases and functionality to define the openness and interoperability of EHRs. I like how they look at the perspective of each stakeholder, and I look forward to seeing various EHRs measured against these criteria. There’s been way too much marketing for way too long claiming interoperability when what’s really meant is intra-operability.
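The appeal of EXTREME is that it turns a marketing word into a checklist you can score an EHR against. A sketch of that idea (the one-line summaries are my paraphrase of the five categories, not the article’s exact use-case wording):

```python
# The five EXTREME categories, paraphrased from Sittig and Wright's framework;
# each would expand into concrete use cases in the JAMIA article.
EXTREME = {
    "EXtract":  "pull a complete copy of a patient's record out of the EHR",
    "TRansmit": "send records to another organization's (possibly different) EHR",
    "Exchange": "exchange data with other EHRs using standards",
    "Move":     "migrate the entire patient database to a replacement EHR",
    "Embed":    "embed third-party functionality inside the EHR",
}

def extreme_score(ehr_capabilities: set) -> float:
    """Fraction of EXTREME categories a given EHR actually supports."""
    return len(ehr_capabilities & EXTREME.keys()) / len(EXTREME)

# A hypothetical EHR that can extract and transmit records but nothing else
# scores 2 out of 5:
print(extreme_score({"EXtract", "TRansmit"}))
```

Measured this way, a vendor claiming “interoperability” while supporting only extraction within its own network would score low, which is the point of having defined criteria.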
Can legacy, fee-for-service systems provide the quality data to make it happen?
A broader question is, even if these systems somehow become truly interoperable, are they capturing the right data for value-based care?
Last month, I moderated a panel on “Health Data Innovation and Value-based Health” at the Colorado Digital Health Summit (#codigitalhealth) with leaders from Colorado HIEs, the All Payer Claims Database, and Lumiata’s CEO Ash Damle (who depends on accessing stores of health data to drive analytics), and Jon Hernandez, CEO of Integrated Health Hub who works with PeakMed Primary Care in Colorado Springs.
I asked the question: “Are yesterday’s systems going to need to be replaced? Are yesterday’s systems unable to provide what’s needed for value-based care?” There was a lot of nodding, including from Hernandez, who’s been building a new solution to provide the needed functionality for value-based care.
There’s a mounting consensus that systems built for fee-for-service, assembly-line care are not going to be able to adapt to a value-based world. The quality of data is at issue. Word from those on the front lines has been that only about 50% of data at the point of care is reliable, partly because encounters must be stuffed into a set of billing codes, and the data that matters for improving outcomes isn’t collected.
According to a follow-up conversation I had with Hernandez:
“As Total Cost of Care (TCOC) analysis is making its way into healthcare, along with impact analysis associated with measuring value, outcomes, and costs…(and) the data associated with outcomes and costs …can be quite complex and hard to access. The real question is ‘How much of this data can be provided by today’s Fee for Service (FFS)-driven EHR systems?’ In my opinion, EHR products are not ready to take on the value equation.”
Presumably, the Total Cost of Ownership (TCO) of the EHR enters into the TCOC equation. In many cases, TCO is driving systems toward insolvency.
In the future, much of what we call a “record” will look like a conversation, and record-keeping will fall into the background. On this subject, I had the opportunity to sit down with Andy Altorfer, CEO of CirrusMD, on the question of what a “health record” will look like. CirrusMD is currently integrated to provide data to a Colorado HIE that supports a closed loop for online patient-physician encounters.
According to Andy,
“Text messaging conversations, which represent more than 80% of our traffic, are self-documenting, so there’s no need for click-intensive documentation by the physician, aside from a brief Progress Note to summarize an encounter. The doctors we work with simply type a succinct description of the care provided and the recommended treatment plan. We are generally paid outside of reimbursement, so there’s no need for billing codes. …This will inevitably lead to better data quality…to better document what actually happens in an encounter without stuffing the interaction into billing codes. Furthermore, a text-first workflow allows a doctor to engage in a direct diagnostic process – ask any doctor how their friends and family reach out to them when sick, and the answer is always ‘they text me.’”
Data quality and ownership costs are big factors in the value-driven health care future that HHS wants to see, which is, in large part, on its way to implementation by 2018.
Will providers ultimately need to switch vendors and architectures for a value-based future?
That’s a question for you. I welcome comments on how Health IT architecture affects risk, costs, experience and outcomes, how fee-for-service affects data quality, and what we need to do to get health data more personal. These will be key topics in the months and years ahead.
Government health data stores have been a great start for more open health data, which is now more accessible than ever with the opening of CMS claims data. Now the health data community is turning its attention to the flow of data, so that data can be used to drive value-based care. A whole new world of health data is coming.