Who is accountable when a self-driving vehicle gets into an accident?
As automakers scramble to bring new and innovative autonomous vehicles and features to market, the legal world remains unprepared for robots to take over the roads. “Autonomous” isn’t even clearly defined for transportation technology and law enforcement purposes: there are already cars on the market that can self-park, engage emergency stopping, assist with lane changes, and more. Yet none of these vehicles comes without a steering wheel, or without the expectation of a human driver.
The notion that fully autonomous cars might not have any visible pedals, steering controls, or even a driver’s seat changes the assumption that, if all else fails, a human can still assume command – and accountability. But if robot cars still require drivers to be considered road-safe, well, what’s the point?
Healthcare, as yet, has not embraced self-driving ambulances, diagnostic chat bots, or robotic surgeons operating independently of human clinicians. Technology is certainly infiltrating the industry, but liability issues still circle back to the human professionals in charge of their respective realms. Even without the vexing question of robots and autonomous devices, though, liability issues are under strain from the ambiguity of how technology changes expectations—under the law, in the eyes of consumers, and among caregivers themselves.
Who’s driving this thing?
Like the automakers, healthcare professionals are encountering the unprecedented dangers that accompany progress and innovation.
Clinical organizations are still trying to consolidate and structure their data so that it can actually be put to use. Yet EHRs themselves present a bevy of risks, obscuring liability and opening providers up to potential legal challenges. This isn’t just about the complexities of HIPAA and privacy, or even the security of digital communications, records, and devices, though liability for digital security is, with good reason, front of mind for most healthcare organizations today.
The more people involved in creating, sharing, and maintaining the EHR, the more chances there are for problems and inaccuracies to be created, duplicated, and missed down the line across the continuum of care. This is especially pronounced during the protracted onboarding and implementation process, which is exactly where most providers find themselves today. The combination of experienced technology users and resistant (or simply incompetent) ones produces a hurricane of unforced errors that get virtually laminated into the permanent, distributed, digital record.
No stone left unturned
Already, nearly a third of medical malpractice lawsuits are for “not considering all medical information.” Thanks to EHRs, “all medical information” encompasses an exponentially growing body of data.
Despite protestations from caregivers and regulators, the future of healthcare data seems destined to take an “everything is relevant” approach, integrating the constant output of wearables, along with social data scraped from the Internet, the collective notes of providers across the continuum, and possibly even patient-composed additions (once we decide patients own their electronic records, and deserve at least partial editorial control of them). Rather than front-loading medical education with terminology and memorization, the entire system will pivot toward data management and analytics.
And humans are less well-suited to this task than machines.
When something like Watson is available, lending the advanced cognitive computing power of smart machines to the heaps of digital data, stakeholders can at least broadly agree that turning healthcare into another exercise in Big Data analytics will prove beneficial to public health. But just as with a health team of specialists armed with advanced resources, the possibility of legally consequential missed diagnoses and misdiagnoses increases. Left to his or her own devices, the average physician cannot possibly be expected to analyze, interpret, or even consider the full range of data available through all these digital avenues. Yet failure to do so could leave providers vulnerable to complaints or even litigation. Failure to be robotic, in the near future, may be a career misstep for caregivers.
Taking the robots to court
We can see that automation is impacting technical, rather than interpersonal, functions in the healthcare sector first. Medical labs and technicians are an obvious place to bring Big Data and analytics technologies onto the stage, and their track record is already largely positive. But there is still a long way to go from the human-robot hybrid workflows of modern labs, through hospitals bolstered by Watson-like machines, to fully automated and integrated health systems. In the meantime, human providers and digital technologies remain cooperative, defaulting to the human for decision-making and accountability.
Fee-for-service and defensive medicine combined to drive up prices and neglect (or even ignore) outcomes, compromising the medical profession and the doctor-patient relationship through an inflamed malpractice culture. Now, along with all manner of digitally augmented tests, diagnostic systems, and data-gathering opportunities, caregivers must avail themselves of incalculable troves of data.
Getting an analytical review of all available data for each patient, not just as an extension of monitoring and engagement but as a step in diagnosis and screening, may well end up being more of the same excess that got us where we are today. By using data analytics to study malpractice data itself, rather than in support of population health, providers may be better prepared to defend and protect themselves against malpractice suits without necessarily providing the best possible preventive or treatment services.
The more we give providers to do, the more we ask them to risk and expose themselves in a system where malpractice lawsuits are almost accepted as a given, rather than just a possibility.
Getting there from here
Automation, in the abstract, often gets conflated with business-world thinking: relative cost savings, productivity increases, efficiency, and precision. Industrial robots never tire and never risk injury to themselves or others; the fatigue of human factory workers that necessitated strict shift enforcement and OSHA regulations becomes irrelevant.
In healthcare, these kinds of automated marginal gains in efficiency and capacity meet the challenge of data at scale. Medicine has long been a matter not just of practice and skill, but of knowledge and facts. The more patients engage, share, and participate in their care, the better equipped providers are to actually deliver optimized care. When data acquisition, storage, and transmission can be automated or augmented by smarter computers and robots, one of healthcare’s biggest operational challenges can effectively be offloaded from people to machines.
Until data management and analysis can be fully automated and taken more or less out of the hands of individual caregivers, it has the potential to swell into a liability nightmare as well as a distraction from the core humanitarian mission of medicine. When we invoke the injunction attributed to Hippocrates, “First, do no harm,” we tend to think it applies only to patients; caregivers, in a litigious society, often end up harming themselves despite ability and intent. Currently, the only sure way to avoid malpractice threats is to cease practice.
The cumulative impact of technology is almost always taken as a positive, a net gain in potential and performance—the rising tide that raises all boats. But that is a retrospective phenomenon.
The reality is that the net benefits can only be felt if the cultural and systemic context in which the technology operates adjusts. Forcing an entire industry to adopt technology that compounds the real risks of medicine, increases exposure to malpractice, and blurs the lines of liability poses a threat to the perceived viability of careers in healthcare. In other words, no matter how great the technological advances may be in life-saving potential, the legal threat to individual providers may accelerate the ongoing trend of burnout and the abandonment of caregiving roles.
Consumers are often hesitant to embrace driverless cars out of fear and distrust of the technology. If we can’t strike a balance in healthcare, automation may end up being our only source of caregivers.
Edgar Wilson
- Can AI in health IT save lives, yet simultaneously ruin your career? - April 5, 2017