How Google Should Use Your Health Data

November 30, 2019 | Topic: Technology | Region: Americas | Blog Brand: The Buzz | Tags: Health Data, Privacy, Google, Google Health, Public Health, Big Data


The future is here.

When news broke recently that Google and Catholic health care provider Ascension were working together on data processing, it caused a stir. A whistleblower’s allegation of minor deviations from federal privacy regulations helped make the story newsworthy. But after the initial shock, calm is returning as various actors unpack the grab bag of risks and rewards that flow from processing health data.

We are a long way from having a consumer-centered, privacy-protective health care system, but there is no way to get the substantial benefits of data processing in the health care field without embracing some privacy-related risks. How should we do that?

The rewards are easier to identify and comprehend. Recode found a good illustration through a professor of biomedical informatics at Columbia University. His research using health data turned up an interaction between an antibiotic and a heartburn medication that can lead to potentially dangerous heart arrhythmias. Processed in hundreds of similar ways, health data may turn up more and better information about diseases, cures, side effects, and more.

But do those benefits overcome an individual’s right to control what happens with data about his or her own body and health? The status quo in health information makes the question quaint. As the Recode report also found, health data is processed and shared on a large scale.

But companies share health data subject to restrictions. For example, Google may not use it in its advertising programs. (Feel your blood pressure going down?) The company says that its employees are monitored to reduce the chance that one might exploit access to others’ personal information. Anonymized health data can often be used productively, so many benefits are available while risks to patient privacy are mitigated.

But advances in “de-anonymization” make anonymization a moving target. There is always a risk that personal information will be reassembled and revealed or misused to harm patients. Along the same lines, simple control of data is never guaranteed. The potential for data breaches threatens privacy particularly acutely in the health area. That risk exists wherever data are. Any multiplication of risk from moving data among processors (i.e., from Ascension to Google) is mitigated, one hopes, by Google’s capacities in data security.

The presence of a whistleblower, such as it is, helps solve a problem that has nagged the datasphere for some time. It once was assumed that data breaches and similar incidents would be very difficult or impossible to discover. Thus, the argument went, ordinary market and legal sanctions wouldn’t apply in this environment. The implicit remedy was heavier regulation or draconian punishments.

We can now add whistleblowing to the arsenal of tools that public and private actors have to discover and root out wrongdoing and error in the world of personal data. This is a new world, but not one that requires throwing out customary legal and social responses to problems.

That brings us back to the question we skipped over earlier. What happened to the individual’s right to control what happens with data about his or her own body and health?

In the not-too-distant past, a web of protections guarded health information and patient privacy. Medical ethics required privacy. “What I may see or hear in the course of the treatment . . . I will keep to myself,” says the classical version of the Hippocratic oath. “I will respect the privacy of my patients, for their problems are not disclosed to me that the world may know,” goes the modern oath.

Market sanctions and conventional legal protections backed this up. Violating privacy is bad for business, and it can lead to lawsuits for medical malpractice and contract violation. The privacy torts provide further background legal protections.

But in recent decades the dominance of third-party payers in the health care market has been matched by the growth of large “health systems.” Doctors and nurses are well below the pay grades of those who establish these institutions’ information practices and privacy policies. There is a large, uniform market and not very much in the way of boutique providers with bespoke information practices. The market and common law forces that consumers would use to balance privacy interests against efficiency and medical discovery are quite attenuated.

Into this breach stepped the Health Insurance Portability and Accountability Act (HIPAA) in 1996. In the eyes of some, Congress would be the “big dog” that protects patient privacy from the depredations of third-party payers and the big health systems. Never mind that one of the biggest third-party payers is the US federal government itself.

But Congress did not define or protect privacy in HIPAA. It punted to the Department of Health and Human Services (HHS), asking the agency to both determine people’s privacy interests and implement regulations protecting them. (The rulemaking schedule was a masterwork in responsibility avoidance.)

What HHS produced was a system of federally approved protocols for health data. HIPAA allows data sharing with many entities. It is possible to seek deviations from the federal standard. One might get private treatment by paying for services out of pocket in full. But one is no more likely to get special private treatment of data about oneself after HIPAA than before. Large-scale health data sharing is institutionalized, so large business enterprises such as Google and Ascension can share data among themselves to the consternation of the hapless patient and consumer.

There are ambitious projects out there to bring personal data of all types within the control of individuals. Wish them well. Whether there is direct consumer control of data or a return to a health care system with market-guided data practices, it is not obvious that the results would be all that different. The benefits of data sharing in the health field are substantial and clear. The costs — largely denominated in risk and uncertainty — are fairly well mitigated.

But a system that gives patients more control, whether directly over their data or via diverse markets and common law remedies, would be better. The sense and reality of a consumer and patient role in health privacy would help assure people that they could see a doctor without submitting themselves to the unalterable, impersonal data maw we have today.

When it issued the HIPAA regulations, HHS said that privacy is “necessary for the effective delivery of health care” because people would avoid medical treatment without it. It’s a believable claim, but I’m dubious — and aware of no study to show — that the HIPAA regulations relieved consumer concerns and improved health care outcomes by increasing patients’ participation in health care. Maybe something else would work.

This article by Jim Harper first appeared in 2019 on the AEI Ideas blog.

Image: Reuters.