By: Erin Rutherford
Capitalizing on the benefits and promise that innovative technologies offer health care requires readily transmissible health data. As the health care industry becomes increasingly digitized, health care data is essential not only for improving diagnoses and treatments but also for improving the quality, efficiency, and accessibility of care. Achieving the latter three requires using health data in ways that might be unexpected to individuals, and even to researchers and product developers, until after the data is obtained. This presents several risks, including data misappropriation, new avenues of discrimination, and increased vulnerability to security threats. Failure to address these and other problems will cause individuals to lose trust in the health care system and become hesitant to disclose information even for their own treatment, let alone for secondary purposes that may not benefit them directly but instead serve a larger interest of the health care system or society.
System-wide distrust and hesitation could stall innovation and, more significantly, impair the integrity and function of the health care system. To reduce frustration related to the privacy of health information and encourage technological innovation, the United States should implement a flexible privacy framework that prioritizes individual autonomy and allows practices to adapt as different contexts and individual interests require.
II. Current Privacy Models for Health Care Data
The Health Insurance Portability and Accountability Act (HIPAA) is the chief federal privacy law governing health information generated in a medical context. HIPAA initially addressed improving the design and sale of health insurance and included provisions regarding the electronic processing of insurance claims and other types of medical information. As the need for stronger privacy protections grew, HIPAA’s privacy regulations evolved. Parts 160 and 164 of the Code of Federal Regulations contain the key privacy provisions.
These privacy regulations — known collectively as “the Privacy Rule” — require certain “covered entities” to maintain the confidentiality of defined types of “protected health information” (PHI). Part 160 defines PHI as “individually identifiable health information” maintained or transmitted in electronic media or any other form or medium. A covered entity under HIPAA includes “a health plan, a health care clearinghouse, and a health care provider who transmits any health information in electronic form . . . .” In turn, a “business associate” is a person, business, or organization who, while not an employee of a covered entity, “creates, receives, maintains, or transmits protected health information for a function or activity . . . including claims processing or administration . . . or . . . provides” services such as legal, accounting, or management services on behalf of the covered entity. The Privacy Rule prohibits covered entities from using or disclosing PHI unless one of the exceptions in part 164 applies. These exceptions include:
- PHI may be disclosed to the individual or the individual’s personal representative.
- PHI may be used for treatment, payment, or health care operations.
- PHI may be disclosed where the entity receives a more specific valid authorization.
- A specified subset of PHI may be disclosed without written authorization in certain situations after the patient has been given an opportunity to object.
- An individual’s PHI may be disclosed without his or her authorization in exceptional circumstances (e.g., where the disclosure is required by law).
- A limited data set that excludes most identifying information may be disclosed for use in public health, research, and operations.
- Covered entities may also disclose PHI incident to uses or disclosures otherwise permitted or required, so long as the covered entity has followed the standards governing the minimum necessary disclosure of information and has put proper administrative, physical, and technical safeguards in place.
If a covered entity does transmit PHI, it must adhere to the “minimum necessary” standard. Under this standard, dissemination is limited to the amount necessary to accomplish the intended purpose of the use, disclosure, or request.
Additionally, HIPAA establishes a standard for when health information is considered deidentified, at which point it loses its legal protections and can be disseminated for secondary purposes. Health information is considered deidentified when there is “no reasonable basis” for believing that the patient can be identified from the data. Organizations can meet this standard through one of two methods: expert determination or safe harbor. Expert determination entails trained experts using statistical methods to determine that there would be a “very small risk” of reidentification. The safe harbor method, the more commonly used of the two, entails removing 18 types of patient identifiers. Limited data sets require removing only 16 of those 18 types. Notably, no technological method is currently capable of deidentifying free-form text.
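To make the safe harbor method concrete, the following Python sketch drops a small, hypothetical subset of identifier fields and generalizes dates to the year. It is a toy illustration, not a compliant implementation: the actual rule (45 C.F.R. § 164.514(b)(2)) enumerates all 18 identifier categories and imposes further conditions, such as aggregating ages over 89, and the field names below are assumptions.

```python
# Toy sketch of HIPAA-style "safe harbor" deidentification.
# The field names are hypothetical and cover only a subset of the
# 18 identifier categories listed in 45 C.F.R. § 164.514(b)(2).

SAFE_HARBOR_FIELDS = {
    "name", "street_address", "phone", "email", "ssn",
    "medical_record_number", "ip_address", "photo",
}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with identifier fields removed
    and dates generalized to the year, safe-harbor style."""
    clean = {}
    for field, value in record.items():
        if field in SAFE_HARBOR_FIELDS:
            continue  # drop direct identifiers entirely
        if field.endswith("_date"):
            clean[field] = value[:4]  # keep only the year, e.g. "1984"
            continue
        clean[field] = value
    return clean

patient = {
    "name": "Jane Doe",
    "ssn": "000-00-0000",
    "birth_date": "1984-07-12",
    "diagnosis": "type 2 diabetes",
}
print(deidentify(patient))  # {'birth_date': '1984', 'diagnosis': 'type 2 diabetes'}
```

The sketch also illustrates why free-form text resists this approach: field-based removal works only when identifiers sit in predictable, labeled slots, not when they are embedded in narrative clinical notes.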
The third exception stated above (the valid authorization exception) is relevant for secondary uses of identifiable information. To obtain a patient’s consent to use her identifiable health information, an organization must provide details specifying third-party interests and involvement, the intended use of the data, and the timeline of that use. While consent may seem to offer sufficient privacy protection, current privacy models expect consent, and the individual, to do too much.
Giving consent is often thought to mean that the individual has made a well-informed choice; however, for several reasons it falls short of legitimately ensuring individual autonomy. First, individuals rarely understand what they are consenting to and what implications will follow from that consent. Several factors can lead to this failure to understand. People often do not actually read the form in front of them, particularly when an entity presents it in an electronic format (as is often the case in this digital era, with ever-changing privacy policies and terms of agreement). Furthermore, an individual who does read the long, complex form is unlikely to truly understand its terms. Even when an entity makes an effort to write a consent form in plain language in the hope of making it easy to understand, important information is often left out or distilled to the point that it is no longer completely accurate.
A second reason consent practices fall short of advancing individual autonomy is that they fail to provide meaningful choice. When it comes to health care, individuals usually give consent to a party with far more bargaining power, on a take-it-or-leave-it basis. Providers or entities often request consent shortly before a course of treatment, a diagnostic test, or the use of a product or app. At that moment, a patient is unlikely to decline and start over with a search for an alternative provider. Additionally, consent for treatment accompanies consent for secondary use, which can lead a patient to believe there is no way to refuse the secondary use without refusing the necessary care.
It is certainly possible, though, that a patient who received an accurate and thorough description of the secondary use would support that use and willingly consent. But this illustrates the need for consent practices that vary according to how a party seeks to obtain the consent and the interest behind acquiring health data for a secondary purpose. For example, if an entity requests broad consent to store data for uses not yet known, it is critical that the individual understand the consequences of long-term data storage, such as (1) the data’s eventual transfer to other parties, (2) whether the entity will notify the individual of any new party who gains access to the information, and (3) if so, what that new party intends to use the information for. Without an understanding of the short-term and long-term risks and benefits, an individual cannot truly make an informed decision and therefore lacks autonomy. Relying on consent in situations where the patient may understand the terms of agreement but feels it is difficult to refuse vitiates, if not destroys, autonomy.
III. Privacy Model Proposal
Realizing any of the potential benefits that digitized health care data presents depends on a foundation of individual trust in the system. If individuals do not believe that the people to whom they entrust their intimate health information will protect it, they will hesitate to share that information, even for their own care and treatment needs, let alone for secondary purposes that may not directly benefit them. To establish this trust, individual autonomy must be the priority in any privacy model.
Currently, autonomy in health information privacy decisions hinges on privacy models that are too generalized and outdated for the digital health era. This places too much responsibility on individuals and leaves them without an actual choice in what and how much information they share. The following is a proposal for a federal hierarchical framework that should apply to all entities that handle health data:
- Health care data shall only be collected and used with the individual’s permission and fully informed consent.
- The collection, disclosure, and dissemination of health care data may be allowed only where such action furthers a legitimate societal interest and does not interfere with number one.
- Health care data may be used to benefit the collector, discloser, or disseminator of such data so long as it does not interfere with number one and number two.
A. Health care data shall only be collected and used with the individual’s permission and fully informed consent
Individual autonomy in the health care industry means the individual controls what happens with their health information according to their subjective interests and privacy values. Ensuring the privacy interests of the individual fosters the sense of trust that is critical for the success of the health care system’s primary and secondary interests.
Some people may enjoy receiving customized advertisements; others may feel such advertisements violate their privacy. Either way, the decision should be up to the individual. The individual needs to be able to make this decision with trust that entities will use their information only according to their wishes. They also need to be able to make this choice from a place of control, i.e., feeling that they have an actual choice in the matter. Current privacy models can neither account for these subjective preferences nor foster a sense of trust or autonomy. If individuals trust the entity with which they share their information, they will share it honestly and completely. This not only improves their own quality of care but also builds the foundation for any downstream innovative uses.
B. The collection, disclosure, and dissemination of health care data may be allowed only where such action furthers a legitimate societal interest and does not interfere with number one of the proposed framework.
Innovative use of digitized health data can reform the health care industry in numerous ways, offering benefits at the individual and the societal level. Societal benefits, however, are dependent upon individual trust in the system. It is entirely possible, for example, that people would be more than willing to share identifiable information if it led to a real possibility that researchers could finally cure a devastating disease, such as Alzheimer’s. For this to work in a way that does not infringe upon an individual’s interests, the individual needs a thorough understanding as to the benefits of secondary use as well as the risks involved. In accordance with HIPAA’s data minimization principle, establishing this understanding would require an explanation of all the information to which a third party would have access and the degree of identifiability necessary for the intended use. Requiring a comprehensive explanation of both the benefits and the risks ensures that the secondary use serves a purpose that the individual finds worthy enough to warrant lessening their personal privacy protections. Without a thorough understanding of pros and cons, the individual cannot weigh them in a way that drives an informed decision.
Ensuring the individual’s thorough understanding could create a burden on the party wishing to obtain the data, but this added burden appropriately requires the party seeking the information to do its own weighing of pros and cons with respect to its interests. It is a good thing for an entity to carefully consider how it can meet the individual’s needs in a way that also allows it to meet its own. Current privacy models place all of this responsibility on the individual, yet data collectors are better positioned to consider the costs and benefits meaningfully and adjust accordingly. Adequate patient and consumer education is critical to guaranteeing that even when an individual’s health information aids in achieving a more attenuated benefit, that benefit is still within the individual’s interest and therefore does not violate the first rule of this proposed privacy model. Implementing this framework would require prescriptive guidelines regarding the appropriate degree of minimization, identifiability, and notice and informed consent. These guidelines would vary according to context: the more downstream or ambiguous the potential uses, the more granular the information the individual would be entitled to receive about those uses and the types of data they require.
C. Health care data may be used to benefit the collector, discloser, or disseminator of such data so long as it does not interfere with number one and number two of the proposed framework.
Individual preferences regarding the collection, sharing, and use of their health information could range from (1) no notice or consent needed, to (2) consent but no notice needed, to (3) notice and consent needed before any action commences, or (4) somewhere else along that spectrum. Some individuals, for example, may be willing to relinquish health information to provide the collector or disseminator with some kind of purely internal benefit. Others might not.
For entities to account for these differences, it is up to the government to put forth prescriptive rules specifying which contexts and uses require what kinds of protections for the collection, use, and transmission of health information. Data uses intended only for an internal benefit, such as improving marketing strategies, would be permissible if the person or entity obtaining consent made those intentions clear to the individual. Such a secondary use would require full disclosure of the type of data the entity is requesting and its degree of identifiability. Further, the entity would need to separate the process for obtaining consent for an internally beneficial secondary use from the process for obtaining consent for treatment or for use of the drug or device. Other secondary uses, such as product development, may blur the line between an internal benefit and a legitimate societal interest, illustrating the need for governmental clarification of what qualifies as a legitimate societal interest. Preventing coercive or underhanded acquisition of an individual’s health information through increased disclosure and consent requirements ensures trust in the health care system. This is particularly necessary for uses that benefit neither the individual nor society at large.
For any given use of health data, an entity must determine what types of health data it needs to meet the anticipated purpose. From there, it can determine the necessary degree of identifiability. As the degree of identifiability increases, the entity must employ more stringent privacy protections. Similarly, as the degree of medical necessity increases, the entity must ensure that it does not condition treatment on surrendering health information. Having these prescriptive rules within a flexible framework is crucial if innovators are to achieve their goals without infringing on individual interests, and knowing what kinds of protective measures are required will relieve innovators of some burden and confusion. Furthermore, it ensures that organizations use data efficiently and in a way that promotes cultural development and prioritizes individual autonomy. Innovation without proper privacy protections is no benefit at all if it comes at the cost of trust in the system. Individual privacy interests vary, as do opinions on the types of secondary uses that warrant relaxing those interests. Privacy models for health care data need the ability to adapt to these differences, promoting societal development in a way that prioritizes and accommodates individual interests and thereby protecting the primary purposes of the health care industry while facilitating its improvement.
A flexible privacy framework that prioritizes individual autonomy and allows practices to adapt to different contexts and interests will both reduce frustration related to the privacy of health information and encourage technological development. Current laws are not capable of striking this balance. Many sources of health information are unregulated, and even the regulations in place depend on consent practices that fall short of truly protecting privacy interests and advancing individual autonomy. Digitized health data holds tremendous innovative potential both to improve individual care experiences and to achieve larger health care goals, such as decreased costs and increased accessibility. However, if lawmakers and policymakers do not harness this potential in a way that preserves individual trust, none of these potential benefits will come to fruition and the health system will crumble. Privacy models need to allow innovators to create revolutionary solutions without also allowing entities to violate privacy interests and create new avenues for discrimination.
 Charlotte Tschider, AI’s Legitimate Interest: Towards A Public Benefit Privacy Model, 21 Houston J. Health L. & Pol’y 125, 148 (2020).
 Hall et al., Health Care Law & Ethics 124-27 (Rachel E. Barkow et al. eds., 9th ed. 2018).
 45 CFR § 160.103.
 45 CFR § 164.
 Mark A. Rothstein, Is Deidentification Sufficient to Protect Health Privacy in Research?, 10 The Am. J. of Bioethics 3, 4 (2010).
 Tschider, supra note 1, at 148.
 Daniel J. Solove, Privacy Self-Management and the Consent Dilemma, 126 Harv. L. Rev. 1880, 1884 (2013).
 Charlotte Tschider, The Consent Myth: Improving Choice for Patients of the Future, 96 Wash. L. Rev. 1505, 1517 (2018).
 Solove, supra note 15, at 1888-89.
 Tschider, supra note 1, at 155.
 Tschider, supra note 16, at 1522.
 Id. at 1519.
 See generally Tschider, supra note 1 (discussing the idea of weighing legitimate interests).
 Rothstein, supra note 9, at 8.
 Arellano et al., supra note 9, at 116.