
Commentary

How ONC can strengthen its HTI-1 rule to ensure transparency, fairness, and equity in AI

Niam Yaraghi, Azizi A. Seixas, and Ferdinand Zizi

Azizi A. Seixas, Associate Professor and Interim Chair - Department of Informatics and Health Data Science, Miller School of Medicine, University of Miami
Ferdinand Zizi, Program Director - Department of Informatics and Health Data Science, Miller School of Medicine, University of Miami

June 26, 2024


  • The ONC’s recently finalized rule on Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) may be insufficient to address various biases that can be entrenched in AI systems.
  • To address these challenges, as well as others that may arise through the rule’s implementation, the ONC should focus on standardizing data, audit for algorithmic explainability and transparency, provide standardized definitions and data sets, and conduct post-implementation performance and fairness audits.
  • Overall, these recommendations provide a guide for the ONC to strike an appropriate balance between the rapid development of AI algorithms and ethical considerations.
Editor's note:

This article was previously published on Health Affairs on June 5, 2024.

The Office of the National Coordinator for Health Information Technology (ONC) has recently finalized the Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) final rule. Established under the 21st Century Cures Act, the new HTI-1 final rule mandates updated transparency mechanisms in health information technologies, specifically in the application of artificial intelligence (AI).

This regulation not only strengthens interoperability and clinical decision support; it also ensures AI is used responsibly. It emphasizes unbiased decision making, patient safety, and health equity. And by requiring access to large, reliable data sets, the rule promises to significantly boost the development and refinement of AI technologies in health care, ensuring that they are both effective and equitable. This move is a vital step toward successful development of responsible AI algorithms.

However, despite significant progress toward responsible AI development and use in health care, the final ONC rule has some limitations.

We contend that the current provisions of the rule may be insufficient to address various biases that can be entrenched and perpetuated in AI systems through methods such as transfer learning and other machine learning techniques. Moreover, challenges may arise in the implementation and enforcement of the rule, highlighting the necessity for continual monitoring to ensure adherence.

Policymakers have an imperative to balance the advantages and potential pitfalls of AI in health care; this requires a holistic and multidisciplinary approach that prioritizes accountability, transparency, and fairness in the development of AI algorithms. Embracing such an approach enables health systems to harmonize democratized AI usage with robust oversight, thereby optimizing health care outcomes. To that end, with this article, we present four specific policy recommendations and outline an actionable approach to implement them.

1. Focus on standardizing social, behavioral, and environmental data

To leverage AI’s full potential, developers must be able to access expansive data sets across diverse domains. The integration of information technologies into our daily lives allows us to gather an unprecedented wealth of new data from previously inaccessible sources, including geolocation data, biometrics, and more. Such access is particularly crucial in health care, where outcomes are influenced by a multitude of factors such as various social, behavioral, and environmental indicators.

AI’s ability to consider and integrate these diverse and seemingly unrelated data points can offer a more holistic and accurate understanding of health outcomes, surpassing the limitations of traditional statistical methods. It’s essential, therefore, to prioritize the collection, harmonization, and exchange of data among various stakeholders. And just as important, this process should be approached with a keen awareness of the delicate balance among ethical considerations, privacy concerns, and the performance of AI algorithms.

For these reasons, we believe the ONC should continue to improve interoperability standards to facilitate the exchange and accessibility of medical data between patients and providers. However, it’s equally important for the ONC to recognize the need for enhanced and broadened standards so that they go beyond medical data and also cover social, behavioral, and environmental determinants of health data. Initiatives such as the Gravity Project—which advocate for and contribute to the development of these standards—are helping ensure a more inclusive and fair approach in AI-driven health care solutions.

2. Audit for explainability and transparency of AI algorithms

The ONC should also play a pivotal role in advancing the explainability and transparency of AI algorithms, particularly for complex models such as deep neural networks. This can be achieved by either directly conducting audits or mandating developers to employ various techniques aimed at explaining how their AI algorithms arrive at predictions or classifications. Among these potential techniques are local interpretable model-agnostic explanations and partial dependence plots.

Such measures are crucial for ensuring fairness in AI. By applying methodologies from the field of explainable AI, developers can scrutinize how different demographic features, including race and gender, influence AI outcomes. With that information, they can then address potential biases. We strongly recommend that the ONC make it mandatory for developers to not only use these techniques but also disclose the results for their AI products. This would be a significant step in fostering a more accountable and equitable AI ecosystem in health care. Furthermore, we strongly advise the ONC to require AI developers to integrate existing AI fairness checklists into their development processes, ensuring adherence to a specific and actionable framework throughout the AI product lifecycle.
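To make the kind of audit described above concrete, the sketch below trains a model on entirely synthetic data and measures how its positive predictions differ across a binary demographic attribute, a simple demographic-parity check. The data, features, and group labels are all hypothetical illustrations, not part of the HTI-1 rule; a real audit would apply techniques such as LIME or partial dependence plots to the production model and data.

```python
# Hypothetical fairness-audit sketch on synthetic data; illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Synthetic features: a clinical risk score and a binary demographic attribute.
clinical_score = rng.normal(size=n)
group = rng.integers(0, 2, size=n)  # two hypothetical demographic groups

# Outcome depends on the clinical score but is also (unfairly) correlated
# with group membership in this synthetic data.
logits = 1.5 * clinical_score + 0.8 * group
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X = np.column_stack([clinical_score, group])
model = LogisticRegression().fit(X, y)

# Group-wise rate of positive predictions: a basic demographic-parity check.
preds = model.predict(X)
rates = {g: preds[group == g].mean() for g in (0, 1)}
disparity = abs(rates[0] - rates[1])
print(f"positive-prediction rates by group: {rates}")
print(f"demographic-parity gap: {disparity:.3f}")
```

A disclosed audit of this sort would let regulators and clinicians see, before deployment, that the model's recommendations track a demographic attribute rather than clinical need alone.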

3. Provide standardized definitions and data sets

The rule’s ambiguity extends to crucial concepts such as “fairness” by failing to define the term. One might interpret fairness as ensuring equal performance by a system, program, or provider across diverse demographics, such as race, gender, or income levels. However, in reality, these categories are constantly intersecting—a fact that multiplies their complexity many times over. For instance, differentiating performance between White and Black individuals is straightforward, but adding variables such as gender, sexual orientation, and income status quickly creates more than a thousand subgroups, making meaningful comparisons challenging, if not impossible.
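The combinatorial explosion described above is easy to quantify. The category counts below are illustrative assumptions, not figures from the rule, but they show how a handful of intersecting attributes quickly yields well over a thousand subgroups:

```python
# Hypothetical category counts for each demographic attribute (assumed values).
from math import prod

attributes = {
    "race/ethnicity": 6,
    "gender": 3,
    "sexual orientation": 4,
    "income bracket": 5,
    "age band": 4,
}

# Every combination of categories defines one intersecting subgroup.
subgroups = prod(attributes.values())
print(subgroups)  # 6 * 3 * 4 * 5 * 4 = 1440 subgroups
```

With a fixed test population spread across 1,440 cells, many subgroups contain too few individuals for statistically meaningful performance comparisons, which is precisely why a regulator-defined scope for fairness testing matters.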

This lack of clarity not only leaves the interpretation of fairness to the developers; it also places the onus of ensuring and testing for fairness on them. This approach would be comparable to allowing automobile manufacturers to define and test their own safety standards. And yet, that’s how it works today: Each system’s fairness metrics vary based on developers’ chosen definitions and methodologies. Such flexibility might be intentional, but, considering the wide range of AI algorithms and applications, we strongly advocate for uniform definitions and testing procedures. This approach is essential to enabling meaningful comparisons.

We also believe that the ONC should mandate that AI developers ensure the population health validity of their products. Developers should also be required to rigorously assess the sensitivity, accuracy, and precision of their algorithms in capturing intended outcomes across diverse population groups. For each algorithm, these findings should be transparently reported through a standardized white-label format.

The ONC should also invest significantly in developing uniform, comparable data sets for industrywide use in performance measurement. This concept is similar to ImageNet. Created in 2006 by Fei-Fei Li and her team at Princeton University, ImageNet revolutionized computer vision and AI, particularly deep learning, due to its unprecedented scale and diversity. As a large, meticulously labeled image database, it provided a robust platform for developing and benchmarking advanced machine learning models. Without a standardized data set for testing and comparison, achieving fairness in AI is highly improbable. That’s because the biases inherent in data and AI development processes would otherwise likely be replicated in developers’ data sets and testing methods, rendering their fairness test results not only biased but also incomparable.

4. Audit for performance and fairness post-implementation

The final stage of AI development is deployment, wherein AI algorithms provide specific recommendations or prescriptions for individuals based on patterns identified in the training data set. This stage is pivotal, as there is a risk that the AI system’s recommendations, driven by historical data, may not generalize to current real-world situations. This issue is further complicated when AI algorithms are applied in contexts where the new data significantly differs from the training data, leading to potential inaccuracies or misapplications. Consider, for example, AI models that are trained to predict the risk of opioid dependency based on data gathered from adults; if they are used to predict the risk of such dependency among adolescents, the results may be significantly less accurate.

To mitigate these risks, the ONC should enforce requirements for AI developers to clearly define the conditions under which their algorithms can be reliably generalized, and their recommendations trusted. This would ensure that AI prescriptions are relevant and applicable to the specific contexts in which they are employed. It is crucial that the ONC continues to conduct audits on the performance of these AI algorithms in real-world implementations, focusing on various metrics, including accuracy and fairness. Such ongoing assessments will help ensure that the AI systems remain effective and equitable in their practical applications, adapting to the evolving real-world data and contexts. This proactive approach by the ONC will be instrumental in maintaining the integrity and utility of AI-driven health care solutions.

Achieving a balance

It is imperative to balance the rapid development of AI algorithms with ethical considerations, ensuring that the pursuit of speed does not compromise fairness. Achieving this balance requires a vast data ecosystem, akin to a real-world data marketplace, where various data providers, beyond just medical and health care providers, can contribute their assets.

Initiatives by companies such as Databricks and Snowflake exemplify this approach in the private sector. This strategy will enable the creation of a large, de-identified, privacy-preserving data set. Given the diverse range of available demographic data, such a data set will be instrumental in accelerating AI tool development and crucial for assessing and mitigating biases.

We commend the ONC for its pivotal first step in establishing a regulatory framework that promotes transparency and fairness in health care AI. However, additional measures are necessary.

Key among these is the standardization of data sources used in training and developing these algorithms. It is imperative that there are explicit and clearer transparency requirements. Moreover, the provision of standardized testing data, alongside more precise definitions, is essential. Equally important is the continuation of audits throughout these algorithms’ post-implementation phase. These enhancements are crucial for managing the complex landscape of health care AI.
