This week, the U.S. Department of Health and Human Services (HHS) took its first concrete step toward regulating the use of artificial intelligence (AI) in health care. Through its Office of the National Coordinator for Health Information Technology (ONC), the agency finalized its “Health Data, Technology, and Interoperability” rule, which among other things establishes a framework for the regulation of AI and other predictive algorithms that are incorporated into electronic health record (EHR) systems used by health care providers.
Scope of Rule’s AI Provisions
The ONC oversees the Health IT Certification Program, under which developers of health information technology (HIT) can seek to have their software certified as meeting certain criteria. While participation in the program is voluntary, EHR developers have an incentive to have their software certified because certain health care providers receive higher Medicare (and sometimes Medicaid) payment rates by using certified EHRs. ONC-certified HIT supports the care delivered by more than 96 percent of hospitals and 78 percent of office-based physicians around the country.1
Predictive algorithms used by health care providers that are not offered as part of certified HIT are outside the regulation’s scope. Therefore, large language models (LLMs) like ChatGPT would only be subject to the rule to the extent they are offered by a developer of certified HIT. Similarly, AI used by health insurers to determine whether to approve a certain service—which has been the subject of recent litigation—is not subject to the rule.
The final rule imposes requirements on the use of predictive Decision Support Interventions (DSI) as part of certified HIT. Predictive DSI is “technology that supports decision-making based on algorithms or models that derive relationships from training data and then produces an output that results in prediction, classification, recommendation, evaluation, or analysis”; in short, one that employs AI. For example, according to the ONC, predictive DSI includes models, trained based on relationships observed in a large data set, that predict whether an image contains a malignant tumor—a common way AI is being used in radiology.
If certified HIT does use predictive DSI, the HIT developer must make available to the software users detailed information about the predictive DSI, including:
- The purpose of the intervention;
- Funding sources for the intervention’s development;
- Exclusion and inclusion criteria that influenced the training data set;
- The process used to ensure fairness in development of the intervention; and
- A description of the external validation process.
These transparency requirements are intended to ensure that health systems, clinicians and other users of predictive DSI understand the software being made available to them; the requirements also align with other government actions aimed at increasing transparency with regard to AI tools.
Further, under the intervention risk management requirements, the predictive DSI must be subject to an analysis of potential risks and adverse impacts associated with its validity, reliability, robustness, fairness, intelligibility, safety, security and privacy; practices to mitigate risks; and policies and implemented controls for governance (including how data are acquired, managed and used).
The ONC indicates that the purpose of these requirements is to promote the development of algorithms that are fair, appropriate, valid, effective and safe (FAVES) and to ensure that the AI used by health care providers can be trusted. The agency also reiterates prior concerns about the potential for AI to promote biased medical decision-making and the need for the rule to promote health equity. It believes that the increased transparency of models and practices will result in the selection and use of fairer models. For instance, the ONC notes that biased models “exhibit higher sensitivity or specificity for some groups than others and are likely to deprioritize treatment for certain groups. They are also likely to recommend inappropriate treatment for certain groups[,] resulting in limited benefit and potential harm to those certain groups relative to those for whom the models perform well.” In issuing the rule, the ONC dismissed concerns from some commenters that the regulation of AI is premature and will stifle innovation.
Implications for AI and the Health Care Industry
As most recently highlighted in our newsletter on Biden’s expansive Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, while the federal government has noted the promise of AI, it has at the same time raised concerns over the lack of transparency regarding AI’s data, models and security and its potential for perpetuating biases and inequities. The ONC’s rule is the first health care-focused regulation targeted at this concern—and specifically aimed at providers’ use of AI.
For now, this rule affects certified HIT developers more than health care providers, who remain free to use unregulated AI tools outside of certified HIT. Health care providers should nonetheless pay attention to the requirements the ONC is imposing on predictive DSI, as they may foreshadow how HHS could regulate health care providers’ use of AI through other mechanisms, such as Medicare Conditions of Participation. AI is being widely adopted by health care providers to assist with everything from diagnosing conditions and performing administrative tasks to documenting patient encounters and communicating with patients. As AI becomes more accurate, health care providers are more likely to rely on it daily.
HHS has another proposed rule aimed in part at AI. Under proposed regulations implementing Section 1557 of the Affordable Care Act (ACA), which prohibits discrimination on the basis of race, color, national origin, sex, age or disability in certain health care programs and activities, a covered entity is prohibited from discriminating through the “use of clinical algorithms in its decision-making.”2 In this proposed rule, HHS notes that “[w]hile covered entities are not liable for clinical algorithms that they did not develop, they may be held liable under this provision for their decisions made in reliance on clinical algorithms.”
Accordingly, health care providers should ensure they understand how AI is being used within their organization, the data used to develop and test the models, and the security of the AI, so that they can meet any transparency and governance requirements and demonstrate that their AI has been tested for bias and discrimination.
Other Notable Changes
In addition to the AI-specific regulations, the rule includes other updates to the ONC Health IT Certification Program, as well as new provisions related to information blocking, designed to drive and improve interoperability and the secure exchange of health information.
Under the final rule, developers of certified HIT are now subject to a new Condition of Certification (the “Insights Condition”), under which they will have to report certain metrics in order to address information gaps in the HIT marketplace and provide insights on the use of certified HIT. The rule also adopts United States Core Data for Interoperability (USCDI) Version 3 as the new baseline standard within the Health IT Certification Program as of January 1, 2026.
With respect to information blocking, the ONC has narrowed the definition of “health IT developer of certified health IT,” with the result that fewer health care providers will fall within the definition and be subject to the high penalties that accompany it. Further, the ONC has expanded the exceptions to practices that constitute information blocking: these are the “infeasibility exception” and a new exception for entities participating in the Trusted Exchange Framework and Common Agreement (TEFCA).
2 Nondiscrimination in Health Programs and Activities, 87 FR 47824-01.