On October 30, President Biden issued a landmark executive order on artificial intelligence (AI). While AI originated as a term for methods of simulating human intelligence in machines, today it is used as an umbrella term for a range of advanced processing and decision-making technologies that sometimes, but not always, take inspiration from biological life. These technologies are powerful and fast, but in their complexity they are also frequently poorly understood, and the full consequences of a given application are often difficult to predict. What we do know better than before, though, is that AI is not immune to the biases and limitations of its developers or its users.

It is sound policy, then, to ensure that AI usage has adequate safeguards in place. With respect to health care, the technology has the capacity to revolutionize the industry, but it also has the potential to exacerbate racism, sexism, and other structures of discrimination that are already too prevalent within the health care system. This commentary will explore AI’s risks to health equity as well as its potential benefits. It will then discuss policy recommendations that build on Biden’s executive order to promote greater health equity and access to quality health care services.

AI in Health Care

Biden’s executive order (EO) is an important first step in AI regulation, and one that is long overdue. AI has gone largely unchecked by federal legislation since its introduction to health care in the 1970s. Its prevalence has also grown dramatically since then: today, 98 percent of health care organizations have adopted or are planning to adopt an AI strategy, and the FDA has approved 700 AI-enabled medical devices, with plans to accelerate further approvals.

Enthusiasm for the use of AI in health care is far from unanimous among experts. Sixty-five percent of physicians are “very” or “somewhat” concerned about using AI to drive diagnosis and treatment decisions. There is reason for this level of ambivalence. While proponents tout that AI will improve diagnostic accuracy, improve access to care, and cut health care costs overall, critics are concerned about data security, job displacement, and physicians potentially becoming legally responsible for overriding AI’s diagnostic decisions.

The EO attempts to account for some of the concerns about AI with a few considerations specific to health care. The first is a specific directive to the Department of Health and Human Services (HHS) to develop a strategic plan for AI technology deployment, including research and discovery, drug and device safety, health care delivery and financing, and public health. This strategy will include specifications on maintaining quality of care and evaluating the performance of AI-enabled health care tools. The EO also pledges to “address and remove” algorithmic discrimination and to “promote the deployment of equity principles” in AI-powered technology. These are welcome first steps: it is critical to be as clear-eyed about the risks AI usage poses to health equity as we are enthusiastic about its potential benefits.

AI’s Risks to Health Equity

Flaws in AI (such as weak data inputs, homogenous development teams, and limitations in the underlying code) can have dangerous effects on health, especially for marginalized populations. Proactively mitigating these risks and developing careful regulation could help prevent technologies that systematically misdiagnose, misinform, and improperly assess risk.

Risk of Bias in Clinical Trials and Foundational Data

The acceleration of AI use in health care comes against a backdrop of long-standing biases in medical research. Clinical trials have historically relied on data from white, male subjects. Because of this underrepresentation, the results from these trials are often not safely generalizable to the broader population. People experience symptoms, disease, and medications differently depending on their age, sex, gender, race, and medical history. One example involves albuterol, the most commonly prescribed asthma medication in the world: 67 percent of Puerto Rican patients and 47 percent of Black patients did not respond to albuterol treatment due to a genetic mutation that makes the medication less effective. Excluding these populations from clinical studies is not only detrimental to the health of these groups (Black and Puerto Rican patients are nearly three times more likely to die from an asthma attack than white patients), but it also leaves gaps in our understanding of disease that create underlying biases in the health care system.

Many health care technologies use machine learning (ML), a branch of AI in which algorithms are “trained” on this foundational medical data to classify data, make predictions, and identify key insights. Machine learning’s outputs will only be as good as the data it is trained on. If the input data lacks diverse representation, the output will be biased, inaccurate, and potentially dangerous to the populations left out of the original dataset. These biased outputs have already had real implications for marginalized groups: AI-powered technologies have failed to identify melanoma on Black skin as reliably as on white skin and have missed cases of liver disease more frequently in women than in men. Similar inequities can be found in AI chatbots. In 2022, Stanford ran a series of clinical questions through four AI tools; all four models used debunked race-adjusted information to answer questions about lung capacity and kidney function.
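To make this mechanism concrete, here is a minimal toy sketch in Python. Every name and number is invented for illustration, and no real medical model works this simply; the point is only that a decision rule trained mostly on one group can miss far more true cases in an underrepresented group whose disease presents differently, even while doing exactly what it was optimized to do.

```python
# Toy illustration (not any specific medical model): a hypothetical
# screening rule is "trained" on data that underrepresents group B,
# whose disease presents at lower values of a biomarker. The single
# learned threshold then misses more cases in group B.
import random

random.seed(42)

def make_patients(group, n, sick_mean):
    """Simulated patients: biomarker ~ N(mean, 1); half are sick."""
    patients = []
    for _ in range(n):
        sick = random.random() < 0.5
        mean = sick_mean if sick else 0.0
        patients.append({"group": group, "x": random.gauss(mean, 1.0), "sick": sick})
    return patients

# Group A dominates the training data; group B's disease signal is weaker.
train = make_patients("A", 950, sick_mean=2.0) + make_patients("B", 50, sick_mean=1.0)

# "Training": pick the threshold that maximizes accuracy on the skewed data.
def accuracy(threshold, data):
    return sum((p["x"] > threshold) == p["sick"] for p in data) / len(data)

candidates = [i / 10 for i in range(-20, 40)]
best = max(candidates, key=lambda t: accuracy(t, train))

# Evaluate on balanced test sets: the miss (false negative) rate diverges.
for group, sick_mean in [("A", 2.0), ("B", 1.0)]:
    test = make_patients(group, 5000, sick_mean)
    sick = [p for p in test if p["sick"]]
    misses = sum(p["x"] <= best for p in sick) / len(sick)
    print(f"group {group}: false negative rate = {misses:.0%}")
```

The rule’s overall accuracy on its own training data looks fine; the harm only becomes visible when performance is broken out by group, which is why the audits discussed later in this piece matter.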

The Danger of AI Creating and Perpetuating False Information

Many of these chatbots, such as ChatGPT, are free, which could make them an accessible, valuable tool for low-income patients seeking medical information. However, AI chatbots have been found to create misinformation (that is, authoritatively presented but ultimately false information) when there is not enough data available. This misinformation can occur when ML technology is prompted to generate content that goes beyond what it has learned from its input data. Ross Harper, founder and CEO of a company that uses AI for behavioral therapy, notes that, “There are probably a number of examples already today—and there will be more coming in the next year—where organizations are deploying large language models in a way which is actually not very safe.” Such misinformation has the potential to cause real harm to patients who use these technologies to inform their medical decisions.

The Risk of AI Misuse Resulting in Inequitable Insurance Decisions

Some insurance providers have begun using AI to assess risk and make coverage decisions. Here, too, there is the potential for biased inputs to result in AI producing biased outcomes. One study found that a widely used algorithm systematically under-assigned the health risks of Black patients compared to white patients. The faulty algorithm used health costs as a datapoint to assess health needs. Because less money is spent on Black patients, who have less access to care, the algorithm falsely assessed them to be healthier than their white counterparts. Black patients, then, had to be much sicker than white patients to receive appropriate care.
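A stylized sketch of this proxy problem follows; the numbers and variables are hypothetical and bear no relation to the actual algorithm in the study. It shows how selecting patients by predicted cost rather than by health need reproduces exactly this pattern:

```python
# Hypothetical sketch of the proxy problem described above: ranking
# patients by *cost* instead of *health need*. If one group incurs
# lower costs at the same level of illness (e.g., because of reduced
# access to care), a cost-based cutoff under-selects that group.
import random

random.seed(0)

patients = []
for i in range(10_000):
    group = "B" if i % 2 == 0 else "A"
    need = random.uniform(0, 10)               # true severity of illness
    access = 0.6 if group == "B" else 1.0      # group B gets less care
    cost = need * access * 1000 + random.gauss(0, 500)
    patients.append({"group": group, "need": need, "cost": cost})

# A care-management program enrolls the top 10% by cost (the proxy).
patients.sort(key=lambda p: p["cost"], reverse=True)
enrolled = patients[:1000]

for g in ("A", "B"):
    count = sum(p["group"] == g for p in enrolled)
    avg_need = sum(p["need"] for p in enrolled if p["group"] == g) / max(1, count)
    print(f"group {g}: {count / len(enrolled):.0%} of slots; "
          f"avg need among enrolled = {avg_need:.1f}")
```

Because cost encodes access as well as need, any cutoff applied to cost penalizes the group with less access: group B receives fewer program slots and must be measurably sicker to be enrolled. The fix is to predict need directly, not to fine-tune the cutoff.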

Insurance companies are also using AI to predict discharge dates. One such case was highlighted at the U.S. Senate Subcommittee on Primary Health and Retirement Security. Christine Huberty, a supervising attorney at the Greater Wisconsin Agency on Aging Resources, testified about a cancer patient, Jim, who was suffering from pneumonia. Although Jim’s inability to care for himself was well documented (he was unable to swallow, had unsafe oxygen levels, and required assistance with toileting, bathing, and dressing), the algorithm determined that his case necessitated only 17.2 days at a short-term rehabilitation facility; Jim’s insurance denied his care on the seventeenth day based on the algorithm’s predicted discharge date. He was forced to return home despite these risks, not because he had recovered, but because the cost of care became prohibitive after his insurance’s denial.

His family helped him appeal the insurance company’s decision. On the second appeal, the decision was overturned and coverage for a longer stay was approved, though his health suffered as a result of his early discharge. The case adjudication algorithm had incorrectly decided the extent of care the patient would receive, and Jim is not alone. Huberty noted that, “Some reports show that only 1% of denials are appealed, with 75% of those overturned. Use of an algorithm… is churning out hundreds of thousands of incorrect denials that go largely unchallenged, leaving patients and their families to suffer.” Low-income patients and the elderly will continue to shoulder the brunt of these AI-powered mistakes if the technology’s use is not properly regulated.

AI’s Potential to Advance Health Equity

Despite these risks, there is also enormous potential for AI to overcome many of the health equity issues that are pervasive in health care today. AI can synthesize large amounts of data faster than humans, automate burdensome processes, and quickly make key decisions based on real-time changes. These capabilities benefit patients and doctors alike, yielding potential health care cost savings, improving proactive care planning and early intervention, and reducing physician burnout.

AI Could Decrease Health Care Costs in Ways That Benefit Marginalized Groups

Some estimates suggest that AI technologies could cut U.S. health care costs by $150 billion to as much as $360 billion per year. Where actual savings fall within this range depends on how AI is used. Researchers have identified possible areas of adoption, including improving clinical operations, detecting future adverse events, optimizing operating rooms, and streamlining referrals. Insurance providers could use AI in claims management, in automating prior authorization, and in relationship management with health care providers.

High health care costs disproportionately affect marginalized groups due to a confluence of socioeconomic and environmental factors. These communities may face barriers such as limited access to transportation, quality education, and employment opportunities. As a result, marginalized groups are more likely to be underinsured or to lack health insurance altogether. Large-scale health care cost savings could improve access to care, increase the frequency with which patients use health care services, decrease patients’ out-of-pocket costs, and improve health outcomes. Cost savings from AI could also be invested in proactive initiatives, such as health education and outreach programs that target marginalized communities.

However, given the nature of the health care system in the United States as it stands, insurance companies, hospitals, and pharmaceutical companies are currently incentivized to capture these savings, capitalize on the increased efficiency, and grow their own profit margins. Whether decreased health care costs will have downstream benefits for marginalized groups depends on how AI regulation and accountability take form.

AI’s Precision Analysis Can Be Used to Achieve Early Intervention

One key advantage of AI-powered tools is their ability to analyze medical images, such as X-rays, CT scans, and MRIs, with increased precision. This precision can lead to early intervention, which can prevent more severe health issues down the line.

For example, AI’s ability to precisely analyze medical images has the potential to improve early breast cancer detection by 23 percent. This is particularly relevant to Black women, who die from breast cancer at a rate 40 percent higher than white women. Though this disparity is most likely caused by a wide variety of economic, social, and behavioral factors, AI-enabled early intervention could help save the lives of those who seek preventative care, helping patients avoid costly interventions down the line, such as chemotherapy or radiation therapy. Avoiding these costly treatments is especially consequential for marginalized groups, who are less likely to have adequate insurance to cover them, less likely to have easy access to health care providers who can deliver them, and less likely to have paid medical leave to receive and recover from them.

AI and Clinician Burnout

Physician burnout is a critical health equity issue: it affects not only the well-being of physicians but also patient outcomes. One Mayo Clinic study found that residents who reported symptoms of burnout had higher rates of racial bias and unconscious prejudice. The study also noted that due to these implicit biases, “… black patients have greater distrust, have lower levels of adherence to treatment recommendations, and are less likely to follow up.”

Therefore, if AI can reduce the burden on physicians, it has the potential to reduce racial bias. In a recent Medscape study, 60 percent of physicians who feel burned out indicated that it is due to too many bureaucratic tasks, such as charting and paperwork. Dr. Keith Sale, vice president and chief physician executive of ambulatory services at the University of Kansas Health System, states that, “The integration of AI and its consumption of healthcare data carries tremendous opportunities for improved patient care and outcomes and reduced physician and clinical team burnout.” With proper oversight, AI can be an important tool for easing administrative burdens, improving health outcomes, and advancing health equity.

Policy Recommendations and Considerations for Implementation

Though Biden’s EO addresses both health care and equity, its provisions are preliminary. The EO calls for task forces, strategic plans, and further research to be conducted. In addition to these measures, federal agencies should consider implementing guardrails that increase transparency, eradicate bias in AI datasets, and promote inclusion and diversity on development teams.

Remove Biases in Foundational Data

Biased data is one of the key issues with AI-powered health care tools. Policymakers must require audits that look for biased data during development and then at regular intervals once the technology is in use. One example of how this could work in practice is being promoted by the Coalition to End Racism in Clinical Algorithms in New York City. The coalition is lobbying for health systems to stop using AI that relies on datasets with underlying biased assumptions (i.e., technology that underestimates Black patients’ lung capacity, incorrectly assesses patients’ ability to give birth vaginally after a cesarean section, and overestimates Black patients’ muscle mass).
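As a rough illustration of what such an audit could look for, the following sketch compares error rates across patient groups, since equal overall accuracy can hide very unequal miss rates. The record format and function names are hypothetical; a real audit would also examine calibration, data provenance, and deployment context.

```python
# Illustrative sketch of the kind of audit described above: compare a
# model's error rates across patient groups, before deployment and at
# regular intervals afterward. `records` would come from the AI tool
# under review; here they are invented examples.
from collections import defaultdict

def audit_by_group(records):
    """Report false negative and false positive rates per patient group."""
    tallies = defaultdict(lambda: {"fn": 0, "fp": 0, "pos": 0, "neg": 0})
    for r in records:
        t = tallies[r["group"]]
        if r["actual"]:
            t["pos"] += 1
            t["fn"] += not r["predicted"]   # sick patient the model missed
        else:
            t["neg"] += 1
            t["fp"] += r["predicted"]       # healthy patient flagged as sick
    for group, t in sorted(tallies.items()):
        fnr = t["fn"] / t["pos"] if t["pos"] else 0.0
        fpr = t["fp"] / t["neg"] if t["neg"] else 0.0
        print(f"{group}: false negative rate {fnr:.0%}, false positive rate {fpr:.0%}")

audit_by_group([
    {"group": "A", "predicted": True, "actual": True},
    {"group": "A", "predicted": False, "actual": False},
    {"group": "B", "predicted": False, "actual": True},   # a missed case
    {"group": "B", "predicted": False, "actual": False},
])
```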

Policymakers must also ensure that these biases do not find their way into new technologies. One policy lever is to require equity and fairness impact assessments and to develop federal or state regulatory frameworks to oversee their implementation. These assessments would evaluate a technology’s limitations, assess its efficacy across various groups, and require organizations to demonstrate efforts to address and rectify any bias in the technology.

Another way to remove these biases is to better regulate clinical trials. This type of regulation is not new: in 1993, Congress passed the National Institutes of Health (NIH) Revitalization Act, which requires research funded by the NIH to include more women and people of color in studies. The act has led to more women in clinical trials, but it has not increased participation among people of color: less than 2 percent of more than 10,000 NIH-funded cancer trials focused on racial or ethnic minorities, and people of color represent only 2 to 16 percent of patients in clinical trials, despite making up about 39 percent of the population. To improve this, regulatory bodies should develop and enforce stricter trial requirements: for a technology to be approved, AI developers must demonstrate that their patient panels are appropriately representative of the populations that would benefit from the treatment.
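A minimal sketch of how such a representativeness requirement might be checked computationally appears below. The 80 percent floor, the demographic shares, and the function are all invented for illustration, not drawn from any regulation.

```python
# Hypothetical version of the enrollment check suggested above: compare
# a trial's demographic shares against population benchmarks and flag
# groups below a chosen representation floor. The 0.8 floor is an
# arbitrary illustration, not a regulatory rule.
def flag_underrepresentation(enrolled, population, floor=0.8):
    """Flag groups whose trial share is below floor * population share."""
    flags = []
    for group, pop_share in population.items():
        trial_share = enrolled.get(group, 0.0)
        if trial_share < floor * pop_share:
            flags.append((group, trial_share, pop_share))
    return flags

# Invented trial demographics vs. roughly rounded population shares.
trial = {"white": 0.83, "Black": 0.06, "Hispanic": 0.05, "Asian": 0.04, "other": 0.02}
census = {"white": 0.60, "Black": 0.13, "Hispanic": 0.19, "Asian": 0.06, "other": 0.02}

for group, got, expected in flag_underrepresentation(trial, census):
    print(f"underrepresented: {group} ({got:.0%} enrolled vs. {expected:.0%} of population)")
```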

Diversity and Physician Input during Development

AI and ML products reflect the values of their developers more starkly than do other technologies. Debra Matthews, associate director for research and programs at Johns Hopkins, says AI’s “…values are baked into the system. And those values have impact at scale. The people who create these systems are a very small subset of the human population. If their values are the only ones that are being baked in, that’s a problem.” For AI and ML to produce accurate and equitable results, the algorithms must be created by diverse teams. A lack of diversity on development teams will result in a limited understanding of patients’ needs, biased algorithms, and missed opportunities for technological growth.

These technologies must be able to serve the unique needs of diverse populations and provide high-quality health care regardless of patients’ income status or race. It is therefore vital for this technology to be built by diverse teams with variegated perspectives. One policy approach to ensuring diverse development teams is to adopt diversity and inclusion mandates. These mandates would require organizations to demonstrate efforts to increase diversity and to promote a culture of communication, collaboration, and inclusion. They would also increase the technology’s transparency and enable both physicians and patients to decide whether the technology should be used in their case.

It is also important that physicians and health care professionals be active participants in AI’s development and validation. Dr. Sale notes that “Physicians and healthcare professionals must be actively involved in the development and validation of AI tools to ensure they are driven by clinical guidelines and that they enhance rather than replace human expertise… AI will greatly expedite patient care, but human judgment will still need to determine if a final care plan is appropriate and in line with a patient’s condition and expectations.” To ensure clinician involvement, the FDA should consider making such involvement a standard for the approval of AI-enabled medical tools.

Require Transparency and Patient Consent

Transparency about why, when, and how the technology is used is key to building patient and clinician trust in AI-powered processes. Transparency also helps ensure that the technology was built with health equity in mind. When AI is being used in clinical settings, both the patient and the physician should have a clear view of what is happening “under the hood.” How does AI fit within the clinical workflow? How is it arriving at its recommendations? Does the physician have the power to override or edit its outputs? It is therefore important that legislators require a “glass-box” approach: transparency must be a baseline requirement for any AI that is used in clinical settings.
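At the simplest level, a “glass-box approach” could look something like the sketch below, with invented factors and weights: the tool reports not just a score but each factor’s contribution, giving the physician something concrete to review and, if warranted, override.

```python
# Sketch of a "glass box" recommendation: a scoring rule whose
# per-factor contributions are shown alongside the result, so a
# clinician can see *why* a score is high. The factors and weights
# here are invented for illustration only.
WEIGHTS = {"age_over_65": 2.0, "prior_admission": 3.0, "abnormal_labs": 4.0}

def explain_score(patient):
    """Return the total risk score and each factor's contribution."""
    contributions = {f: WEIGHTS[f] * patient.get(f, 0) for f in WEIGHTS}
    return sum(contributions.values()), contributions

score, parts = explain_score({"age_over_65": 1, "prior_admission": 0, "abnormal_labs": 1})
print(f"risk score: {score}")
for factor, value in parts.items():
    print(f"  {factor}: +{value}")
# A clinician reviewing this output can contest any individual factor,
# which is impossible when only an opaque final score is reported.
```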

Patients must be fully informed about the extent to which AI is being used, and they must also provide informed consent for that use. Though gaining consent may become more complicated as AI grows more complex, the informed consent process is fundamental to health care. Physicians must understand the basics of the technology and inform their patients of AI’s use. This conversation, like any other conversation about treatment options, must cover the technology’s benefits, its risks (including confidentiality and data privacy risks), and the alternatives. Patients have a right to receive information, ask questions, and make well-informed decisions about their care.

Let’s Ensure AI Benefits All Patients

Biden’s EO is an important step in initiating much-needed AI regulation. However, it is just the first foray in what must become a broader legislative effort to promote health equity in AI-powered technology. Equity and fairness impact assessments, stricter clinical trial requirements, diversity and inclusion mandates, and informed consent protocols for AI’s use in clinical settings are all vital to ensuring the equitable use of this advancing technology. AI’s availability and capabilities are growing at an unprecedented pace, and it is more critical now than ever to develop comprehensive regulations that harness AI’s power and ensure the technology positively impacts those who need it the most.