Nov. 5, 2024

AT A GATHERING of health equity leaders this month, one of the buzziest topics of the day was the promise and peril of incorporating artificial intelligence into health care. AI may have serious potential to improve diagnoses and patient experiences, experts said, but it also looks like a technological Wild West.

“Racial biases, as we know, are built into the AI that is being used for patient care right now, and there’s no clear responsibility nor accountability to ensure that AI doesn’t harm diverse peoples,” said Sheila Och, chief engagement and equity officer at the Lowell Community Health Center, in introducing the Health Equity Trends Summit panel. “We need to focus on equity in every part of AI development and implementation if we are to correct the course we are on.”

A panel moderated by Rahsaan Hall, president and CEO of the Urban League of Eastern Massachusetts, considered the practical applications and equity red flags of incorporating artificial intelligence into regular practice.

“As a primary care doc, being able to sit down and chat with my machine and pump out a proposed diagnosis that may accelerate the care for my patients – that is a dream,” said Renee Crichlow, chief medical officer of the Codman Square Health Center. “That’s the dream, but that’s not really where I think this is going to have the biggest impact on our patients.”

Instead, Crichlow said, the boon will come from using AI in clinic operations rather than in patient-by-patient diagnosis, freeing up resources that can be shifted back into care.

During the patient intake process, “we’re just going to have their AI talk to our AI, and that’s a 30 percent decrease in my expenses in our clinic,” Crichlow said. “And you know what I’m going to do with that money? I’m going to put it right into patient care.”

Marzyeh Ghassemi, an associate professor at MIT in electrical engineering and computer science, walked through a possible use for AI tools: A patient comes into a hospital with trouble breathing and gets a chest X-ray, but no doctor is available to read it for two hours. The idea is to have an AI tool screen the scan so that a healthy patient can be sent home rather than wait.

Publicly available chest X-ray data already includes over 700,000 images, Ghassemi said, so her lab trained a neural network to predict “no finding” on any given chest X-ray. Simply put, they taught the machine what a scan of healthy lungs looks like, so that it could flag troubling results and, when none appear, recommend that the patient is healthy.
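For readers curious about the mechanics, the setup Ghassemi described amounts to a binary image classifier over scans. A minimal sketch in Python follows; the backbone choice, hyperparameters, and stand-in data are illustrative assumptions, not her lab’s actual pipeline.

```python
# Minimal sketch of a "no finding" chest X-ray screener: a binary
# classifier over scan images. All names and numbers here are
# illustrative stand-ins, not the lab's actual setup.
import torch
import torch.nn as nn
from torchvision import models

# DenseNet-121 is a common backbone in chest X-ray research; replace
# its 1,000-class head with a single "no finding" logit.
model = models.densenet121(weights=None)
model.classifier = nn.Linear(model.classifier.in_features, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step. labels: 1 = no finding (healthy), 0 = finding."""
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = criterion(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()

# Stand-in batch so the sketch runs end to end; real work would stream
# batches from a labeled X-ray dataset instead.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
print(train_step(images, labels))
```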

“There are many, many, many papers that you can read that will claim state-of-the-art clinical AI performing at or above humans,” Ghassemi said. The question, she said, is what happens now. Many papers, including hers, have reported high levels of predictive accuracy for AI tools, so “should we be deploying them? And to answer that question, I’m going to say people are deploying them. So it’s not really a question. This is actually happening.”

These studies and this tech can make it into hospitals through a number of channels. Maybe the Food and Drug Administration reviews a tool and approves it for broad distribution. Maybe an institutional review board signs off on it for research use. Or a hospital could deploy it as an administrative aid with neither FDA nor institutional review board approval.

Even state-of-the-art models have their biases, Ghassemi noted. In auditing the lung X-ray model, she found the program was more likely to falsely clear female patients, young patients, Black patients, and patients on Medicaid.

“If you exist in an intersectional identity – so if you’re a Black female patient or a Hispanic female patient – then you’re doing significantly worse than if you’re part of a larger aggregated group,” she said. “And you might be saying, that’s a very specific example, surely this is not a bigger problem. In fact, it is.”
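An audit like the one she described can be run after the fact with nothing more than the model’s outputs and patient demographics. A toy version, with invented data and column names, might look like this:

```python
# Toy subgroup audit: how often does the model falsely clear
# ("no finding") patients who are actually unhealthy? Data and
# column names are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "y_true": [1, 0, 0, 1, 0, 0, 1, 0],  # 1 = patient actually unhealthy
    "y_pred": [0, 0, 0, 1, 0, 1, 0, 0],  # 0 = model says "no finding"
    "race":   ["Black", "Black", "White", "White",
               "Black", "White", "Black", "White"],
    "sex":    ["F", "F", "M", "F", "M", "F", "F", "M"],
})

def false_clear_rate(g: pd.DataFrame) -> float:
    """Share of truly unhealthy patients the model labels 'no finding'."""
    sick = g[g["y_true"] == 1]
    return (sick["y_pred"] == 0).mean() if len(sick) else float("nan")

# Audit one axis at a time, then the intersections the panel warned
# can hide much worse performance.
print(df.groupby("race").apply(false_clear_rate))
print(df.groupby(["race", "sex"]).apply(false_clear_rate))
```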

Microsoft and the electronic health records company Epic have partnered with OpenAI – maker of the well-known ChatGPT – and their tools have already drafted more than 150,000 medical notes without any FDA approval or institutional review board oversight. Demonstrating potential bias in the system, when Ghassemi changed the race of a “belligerent and violent” patient from White to Black, the OpenAI model shifted from completing the note with “sent to hospital” to “sent to prison.”
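The probe itself is simple to reproduce in spirit: hold the note constant, swap a single demographic word, and compare completions. A sketch against the current OpenAI Python API follows; the prompt wording and model name are assumptions for illustration, not the actual setup from the study.

```python
# Counterfactual prompt probe: change one demographic term and compare
# how the model completes an otherwise identical note. Prompt text and
# model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TEMPLATE = ("Complete this clinical note: 'The patient is a belligerent "
            "and violent {race} man. After evaluation, he was'")

def complete(race: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in model, not the one from the study
        messages=[{"role": "user", "content": TEMPLATE.format(race=race)}],
    )
    return resp.choices[0].message.content

for race in ("White", "Black"):
    print(race, "->", complete(race))
```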

New models and algorithms must be developed, she said, with a focus on making sure they do not reproduce the biases of older ones. Current risk scores that calculate the predicted cost of treating a patient are “very biased,” she said, because they are based on biased data. Users should be aware, for instance, that training on large aggregated datasets might yield strong predictions on average while misdiagnosing certain populations more often.
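The mechanism behind that bias is easy to see in miniature: if a score is trained to predict cost, and one group’s historical spending is suppressed by barriers to care, the score will rank that group as lower risk even when underlying need is identical. A toy example with invented numbers:

```python
# Toy illustration of the cost-as-proxy problem: equal need across two
# groups, but suppressed historical spending in group B makes a
# cost-based risk score miss its high-need patients. Numbers invented.
import pandas as pd

df = pd.DataFrame({
    "group":     ["A"] * 4 + ["B"] * 4,
    "true_need": [3, 5, 7, 9] * 2,    # identical need in both groups
    "past_cost": [3, 5, 7, 9,         # group A: spending tracks need
                  1, 2, 3, 4],        # group B: spending suppressed
})

# A score trained on cost inherits the gap: at any cost threshold,
# group B's high-need patients go unflagged.
threshold = 5
flagged = df[df["past_cost"] >= threshold]
print(flagged.groupby("group").size().reindex(["A", "B"], fill_value=0))
print(df.groupby("group")["true_need"].mean())  # need is equal: 6.0 vs 6.0
```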

“It’s really important that in every domain, particularly in health, leaders of organizations make an effort to educate their staff – whether it’s the people who work at the front desk who are going to be engaging with a system like ChatGPT to help analyze notes, or doctors who are going to be using clinical algorithms in diagnosis and treatment – that people understand what AI is and what it isn’t,” said Kade Crockford, director of the Technology for Liberty Program at the ACLU of Massachusetts.

The governor’s sprawling economic development bill highlights the potential of artificial intelligence in a showpiece investment – $100 million for an applied AI Hub.

In implementing artificial intelligence in health spaces, Ghassemi recommended the health system take cues from fields with a longer record of safely integrating new technology. Aviation, for instance, requires rigorous training for pilots who use automation tools.

“If we want to move forward with ethical AI in health,” Ghassemi said, “it’s important that we consider sources of bias in data, we do evaluations more comprehensively, and we realize that while not all gaps can be corrected – nothing will be perfect, there is no perfect model that exists, there’s no perfect person that exists – if we’re very careful about how we develop new tools, they can improve health care for all and we can move forward with more equity.”
