Georgia lawmakers are studying the way AI could help doctors treat patients as well as how to head off concerns about things like privacy and bias. Image created by Midjourney AI
Some of your doctor’s intelligence might not be inside his or her brain.
That’s what members of Georgia House and Senate committees on artificial intelligence learned Friday in a hearing that featured expert testimony on topics including AI’s role in hospitals across the state.
Doctors are already taking advantage of state-of-the-art technology like AI-enabled sensors that can predict, up to 45 seconds in advance, when a high-risk patient is about to fall, allowing nurses to rush in and prevent an injury.
AI will soon help doctors by quickly reviewing CAT scans, wrangling large amounts of data or watching over procedures to make sure no surgical tools get sewn up inside a patient. The new tech is already streamlining unexciting but important tasks like updating patient paperwork, said Dr. Alistair Erskine, Chief Information and Digital Officer at Emory Healthcare and Vice President for Digital Health at Emory University.
“We have now, live, 2,000 clinicians that are using something called ambient listening,” he said. “Ambient listening, if you don’t know, is a technology that obviates the need for a clinician to write a clinical note from scratch during every outpatient visit. The doctor walks into the room, asks permission of the patient if it’s OK to record the session, then puts down their iPhone and then has a conversation with the patient, ignoring the computer, the keyboard, the mouse, is able to pay attention to what’s going on in the visit.”
The doctor’s phone captures the entire conversation and converts it into a clinical note, which can be added to the patient’s chart within minutes.
“So there’s no bias in terms of recalling what happened in that visit,” Erskine said. “If you’re seeing 60 patients a day, which happens with some of our urologists and orthopedic doctors, you can imagine the cognitive load associated with trying to remember who’s doing what, and this does this automatically.”
But big changes can bring big problems, and lawmakers asked about issues that they may need to address in coming sessions.
Acworth Republican Sen. Ed Setzler asked what would happen to people who, for privacy reasons, did not want AI to be part of their treatment.
“As health care progresses to a place where the staffing levels, the assumption is, I mean, the praxis of medicine in a hospital setting becomes dependent on having these kind of AI models watching somebody in a room,” he said. “Are people going to really, truly have the ability to say, ‘I don’t want that. I don’t want to be monitored. I don’t want to have this looking at me. I don’t want to have an audio recording of my delicate conversation with my doctor.’”
Erskine said current law protecting patient privacy still applies to medical AI, but European countries and states including California have policies stating that people have the right to have their records erased or not to have records made at all.
“There’s an opportunity in policy to do that now, the caveat is that if somebody has a life critical condition and they turn off all the tools that are designed to be able to help manage that critical condition, they may take a hit in terms of what’s recommended, the doctor may not know something which is life-saving now,” he said. “That’s their choice.”
Atlanta Democratic Rep. Dar’shun Kendrick asked about cultural or ethnic bias arising from AI algorithms.
That’s a problem Emory health providers have already encountered, Erskine said.
“I’ll give you an example relative to kidney care, there’s something called the GFR, which is the glomerular filtration rate,” he said. “There were two different numbers given dependent upon whether you were Black or whether you were white, and there’s no physiologic reason for that to occur. And so what we found is when we get some of the algorithms from our vendors that have those kinds of elements in them, we can actually unpack them and we can actually remove some parameters and then repackage that AI and use that instead of the one that’s actually offered, so that’s called deracializing the algorithms.”
Erskine said health providers can analyze patient outcomes and look for opportunities to deracialize the algorithms, but he acknowledged the process is not perfect.
The House and Senate AI committees have been meeting since this year’s legislative session came to an end, discussing the potential impact of AI on topics from traffic and infrastructure to business and education, and what policies the state could put into place to take advantage of the new technology.
Next month, the committees are scheduled to present recommendations for legislation to pursue once the General Assembly gavels in again in January.
This year, a bill backed by Republican leadership to restrict AI-generated deep fakes from being used in political advertisements ahead of an election passed the House but stalled in the Senate after some lawmakers expressed free speech concerns. Similar legislation could make a comeback this year.
Senate AI Committee Chairman John Albers, a Roswell Republican who also chairs the Senate Public Safety Committee, has expressed interest in finding ways for AI to help first responders, including by speeding up 911 dispatch and helping solve cyber crimes. Lawmakers have also called for laws to prevent criminals from using AI deepfakes for scams or making illicit deepfakes of innocent people, especially minors.
Speaking at the end of Friday’s meeting, Albers predicted a busy session on the AI front.
“Knowing that artificial intelligence is changing literally on a daily basis, this is not the end, it is not even the beginning of the end, as Winston Churchill said. We have a lot of work ahead of us,” he said.