Sat. Nov 16th, 2024


A system that could automatically help patients skip the waitlist for a therapist’s office may be full of challenges, but with artificial intelligence it may also be on its way to becoming a reality.

Before that technology becomes more widespread, however, Utah plans to regulate it, watching for bad practices before they can become the new normal.

That’s one of the goals of the recently established Office of Artificial Intelligence Policy. After the Utah Legislature passed the Artificial Intelligence Policy Act in the 2024 general session, a dedicated team was put in place to review AI pilot programs, and mental health concerns have drawn a large share of its attention.


Zach Boyd, director of the state’s Office of Artificial Intelligence Policy, the first of its kind in the country, said Utah is proactively tracking the technology’s advancement while anticipating policy adjustments.

“We don’t want to repeat the mistakes of the past where we let some technologies like social media become so advanced before we even thought about how we could legislate them adequately,” he said.

The office is monitoring dozens of issues, Boyd said, including a deep dive on AI and mental health. Among the areas catching its attention are how AI affects data privacy, liability law, health care, education and misinformation.

“Mental health is a huge, huge concern for all kinds of reasons that don’t have to do with AI,” Boyd added. “I teach at a university and see lots of people, young people coming up, who are very concerned about their mental health, who are certainly engaging in it in different ways than the generations before them.”

The technology is still at an early stage of development. A chatbot from the National Eating Disorders Association, for example, was removed after it provided advice that could exacerbate eating disorders. 

“This technology tends to be very confident when it’s wrong,” said Jeremy Kendrick, an associate professor in the University of Utah’s Department of Psychiatry. The U. has been working to advance AI in behavioral science, including attempts to use the technology to predict mental health issues, or to analyze facial expressions and gestures to reach a diagnosis.


Kendrick, both a clinician and an informaticist, said the university is “in a hopeful and cautiously kind of optimistic phase of really reviewing these technologies,” since they may be a little young, unpredictable and sometimes biased. Still, they have the potential to expand access to health care in a state with a shortage of providers.

“Waitlists to get in to see us are increasingly long. We don’t have enough providers to see all the patients that need to be seen. And so when there are brief therapeutic modalities, or things like that, that can help patients while they are waiting to be seen by a clinician,” Kendrick said. “That’s where these technologies, I think, show a lot of promise.”

He said he hopes to see a validated product that could help academics and clinicians understand which patients need to be seen sooner, or that could personalize a patient’s care by scanning large datasets for associations that may not be apparent.

Potential AI regulation

Businesses are hypothesizing that AI could become a scalable, lower-cost and less intimidating option for those seeking mental health care, Boyd said, and it could even outperform humans at certain tasks. But the specifics are still unclear, and given the current demand in health care, no one should worry about AI replacing jobs anytime soon, he said.

What the best companies are doing now, Boyd said, is narrowing the scope of their AI training: some focus on prevention, others on only a few assessment or diagnostic tasks.

The Office of Artificial Intelligence Policy recommended to the Legislature potential actions to take in the near term, including enhanced consumer protections, particularly around data privacy. 

“We certainly have seen people using engagement mechanisms that are not appropriate,” Boyd said. “Or some companies blurring the line where they’re claiming to help with mental health, but really their thoughts are deviating into things that are not really helpful for people.”


The office is also developing recommendations clarifying how licensed professionals may use these technologies in their own practice, and advising on best practices to avoid liability in case a product “is deployed in a manner that is negligent.” The aim is to impose strong penalties while still encouraging innovation.

There’s already an enforcement mechanism through the Utah Division of Professional Licensing, which sits under the same umbrella as Boyd’s office, the state Department of Commerce. Existing medical malpractice laws could also apply in these cases.

“We have thought a lot about basically what would be analogous to (humans). With a human, we would require them to have licensure and training and things like these. We don’t think the state is ready, nor is it wise at this point to think about licensing chatbots,” Boyd said. “In this area, there’s all kinds of technical and practical barriers to this, and frankly, it would interfere with emerging technologies.”  

Best practices

The office also hopes companies will identify ways the technology could fail before deploying it, and that they will emphasize informed consent, so patients know the risks and benefits of engaging with the technology.

The programs should also be informed by best clinical practices, including accountability to a third party for ethical compliance.

From a research standpoint, Kendrick said, the University of Utah is working on training models with research data that sits behind paywalls, including “very specific textbooks about a specific condition or medical area of expertise.” That, he said, is what would set the university’s work apart from products such as ChatGPT and Google Gemini.

In fact, the university has barred the use of any external models, such as OpenAI’s, on the chance that protected health information could sneak into them.

But as of now, the technology remains expensive to build and not especially reliable if left to its own devices, Kendrick said.

“It’s going to be the clinician plus the AI interfacing with the patient, as opposed to just the patient talking to a chatbot,” Kendrick said. “Now, I’m biased, because that’s in mental health, right? In mental health, the therapeutic interaction between a provider and a patient is huge, and AI just won’t be able to replace that anytime in the future.”

