A person using an AI chatbot. An Alabama task force in Montgomery Thursday reviewed preliminary AI policy recommendations, including frameworks, risk management, and AI literacy ahead of a report due in November. (Getty Images)
A state task force formed to regulate the use of artificial intelligence (AI) in state government Thursday reviewed preliminary policy recommendations ahead of a report due in November.
Task force members in four separate working groups outlined possible ways to better utilize AI in government, including definitions for AI policy frameworks, improved risk management practices and enhanced AI literacy across state agencies.
“I think there’s a lot of good ideas and frameworks. Obviously, you can’t solve everything through this working group, but there’s good, good strawman templates that we can provide for agencies,” said task force chairman Daniel Urquhart, secretary of the Alabama Office of Information Technology.
Members of the task force often discussed Generative AI, or GenAI, a type of artificial intelligence that can create new content, such as text, images or audio, according to McKinsey, a management consulting firm.
The policies and governance working group proposed developing a comprehensive set of definitions to standardize AI terminology within government. Mike Owens, who presented on behalf of the group, emphasized the importance of a risk management process designed for AI, proposing an oversight board that would ensure agency AI initiatives align with state regulations and promote compliance.
“Whether it be security systems administration or app development, have a corroboration of those skill sets in the assurance board,” Owens said.
Laura Crowler, also in the policies and governance working group, said they would not be proposing statutory changes but policy recommendations.
The workforce, education and training group discussed efforts to build AI knowledge among state employees. Roger Bowman proposed an introductory AI course developed in partnership with academic institutions, suggesting professors who work in the field at the University of Alabama and Auburn University.
“We’ve approached that in two ways: first of all, we have looked at some other introductory AI training courses that other states have put together … the other area where we’re approaching educating the workforce is we’re looking for an option to spotlight some GenAI functionality and publish that somewhere where it can be consumed and users can test it and see how the capabilities work,” Bowman said.
A working group evaluating responsible data management proposed guidelines on data ownership and on classifying data for use with AI systems in ways that minimize the risk of errors or bias. The group also proposed that agencies develop a reference guide to ensure the ethical handling of sensitive or restricted information.
“We want to help make sure that when GenAI systems are used, citizen data, if used with GenAI data, is protected,” said Aaron Wright, who presented for the working group.
Willie Fields, presenting for the responsible and ethical use of AI group, discussed the need for privacy, bias monitoring and accountability in AI use. Fields said the group would not be proposing legislation, instead focusing on policy recommendations. He also pointed to accountability, particularly in determining who is responsible for AI-driven decisions.
“Who is responsible for the data that comes from these GenAI systems? Ultimately, it’s the head of the agency, but we need to help that person be prepared to sign off or authorize the system that’s going production,” Fields said.
The task force is expected to finalize its recommendations at its next meeting in late October, with the report to be presented to Gov. Kay Ivey in November.