Wed. Mar 19th, 2025
A digital screen at a technology conference booth displays the words “AI is everywhere.”

In summary

A group of academic luminaries formed by Gov. Gavin Newsom last year is earning praise for its AI policy recommendations.

A group of artificial intelligence luminaries convened by Gov. Gavin Newsom issued what is expected to be an influential set of recommendations Tuesday, urging state lawmakers to bring greater transparency to how AI models are built and operated.

The proposed steps would advance both innovation and public trust, the academic experts write in their draft report, helping the state balance experimentation with guardrails against AI harms.

Newsom formed the group last fall as he vetoed a prominent bill to regulate AI, arguing the measure would curtail innovation. 

Known officially as the Joint California Policy Working Group on AI Frontier Models, the group suggested that lawmakers: 

  • Encourage companies building advanced AI models to disclose risks and vulnerabilities to developers making their own versions of the models.
  • Evaluate advanced AI models using an independent, outside party.
  • Consider enacting rules to protect whistleblowers.
  • Evaluate the possible need for a system to inform the government when private companies develop AI with dangerous capabilities.

Scott Wiener, the Democratic senator behind the bill Newsom vetoed, praised the report and said it may influence a scaled-down version of his measure, known as Senate Bill 53.

“The recommendations in this report strike a thoughtful balance between the need for safeguards and the need to support innovation,” he wrote in a statement shared with CalMatters. “My office is considering which recommendations could be incorporated into SB 53, and I invite all relevant stakeholders to engage with us in that process.”

The draft report does not argue for or against any particular piece of legislation currently under consideration, but it may heavily influence the 30 bills to regulate artificial intelligence now before the Legislature. Those measures include roughly half a dozen bills addressing concerns that AI drives up the cost of goods, and others aiming to mitigate the technology’s effects on the environment, public health, and energy rates. Another bill would require businesses to disclose when AI is used to make important decisions about people’s lives. Business groups lobbied heavily against such regulations last session.

The draft report highlighted AI rules on the books in places like Brazil, China, and the European Union. It stated that California’s rules will play a unique and powerful role due to its position as the home of many major AI companies and research institutions. 

“Without proper safeguards…powerful AI could induce severe and, in some cases, potentially irreversible harms,” the draft report reads. “Just as California’s technology leads innovation, its governance can also set a trailblazing example with worldwide impact.”

Members of the public have until April 8 to comment and share feedback before the recommendations are expected to be finalized this summer.

State Sen. Scott Wiener speaks at a press conference in San Francisco on Jan. 21, 2022. Photo by Karl Mondon, Bay Area News Group

Authors of the report include Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace; Jennifer Tour Chayes, dean of the UC Berkeley College of Computing, Data Science, and Society; and Fei-Fei Li, former chief AI scientist at Google Cloud and creator of the pioneering AI project ImageNet. Li is commonly referred to as a godmother of AI, and her perspective has been sought by members of Congress and the Biden administration.

The group focused on “frontier models,” the most cutting-edge forms of artificial intelligence, such as OpenAI’s ChatGPT, which dates from late 2022, and R1, a newer model from Chinese company DeepSeek. California-based companies including Anthropic, Google, and xAI are also developing advanced general-purpose AI systems. 

Frontier models promise improved efficiency, as when they help teachers grade writing assignments, but they also carry risks: scammers can exploit them, and they can enable the spread of disinformation and perpetuate bias. Hype and fear surrounding frontier models have led members of the public to ask whether AI could play a role in human extinction.

The draft report is one of a number of documents produced by the state of California in recent years, including one about the benefits and risks of generative AI in late 2023 and another about the impact of generative AI on vulnerable communities in late 2024. Neither report was mentioned by the working group.

A representative of tech business interests praised the report. Megan Stokes, state policy director for the Computer & Communications Industry Association, said the working group took great care to survey existing laws that protect Californians from potential AI harms and to review existing regulatory authorities, helping to ensure that new regulations are not duplicative. Stokes’ group opposes a bill that would require developers to disclose use of a creator’s copyrighted material before training an AI model. Copyright infringement is a current risk acknowledged by the working group. 

Jonathan Mehta Stein, chair of advocacy group California Initiative for Technology and Democracy, said that while the working group’s draft report contains policy recommendations, it primarily urges that California wait and see — leaving lawmakers with little direction on best policies to pursue. That conclusion risks stunting the momentum of current legislation aimed at tackling known, documented harms, he added. His organization, which cosponsored three bills last year to protect voters from AI, wants the working group to add more actionable legislative recommendations into its final report. 

“If California wants to lead on AI governance and on building a digital democracy that works for everyone, it must act and act now,” Stein said in a written statement. “California sitting on our hands because industry is uncomfortable with regulation does not mean industry will be free of regulation. Regulation is coming. Inaction by California simply means other states will pass regulations and set the terms of AI governance, and California will cede its leadership.”

“There’s something for people on both sides.”

Koji Flynn-Do, co-founder, Secure AI Project

At the rate the technology is changing, the draft report is right to point out that the window to regulate AI may be closing soon, said Koji Flynn-Do, co-founder of the Secure AI Project, a group established in December 2024 that supported the Wiener AI bill Newsom vetoed. He said it’s heartening to see the report focus on safety and security protocols and on testing to mitigate risks, alongside a letter from employees of frontier AI companies calling for whistleblower protections.

“People will say that it goes too far, some people say that it doesn’t go far enough, and I think there’s something for people on both sides,” he said.

The draft report “seems like progress to me,” said Daniel Kokotajlo, who also endorsed the AI safety bill proposed by Wiener last year. He’s a signatory of a letter written by current and former employees of companies building frontier models that calls for whistleblower protections and an adverse event reporting system. The righttowarn.ai letter is cited by the working group in the draft report.

“I want to see more specific proposals, like these companies should do this and these regulations should be passed, but it’s still progress to be talking about these things at all.”