In summary
The legislation would have required tech companies to test AI for harm to society. It attracted opposition from numerous members of Congress and major AI companies including Google, Meta, and OpenAI.
California Gov. Gavin Newsom today vetoed the most ambitious — and contentious — bill approved by the Legislature this year to regulate artificial intelligence.
The legislation, Senate Bill 1047, would have required testing of artificial intelligence models to determine whether they are likely to lead to mass death or enable attacks on public infrastructure or severe cyberattacks. It also would have extended protections to whistleblowers and established a public cloud for developing AI that’s beneficial to society. It applied to models that cost more than $100 million to develop or that require more than a certain quantity of computing power to train.
In announcing the veto, Newsom criticized the bill on several counts, including that it risks “curtailing the very innovation that fuels advancement in favor of the public good.” The governor also said it’s a mistake to regulate only the largest AI models, since smaller ones may be the most dangerous; that the bill regulates large models regardless of the context in which they are deployed, such as whether they are used in high-risk environments; and that the measure was written without an empirical analysis of where AI is headed.
“A California-only approach may well be warranted — especially absent federal action by Congress — but it must be based on empirical evidence and science,” he wrote. Newsom added that the National Institute of Standards and Technology is developing guidance about AI risks informed by such “evidence-based approaches.”
The bill’s sponsor, San Francisco Democratic Sen. Scott Wiener, called the veto “a missed opportunity for California to once again lead on innovative tech regulation” and “a setback for everyone who believes in oversight of massive corporations that are making critical decisions” using AI, including decisions that affect public safety.
Supporters of the bill included AI startup Anthropic, the Center for AI Safety, tech equity nonprofit Encode Justice, billionaire Elon Musk, the National Organization for Women, and whistleblowers who worked at companies such as ChatGPT creator OpenAI. Opponents such as Google, Meta and OpenAI argued that the bill would harm the California economy and the AI industry. Numerous members of the California congressional delegation asked Newsom to veto the bill even before the California Legislature voted overwhelmingly to pass it last month.
A supporter of the bill, Teri Olle of the advocacy group Economic Security California, said a veto by the governor means “we forfeit the opportunity to lead.” Speaking to CalMatters ahead of the decision, she also said that by signing the legislation, Newsom could have led at a critical moment in the story of AI.
Alondra Nelson, former director of the White House Office of Science and Technology Policy, who had mixed feelings on the measure, said vetoing a bill that requires testing before deployment has implications for global AI governance, given California’s large share of the AI market.
The California Chamber of Commerce praised the veto, saying Wiener’s bill “would have stifled AI innovation, putting California’s place as the global hub of innovation at tremendous risk.”
Speaking at a generative AI symposium held in May at his request, Newsom had said it’s important to respond to calls by people in the AI industry to regulate the technology — but also warned that he didn’t want to overregulate an important industry for California.
The state is home to a majority of the top 50 AI companies in the world, according to Forbes, and Silicon Valley-based companies receive more AI investment dollars than any other region of the world, according to tech intelligence firm Crunchbase.
People in the AI industry are not of one mind about SB 1047 — mirroring a debate this year over a measure to prohibit the weaponization of robots. That bill was co-sponsored by a leading robot maker, Boston Dynamics. Newsom vetoed it as well, despite support within the industry, saying police needed an exemption that the legislation did not provide.
Newsom did sign into law roughly a dozen bills regulating AI, including one that outlaws social media notifications to minors during school hours, and others to protect voters from deepfakes and creatives from unauthorized digital replicas of their likenesses. He also signed bills that require businesses to share information about the data they use to train generative AI models, and to supply users with tools to test whether media was made by a human or AI.
All told, California lawmakers passed more than 20 bills to regulate artificial intelligence this year.
A majority of voters supported Senate Bill 1047, according to polling by the Artificial Intelligence Policy Institute, as did a number of powerful groups, including the Service Employees International Union, the Latino Community Foundation, and SAG-AFTRA, alongside more than 100 prominent Hollywood stars such as Pedro Pascal and Ava DuVernay.
Newsom’s veto of SB 1047 places a roadblock in the way of aligning California regulation with the European Union’s AI Act, approved earlier this year and considered the most comprehensive effort to date to regulate AI. A European Union representative told CalMatters earlier this year that SB 1047, together with bills that would have required watermarks on AI-generated imagery and protected people from automated discrimination, accounted for the majority of the AI Act’s provisions. The latter two bills failed to pass the Legislature.
Regardless of the governor’s decision, the debate over Wiener’s bill got a lot of people engaged with AI policy who may not traditionally think of themselves as part of that conversation, said Nelson, who previously helped craft the Blueprint for an AI Bill of Rights for the White House and more recently an AI governance framework for the United Nations.
Nelson found the California bill lacking in a few ways: It does not address civil rights issues raised by AI or the need to protect people from AI in the workplace. But it does require that companies test AI systems before deployment, as the White House’s Blueprint for an AI Bill of Rights advocates.
But she said she hopes the conversations and coalitions formed to support SB 1047 will help shape strategies for future AI legislation. San Ramon Democratic Assemblymember Rebecca Bauer-Kahan, who worked with Nelson to draft a bill to protect Californians from AI-fueled discrimination, said she plans to reintroduce a similar measure next year. She also hopes to see continued work to pass regulations like SB 1047, and alignment between the AI Civil Rights Act introduced in Congress this week and her reintroduced state bill, which previously failed in the Legislature, to defend Californians from automated discrimination.
“California has been one of the few places in the U.S. where we are still demonstrating that we can and are willing to govern technology,” she said. “So even if I don’t agree with everything that’s in the bill, I think it’s really important for democracy that state and federal legislatures stay in the game of governing new and emerging technology.”