![An older man in a suit speaks at a desk with several papers. A woman sits beside him, listening. There are drinks and a tablet on the table.](https://vtdigger.org/wp-content/uploads/2025/02/AI-bill-2-20250204-1024x681.jpg)
MONTPELIER — Vermont could join close to two dozen other states this year in regulating how certain artificially generated content, often called “deepfakes,” can be used online in the leadup to elections. But lawmakers face key questions over whether, or to what extent, the proposal could hinder people’s rights to free speech.
A new bill sponsored by a tripartisan group of state senators, S.23, would require some people who publish “synthetic media” that makes it appear as though someone did or said something, even when they really didn’t, to also publish a disclosure that the media is fake. To be subject to the bill, a person would either have to know, or be in a position where they should have known, the media was misleading — and, critically, be planning to publish it within 90 days of a local, state or federal election.
The bill would not actually prevent people from publishing fake content, no matter how convincing it is. But S.23 calls for fines, ranging from $1,000 for a first offense to up to $15,000 for repeat offenses, for failing to publish a disclosure when it’s required.
Perhaps the most notable example of a “deepfake” effort to influence a recent U.S. campaign came last year in New Hampshire, when a man placed fake calls to thousands of people ahead of that state’s primary election, using an artificially generated voice to make it sound as if then-President Joe Biden was discouraging Democrats from voting.
![People in a room watching a presentation on a screen about a "deepfake" robocaller imitating President Biden's voice, advising people to disregard the message.](https://vtdigger.org/wp-content/uploads/2025/02/AI-bill-1-20250204-1200x798.jpg)
Addison County Democratic Sen. Ruth Hardy, the lead sponsor of S.23, said she wasn’t aware of any similar situations in Vermont — but pointed to the fake Biden call in New Hampshire as a reason she introduced the bill.
“That was just, really, egregious,” Hardy said in an interview Wednesday. The bill, she added, is “an important protection to make sure that our elections are honest and fair — and that there’s some oversight of this emerging technology.”
Hardy noted, too, that Vermont would be playing catch-up. Twenty-one other states had, as of Wednesday, passed similar regulations on the use of deepfakes in electioneering, according to the group Public Citizen, a national consumer advocacy nonprofit.
Ilana Beller, a lobbyist for Public Citizen advocating for S.23, said she was also aware of recent cases where deepfakes were used in campaigns for local office in parts of the country. Most of the other states’ laws, like the Vermont proposal, have a disclosure requirement for fake campaign content, she said — rather than a flat-out ban — in an effort to limit the laws’ potential impacts on content creators’ freedom of speech.
Hardy had the same goal in avoiding a blanket ban in her bill, the senator said.
Beller told the Senate Government Operations Committee on Tuesday that there is at least one case of a law regulating deepfakes that is being stymied by a legal challenge. A federal judge in October blocked California’s enforcement of that state’s recent law banning certain deepfake videos in the leadup to elections, siding with a conservative commentator who had challenged the legislation under the First Amendment.
While Vermont’s proposal does not include a ban, Rik Sehgal — an attorney with Vermont’s Office of Legislative Counsel — cautioned the committee that the judge also found a disclosure requirement in California’s law to be overly “burdensome.”
Sehgal told the committee that political candidates are generally allowed to say what they want in advertisements, even if it’s not true. A key exception, he noted, is when those statements cause harm to someone else, which can constitute defamation.
“It seems like it would be more effective if it was just a ban,” said Sen. Brian Collamore, R-Rutland, the government operations panel chair, during a committee hearing on the bill last week. “But that might be rising to the level of a further constitutional irritant.”
At least one national free speech advocacy group — the Foundation for Individual Rights and Expression — has filed testimony with the committee opposing the bill on grounds that it limits video creators’ rights.
“The government bears an especially high burden to prove by more than speculation that regulation of political speech is necessary,” wrote John Coleman, an attorney for the organization, in a memo. But, he continued, “the evidence does not presently demonstrate that ‘deepfakes’ have created an actual problem that would justify such heavy-handed regulation.”
Hardy’s bill includes an exemption from the disclosure requirement for media that is “satire or parody.” It also gives news-gathering organizations somewhat more leeway when publishing deceptive content, and exempts TV broadcasters from having to include a disclosure on campaign ads that they’re required to air by law.
The latter is important because of federal telecommunications rules guaranteeing candidates equal airtime and limiting broadcasters’ ability to alter political ads, even if they contain falsehoods, said Wendy Mays, executive director of the Vermont Association of Broadcasters, during Tuesday’s hearing.
The Government Operations Committee is slated to hear more testimony on the bill later this week.
Read the story on VTDigger here: Senate bill would regulate use of ‘deepfakes’ in leadup to Vermont elections.