Nov. 21, 2024


A leading misinformation expert is being accused of citing non-existent sources to defend Minnesota’s new law banning election misinformation.

Professor Jeff Hancock, founding director of the Stanford Social Media Lab, is “well-known for his research on how people use deception with technology,” according to his Stanford biography. 

At the behest of Minnesota Attorney General Keith Ellison, Hancock recently submitted an affidavit supporting new legislation that bans the use of so-called “deep fake” technology to influence an election. The law is being challenged in federal court by a conservative YouTuber and Republican state Rep. Mary Franson of Alexandria, who argue that it violates First Amendment free speech protections.

Hancock’s expert declaration in support of the deep fake law cites numerous academic works. But several of those sources do not appear to exist, and the lawyers challenging the law say they were likely fabricated by artificial intelligence software like ChatGPT.

For instance, the declaration cites a study titled “The Influence of Deepfake Videos on Political Attitudes and Behavior,” and says that it was published in the Journal of Information Technology & Politics in 2023. But no study by that name appears in that journal; academic databases don’t have any record of it existing; and the specific journal pages referenced contain two entirely different articles.

“The citation bears the hallmarks of being an artificial intelligence (AI) ‘hallucination,’ suggesting that at least the citation was generated by a large language model like ChatGPT,” attorneys for the plaintiffs write. “Plaintiffs do not know how this hallucination wound up in Hancock’s declaration, but it calls the entire document into question.”

Separately, libertarian law professor Eugene Volokh found that another citation in Hancock’s declaration, to a study allegedly titled “Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance,” does not appear to exist.

If the citations were generated by artificial intelligence software, it’s possible that other parts of Hancock’s 12-page declaration were as well. It’s unclear whether the non-existent citations were inserted by Hancock, an assistant, or some other party. Neither Hancock nor the Stanford Social Media Lab replied to repeated requests for comment. Nor did Ellison’s office.

Frank Bednarz, an attorney for the plaintiffs in the case, said that proponents of the deep fake law are arguing that, “unlike other speech online, AI-generated content supposedly cannot be countered by fact-checks and education.” 

However, he added, “by calling out the AI-generated fabrication to the court, we demonstrate that the best remedy for false speech remains true speech — not censorship.”

Clumsy use of artificial intelligence software has caused numerous embarrassments across the legal system in recent years. In 2023, for instance, two New York lawyers were sanctioned by a federal judge for submitting a brief containing citations to non-existent legal cases invented by ChatGPT.

Some of the lawyers involved in previous ChatGPT mishaps have pleaded ignorance, saying they weren’t aware of the software’s limitations or its tendency to simply make things up. But Hancock is a leading expert on technology and misinformation, making the fake citations especially embarrassing.

His declaration concludes with the following statement: “I declare under penalty of perjury that everything I have stated in this document is true and correct.”
