December 4, 2024


A Stanford misinformation expert has admitted he used artificial intelligence to draft a court document that contained multiple fake citations about AI.

Stanford’s Jeff Hancock submitted the document as an expert declaration in a case involving a new Minnesota law that makes it illegal to use AI to mislead voters before an election. Lawyers from the Hamilton Lincoln Law Institute and the Upper Midwest Law Center, which are challenging the law on First Amendment grounds, noticed the fake citations several weeks ago and petitioned the judge to throw out the declaration.

Hancock billed the state of Minnesota $600 an hour for his services. The Attorney General’s Office said in a new filing that “Professor Hancock believes that the AI-hallucinated citations likely occurred when he was using ChatGPT-4o to assist with the drafting of the declaration,” and that he “did not intend to mislead the Court or counsel by including the AI-hallucinated citations in his declaration.”

The AG’s office also writes that it was not aware of the fake citations until the opposing lawyers filed their motion. The office has asked the judge to allow Hancock to resubmit his declaration with corrected citations.

In a separate filing, Hancock argues that using AI to draft documents is a widespread practice. He notes that generative AI tools are also being incorporated into programs like Microsoft Word and Gmail, and says that ChatGPT is “web-based and widely used by academics and students as a research and drafting tool.”

Earlier this year, a New York court handling wills and estates ruled that lawyers have “an affirmative duty to disclose the use of artificial intelligence” in expert opinions, and it tossed out an expert’s declaration upon learning that he had used Microsoft’s Copilot to check the math in it.

In other cases, judges have sanctioned lawyers for submitting AI-generated briefs containing false citations.

Hancock says he used GPT-4o, the model that powers ChatGPT, to survey the academic literature on deepfakes and to draft much of the substance of his declaration. He describes how he prompted the program to generate paragraphs detailing various arguments about AI, and says it likely misinterpreted notes he left for himself to add citations later.

“I did not mean for GPT-4o to insert a citation,” he writes, “but in the cut and paste from MS Word to GPT-4o, GPT-4o must have interpreted my note to myself as a command.”

Hancock is a leading national expert on misinformation and technology. In 2012 he gave a widely viewed TED talk titled “The future of lying,” and since the release of ChatGPT in 2022 he has published at least five papers on AI and communication, including “Working with AI to persuade” and “Generative AI are more truth-biased than humans.”

Hancock, who has served as an expert witness in at least a dozen other court cases, did not respond to questions about whether he used AI in any of those cases, the number of hours he has billed the AG’s office so far, or whether the AG’s office knew beforehand that he would be using AI to compose his declaration.

A representative for the AG’s office said it would not comment beyond the court filings.

Frank Bednarz, a lawyer with the Hamilton Lincoln Law Institute, said that Minnesota Attorney General Keith Ellison’s “decision not to retract a report they’ve acknowledged contains fabrications seems problematic given the professional ethical obligation attorneys have to the court.”
