Stanford Professor Accused of Fabricating Testimony Using AI in Case Against Conservative YouTuber
A Stanford University professor who specializes in misinformation is facing allegations that he used artificial intelligence (AI) to fabricate testimony in a politically charged legal case involving a conservative YouTuber. The controversy raises pointed questions about the integrity of expert testimony in the age of generative AI.
Background on the Case
Jeff Hancock, a communications professor and the founder of Stanford’s Social Media Lab, provided an expert declaration in a case centered on Minnesota’s recent ban on political deepfakes. The plaintiffs, led by conservative YouTuber Christopher Kohls, argue that the law undermines free speech rights. Minnesota Attorney General Keith Ellison is defending the law and relies heavily on Hancock’s testimony in his argument.
Hancock’s Credentials and Research
Hancock is well regarded for his research on how technology can facilitate deception, from text messages to online reviews. However, the integrity of his recent testimony has come into question. Lawyers for the plaintiffs have asked the Minnesota federal judge to exclude Hancock’s declaration, claiming it references a study that does not exist.
Allegations of a Fabricated Study
The plaintiffs’ legal team pointed out that Hancock cited a study titled “The Influence of Deepfake Videos on Political Attitudes and Behavior,” allegedly published in the Journal of Information Technology & Politics. While the journal itself is legitimate, the lawyers assert that no such study exists.
In a detailed 36-page memo, they argue, “The Declaration of Prof. Jeff Hancock cites a study that does not exist. No article by that title exists.” This raises concerns that the citation may have been a “hallucination” generated by an AI language model, such as ChatGPT.
Lack of Methodology and Credibility
The plaintiffs’ attorneys expressed further skepticism, arguing that many of Hancock’s conclusions lack methodological rigor and appear to rest solely on expert opinion. They noted that Hancock could have cited legitimate studies but instead relied on a fictional citation, calling into question the overall quality and reliability of his declaration.
Extensive Searches Reveal No Evidence
The legal memorandum describes the extensive searches conducted to locate the cited study, all of which came up empty. The lawyers stated, “The title of the alleged article does not appear anywhere on the internet as indexed by Google and Bing.” Even a search of Google Scholar, a database dedicated to academic publications, found nothing matching Hancock’s description.
The attorneys conclude that if any part of Hancock’s declaration is fabricated, the entire document is unreliable for court consideration. They argued, “The declaration of Prof. Hancock should be excluded in its entirety because at least some of it is based on fabricated material likely generated by an AI model.”
Next Steps and Implications
The implications of this case could be significant, not only for the individuals involved but also for the broader debate over the role of AI in legal proceedings. With allegations of fabrication hanging over Hancock’s testimony, the court may need to determine how the disputed citation originated, and further action could follow depending on what it finds.
In light of these developments, Fox News Digital has reached out to Attorney General Ellison, Professor Hancock, and Stanford University for comments regarding this unfolding situation.