
An anti-deepfake law might have been generated by AI


Every so often, the tech industry produces news stories so surreal they border on comedy. With artificial intelligence now woven into everyday life, it was only a matter of time before a story like this emerged: a Minnesota law targeting realistic AI-generated content, better known as "deepfakes," may itself have been written with the help of an AI.

An AI could have written Minnesota’s “anti-deepfake law”

The law in question is titled "Use of Deep Fake Technology to Influence An Election." Ironically, it is being challenged because a document filed in its defense shows traces of generative AI. The challenge does not claim that the law's text was drafted with AI; rather, it points out that the supporting document cites studies that do not actually exist.

It's possible that the studies mentioned are the product of AI "hallucinations." In artificial intelligence, a hallucination is a piece of output that is incorrect or outright invented by the service. AI chatbot developers work hard to keep hallucinations to a minimum in order to deliver the most accurate output possible, but a small percentage of inaccurate responses still slips through.

Citations to non-existent sources could be the product of AI hallucinations

As reported by The Minnesota Reformer, Minnesota Attorney General Keith Ellison asked Jeff Hancock, founding director of the Stanford Social Media Lab, for a declaration in support of the law. However, it soon became apparent that non-existent studies appeared among its citations. The document cited studies such as "The Influence of Deepfake Videos on Political Attitudes and Behavior," supposedly published in the Journal of Information Technology & Politics. There is no record of such a study.

Another cited study, "Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance," also does not exist. "The citation bears the hallmarks of being an artificial intelligence (AI) 'hallucination,' suggesting that at least the citation was generated by a large language model like ChatGPT," argue lawyers for Mary Franson, a Minnesota state representative, and Mr. Reagan, a conservative YouTuber.

"Plaintiffs do not know how this hallucination wound up in Hancock's declaration, but it calls the entire document into question, especially when much of the commentary contains no methodology or analytic logic whatsoever," the filing says.

