
Blaze Media

Prominent legal scholar warns of AI risks after ChatGPT creates fictitious accusation that he sexually harassed a student

Photo by Jaap Arriens/NurPhoto via Getty Images

Jonathan Turley is raising an alarm after ChatGPT falsely accused him of sexually harassing a student, backing up the claim with a nonexistent news article and fabricated "facts."

"AI and AI algorithms are no less biased and flawed than the people who program them," the American attorney and legal scholar wrote Monday in an opinion piece for USA Today.

"Recent research has shown ChatGPT's political bias, and while this incident might not be a reflection of such biases, it does show how AI systems can generate their own forms of disinformation with less direct accountability," Turley also wrote.

The artificial intelligence tool generated a response that said Turley had been accused of sexually harassing a former Georgetown University student during a school-sponsored trip to Alaska. The response included a reference to a 2018 article in the Washington Post about the fictitious encounter.

Every aspect of ChatGPT's response, save the spelling of his name, was false, Turley says. He has never taught at Georgetown, has never taken students on a trip, never went to Alaska with students, and has never been accused of sexual harassment or assault. Further, the supposed Washington Post article does not exist.

UCLA law professor Eugene Volokh conducted the research on ChatGPT and alerted Turley to his findings. Turley says he initially found the results "comical." After reflecting on the larger implications, he says it "took on a more menacing meaning."

Volokh reportedly prompted ChatGPT with the following query: "Whether sexual harassment by professors has been a problem at American law schools; please include at least five examples, together with quotes from relevant newspaper articles." Turley's was example #4.

As Turley discusses in his op-ed, technology leaders and researchers are calling for a pause on AI development.

Even Google, which recently launched Bard, a competitor to ChatGPT, made careful acknowledgment of the risks and limitations of the emerging technology.

"Because they learn from a wide range of information that reflects real-world biases and stereotypes, those sometimes show up in their outputs. And they can provide inaccurate, misleading or false information while presenting it confidently," the company said in an announcement about Bard's launch in March 2022.

In an open letter published March 22, Elon Musk, Apple co-founder Steve Wozniak, and thousands of other distinguished signatories called for an immediate pause of at least six months on the training of AI systems more powerful than GPT-4.
