Real Life Examples of Risks Associated With AI Hallucinations

The phenomenon of AI hallucinations (also known as confabulation, delusion, or bullshitting) is not going away anytime soon.

According to a May 2025 Forbes story, recent AI tool releases from OpenAI and DeepSeek are experiencing higher rates of hallucination. OpenAI’s own testing found that its o3 and o4-mini models hallucinated 30–50% of the time, and according to AI research firm Vectara, DeepSeek’s R1 reasoning model “hallucinates much more than DeepSeek’s traditional AI models.”

From the beginning, there have been concerns about LLM chatbots and deep research offerings producing hallucinations. These concerns have rightly focused on the risks in areas such as law, medicine, and business, where decisions based on bad information can have dire consequences. Here are real-world examples that drive home the damage caused by unchecked hallucinations.

  • Joe Pierre, MD, in a terrific article on AI hallucinations in medicine and mental health, highlights these examples: “Back in 2023, an AI chatbot offered dieting advice for someone struggling with a restrictive eating disorder. Other AI chatbots purporting to offer psychotherapy have resulted in patient suicides. And just last week [as of June 10, 2025], it was reported that a user who was struggling with addiction and using a ‘therapy chatbot’ for support was told by the AI app to take a ‘small hit of methamphetamine to get through [the] week.’”

  • Damien Charlotin has created a valuable database that tracks legal decisions in cases where “generative AI produced hallucinated content.” It is continuously updated; as of June 10, 2025, 154 cases had been identified. For each case, the following information is provided: court/jurisdiction, date, party using AI, AI tool, nature of hallucination, outcome/sanction, monetary penalty, and a summary of the case. The cases are international in scope, with most coming from the U.S., and the most commonly cited nature of hallucination is “fabricated citations.” In Mid Central Operating Engineers Health v. Hoosiervac (May 2025), the Court recommended a sanction of $5,000 for each of three violations but ultimately fined counsel $6,000. The judge noted: “It is one thing to use AI to assist with initial research…it is an entirely different thing, however, to rely on the output of a generative AI program without verifying the current treatment or validity—or, indeed, the very existence—of the case presented. Confirming a case is good law is a basic, routine matter and something to be expected from a practicing attorney…an individual’s ‘citation to fake, AI-generated sources…shatters his credibility.’”

  • As reported by Ars Technica’s Benj Edwards, a developer using the code editor Cursor was being logged out when attempting to work from multiple devices. “Sam,” an AI customer service agent, cited a new (nonexistent) policy stating that the behavior was expected. Believing the policy was legitimate, users on Hacker News and Reddit reacted strongly, resulting in subscription cancellations and negative press. In an earlier incident, Air Canada’s AI agent incorrectly informed a passenger that he could apply for bereavement fares retroactively after booking; Air Canada later denied his refund request. Again, the result was mistrust and lingering reputational damage.

