When AI Goes Wrong: AI Hallucinations Still Costing Firms Money and Credibility

In mid-2025, Australia’s Department of Employment and Workplace Relations paid Deloitte roughly AU$440,000 (€265,000) for a detailed independent review of its welfare compliance system. The resulting 237-page report looked professional — until University of Sydney researcher Dr Christopher Rudge examined the footnotes. It contained around 20 AI hallucinations: fabricated academic citations, non-existent studies, and a completely invented quote from a Federal Court judgment (including a misspelled judge’s name). Deloitte admitted using Azure OpenAI’s GPT-4o to help “fill gaps” and quietly issued a corrected version along with a partial refund.

This wasn’t an isolated mishap. As generative AI embeds deeper into professional services, hallucinations continue to surface in high-value, high-risk contexts, often with real financial and legal consequences.

Recent Wave of Legal and Consulting Failures (Early 2026)

Earlier this month, Platinumids reported that lawyers and pro se litigants have submitted 1,227 fabricated citations to courts worldwide after trusting unverified AI output. The majority occurred in the U.S. (811 cases), followed by Canada (135 cases). These are only the cases that were caught, not necessarily all of them.

Just weeks ago in April 2026, elite Wall Street firm Sullivan & Cromwell had to apologise to a federal bankruptcy judge in Manhattan after submitting a motion riddled with AI-generated fake case citations. Opposing counsel spotted the errors; the firm acknowledged the hallucinations, corrected the filing, and outlined remedial steps.

This echoes a broader trend: courts have documented hundreds of AI hallucination cases in filings, with the pace accelerating sharply in 2026. US appeals courts have sanctioned lawyers (with fines of $2,500 and higher) for unverified AI use, and judges are increasingly issuing warnings and penalties for failing to verify outputs.

Deloitte itself faced a similar issue in Canada shortly after the Australian case. A provincial government health-care report costing nearly $1.6 million reportedly contained non-existent citations and other inaccuracies attributed to AI.

Why This Persists in 2026

Even as models improve on narrow, grounded tasks, open-ended professional work, such as drafting reports, legal briefs, and compliance documents, remains vulnerable. Hallucinations thrive when AI is asked to generate supporting evidence, citations, or analysis without strong retrieval grounding and mandatory human verification.

For European readers navigating the EU AI Act, these examples are timely. High-risk uses in public services, justice, and employment (exactly the domains hit in Australia and the legal cases) demand transparency, risk assessments, and human oversight. Hidden or casual reliance on generative AI for “productivity” can quickly turn into regulatory exposure, client loss, or sanctions.

Practical Takeaways

  • Verification loops are non-negotiable. Treat AI as a first-draft generator, never the final authority, especially for citations, quotes, or policy analysis.
  • Disclosure and procurement rules matter. Require vendors to declare AI use and provide hallucination-mitigation evidence (RAG, source linking, multi-step review).
  • The competitive edge shifts. Firms that use AI most responsibly, with clear governance, audit trails, and accountability, will win trust in regulated markets. Aggressive, unchecked deployment risks repeating these headlines.
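The "verification loop" in the first takeaway can be made concrete. A minimal sketch, assuming a human-curated register of known-good sources: every citation an AI draft produces is checked against that register, and anything unmatched is flagged for human review before filing. The function name, register, and citations below are illustrative placeholders, not any vendor's real API; production pipelines would match on DOIs, court docket numbers, or legal-database IDs rather than normalised strings.

```python
def verify_citations(draft_citations, source_register):
    """Split AI-generated citations into verified and flagged lists.

    source_register is a set of known-good citations, lower-cased.
    Anything not found there needs human review before the document
    leaves the building.
    """
    verified, flagged = [], []
    for cite in draft_citations:
        # Normalise for comparison; real systems should match on
        # stable identifiers (DOIs, docket numbers), not raw text.
        if cite.strip().lower() in source_register:
            verified.append(cite)
        else:
            flagged.append(cite)  # possible hallucination: human must check
    return verified, flagged


# Hypothetical example: one genuine entry, one invented one.
register = {"smith v jones (2019) fca 101"}
citations = ["Smith v Jones (2019) FCA 101", "Doe v Roe (2023) FCA 999"]
ok, suspect = verify_citations(citations, register)
```

The point of the sketch is the shape of the process, not the matching logic: nothing AI-generated reaches a court or a client until the flagged list is empty and a named human has signed off.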

As Europe advances trustworthy AI, cases like these remind us that the real differentiator isn’t adoption speed, but reliability and accountability. The technology is powerful; the processes around it still determine whether it delivers value or expensive embarrassment.

Author: Andy Samu

See Also:

Biggest AI Surveillance Scandals Threatening Europe’s Privacy in 2026

What is #CatalanGate? AI Surveillance Haunts Spain in 2026
