Deloitte just proved why the current tech-first approach to ProcureTech AI selection and implementation will continue to fail for most organizations
Posted on October 7, 2025
AI slop just cost Deloitte a big refund, and if you’re not careful, it’ll cost your organisation a lot more.
Deloitte just admitted it used generative AI to produce a $440,000 government report riddled with fake citations, made-up legal quotes, and basic errors.
Embarrassing, yes. Uncommon, no.
Unfortunately, AI slop is everywhere. It lives in every enterprise that rushed into using GenAI without a plan, a policy or proper training.
Let’s look at what actually went wrong in Deloitte’s approach:
– AI was used to fill knowledge gaps – instead of surfacing human expertise.
– Outputs were taken at face value – instead of being reviewed and refined by skilled professionals.
– It was used as an automation shortcut – not as an augmentation tool.
Here’s what not to do:
– Don’t replace your judgment with AI’s output. It’s called artificial for a reason.
– Don’t “automate” complex, multi-layered thinking with generic prompts and copy-paste results.
– Don’t hand over trust to a tool that doesn’t understand context, nuance or consequence.
I help organisations avoid this exact kind of failure by doing three things that most miss:
1. Build from internal knowledge – AI works best when grounded in what you already know.
2. Use AI to amplify, not replace – it should boost your strengths, not mask your weaknesses.
3. Train your people to become AI super-users – not just prompt monkeys, but power users who question, adapt and verify AI output.
If you’re scaling GenAI in your business, do it right. Use AI to sharpen what you already do best, but don’t fall into the trap of “outsourcing” your core value proposition to OpenAI or similar.
The above was posted by Jonas Christensen on LinkedIn
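Points 1 and 3 above are where most teams fall down in practice, so here is a minimal sketch of what they can look like in code. Everything in it is an assumption of mine for illustration: the function names, the [doc_id] citation format, the tiny document set. It is not Jonas's process, Deloitte's, or any real product's API.

```python
import re

def build_grounded_prompt(question: str, internal_docs: dict[str, str]) -> str:
    """Point 1: ground the model in what you already know and tell it to cite only that."""
    context = "\n\n".join(f"[{doc_id}] {text}" for doc_id, text in internal_docs.items())
    return (
        "Answer using ONLY the sources below and cite them as [doc_id]. "
        "If the sources do not cover something, say so instead of guessing.\n\n"
        f"SOURCES:\n{context}\n\nQUESTION: {question}"
    )

def unverified_citations(draft: str, internal_docs: dict[str, str]) -> list[str]:
    """Point 3: before a human even starts reviewing, flag citations that point at
    nothing in the source set -- the exact failure mode in the Deloitte report."""
    cited = set(re.findall(r"\[([^\]]+)\]", draft))
    return sorted(cited - internal_docs.keys())

if __name__ == "__main__":
    docs = {"policy-2024": "Contractors must disclose any use of generative AI in deliverables."}
    draft = "AI use must be disclosed [policy-2024], as confirmed in Smith v. Jones [case-law-17]."
    print(unverified_citations(draft, docs))  # ['case-law-17'] -> stop and verify before it ships
```

A check like this does not replace the skilled reviewer Jonas is calling for; it just makes sure the reviewer's time goes to judgment rather than hunting for sources that never existed.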
Here was my reply:
Challenge the norm. Get it right over being right. Document what others won’t. This is how pattern recognition (and AI) actually works → https://bit.ly/3IOnFTC
It isn’t AI’s fault—it’s the misuse of AI.
What Deloitte missed (and what most organizations miss):
They used AI as an answer machine instead of a diagnostic tool.
They asked: “Give me a report.” They should have asked: “What evidence supports this conclusion? Where are the gaps? What am I missing?”
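To make that contrast concrete, here is a rough sketch of the two modes side by side. The `ask` helper is a placeholder for whatever model client you actually run, and the wording of the questions is mine, not a prescribed prompt set.

```python
# Answer-machine mode: one prompt, one deliverable, no interrogation.
ANSWER_MACHINE = "Write the final report recommending option B."

# Diagnostic mode: the model stress-tests the conclusion; the human still writes the report.
DIAGNOSTIC_QUESTIONS = [
    "List every piece of evidence in the attached material that supports option B.",
    "List the evidence that contradicts it, or state explicitly that none exists.",
    "What information is missing before this recommendation would be defensible?",
    "Which of your claims above would a human need to verify against a primary source?",
]

def run_diagnostic(ask, model: str, material: str) -> list[str]:
    """Interrogate a conclusion instead of asking for a finished deliverable.
    `ask(model, prompt) -> str` is assumed to be whatever client you already use."""
    return [ask(model, f"{material}\n\n{q}") for q in DIAGNOSTIC_QUESTIONS]
```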
The pattern: Same equation-based thinking that’s produced 70-80% failure rates in procurement tech for 25 years. Deploy technology first, assume capability, skip readiness assessment.
The fix: Three operating principles that work for both AI collaboration and organizational transformation:
Challenge the norm (question AI outputs, don't accept them at face value)
Get it right over being right (iterate based on evidence, don't defend the first draft)
Document what others won't (measure outcomes, not activity)
I run a 6 Model/5 Level AI team specifically to avoid the Deloitte mistake—testing conclusions across multiple models, challenging my own assumptions, and documenting the methodology.
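The mechanics of the cross-checking step are simpler than the label suggests. Below is a stripped-down sketch of the idea only, not my actual six-model setup: each entry in `models` is a placeholder callable for whichever client you run, and any disagreement gets escalated to a human instead of being averaged away.

```python
from collections import Counter

def cross_check(claim: str, evidence: str, models: dict) -> dict:
    """Send the same claim and evidence to several independent models and
    surface disagreement. `models` maps a label to any callable(prompt) -> str."""
    prompt = (
        f"CLAIM: {claim}\nEVIDENCE: {evidence}\n"
        "Reply with SUPPORTED, UNSUPPORTED, or INSUFFICIENT, then one sentence of reasoning."
    )
    verdicts = {name: call(prompt).split()[0].upper().strip(".,") for name, call in models.items()}
    tally = Counter(verdicts.values())
    return {
        "verdicts": verdicts,
        "consensus": tally.most_common(1)[0][0] if len(tally) == 1 else None,
        "needs_human_review": len(tally) > 1,  # disagreement goes to a person, not a tiebreaker model
    }
```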
Jonas is right about the solution: “Train your people to become AI super-users—not just prompt monkeys.”
But here’s what most training misses: Question quality, not prompt engineering.
The full methodology reveal (including how I assess vendors like Certa and Vertice using this framework) is in the following post: Pattern Recognition, Not Prophecy: Why I See What Others Miss (And My Certa/Vertice Assessment)