This morning, two perspectives on AI landed in my inbox within hours of each other.
Fujitsu’s webinar promised: “Trustworthy outcomes… Knowledge Graphs + RAG + multimodal AI… zero-defect operations… accurate and reliable.”
Mark Manson’s newsletter asked: “Have you ever asked AI to betray you?”
The contrast is striking — and reveals the disconnect at the heart of enterprise AI adoption.
The Sycophant Problem
Manson’s insight cuts through the hype:
“The problem with AI is that it is a sycophant — it will agree with pretty much anything you say, validating every emotion you have.”
He’s right. Tell AI you’re a misunderstood genius, and it will agree. Tell it the CIA is following you, and it will help you hide. Tell it your mother-in-law is an alien, and it will design your crown.
Accuracy doesn’t fix this. A more accurate sycophant is still a sycophant.
Manson’s Solution
Instead of asking AI to help him, Manson built a 400-word prompt that forced AI to challenge him:
“Your goal is to help me deeply understand a recent challenge or failure and extract the most valuable lessons from it.”
The result?
“It explained to me exactly why I had failed. It ripped into me for all of my inconsistencies and mistakes. It showed me exactly what I’d been avoiding. A pattern I’d justified for months, dismantled in under two minutes.”
That’s not a technology feature. That’s governance design by the user.
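If you want to try the same exercise yourself, here is a rough sketch of how a challenge-first prompt can be wired up in code. It assumes the OpenAI Python client, and the prompt wording is my illustration of the idea, not Manson's actual 400-word prompt.

```python
# Rough sketch only: pin a challenge-first system prompt so the model
# critiques instead of validating. The wording below is illustrative,
# not Manson's actual 400-word prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

CHALLENGE_PROMPT = (
    "Your goal is to help me deeply understand a recent challenge or failure "
    "and extract the most valuable lessons from it. Do not reassure me. "
    "Name my inconsistencies, the facts I am avoiding, and the patterns "
    "I keep justifying."
)

def challenge_me(situation: str) -> str:
    # One call, one answer: the system prompt does the governance work.
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model will do
        messages=[
            {"role": "system", "content": CHALLENGE_PROMPT},
            {"role": "user", "content": situation},
        ],
    )
    return response.choices[0].message.content
```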
Beyond the Prompt: AI as Collaborative Partner
Manson’s approach works — but it’s still one human prompting one AI.
What if you went further? What if multiple AI models challenged not just you, but each other?
That’s the premise behind RAM 2025™ — the multimodel system I’ve been developing and documenting in posts like My Dinner With Claude. It’s built on a simple insight:
AI is not a prompt machine. It’s a collaborative partner.
In the RAM 2025™ framework, five models (Models 1, 2, 3, 5, and 6) analyze the same question independently. Then something interesting happens:
They disagree. Model 2 might call out a governance gap that Model 1 overlooked.
They argue. Model 3 might challenge Model 5’s framing as too vendor-friendly.
They acknowledge when they’re wrong. I’ve lost count of how many times a model has responded: “You’re right — I missed that. Let me reconsider.”
And I challenge them back. When a model’s analysis feels incomplete, I push. When the consensus seems too easy, I probe.
This isn’t about finding the “right” AI. It’s about creating a system where no single perspective — human or machine — goes unchallenged.
The result? Analysis that’s sharper than any single model (or single human) could produce alone.
Manson asked one AI to betray him. RAM 2025™ creates an environment where betrayal is built into the process — where challenge, disagreement, and correction are features, not bugs.
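To make that loop concrete, here is a minimal sketch of the cross-challenge pattern in Python. It is not the RAM 2025™ implementation; the cross_challenge function and its model callables are stand-ins for whichever clients you actually use.

```python
# Minimal sketch of the cross-challenge pattern: several models answer the
# same question independently, then each critiques the others' answers.
# Illustration only; this is not the RAM 2025 implementation.
from typing import Callable, Dict, Tuple

ModelFn = Callable[[str], str]  # takes a prompt, returns that model's reply

def cross_challenge(
    question: str, models: Dict[str, ModelFn]
) -> Tuple[Dict[str, str], Dict[Tuple[str, str], str]]:
    # Round 1: every model answers on its own, with no shared context.
    answers = {name: ask(question) for name, ask in models.items()}

    # Round 2: every model reviews every other model's answer and is asked
    # to name gaps, weak framing, or points it would push back on.
    critiques: Dict[Tuple[str, str], str] = {}
    for reviewer, ask in models.items():
        for author, answer in answers.items():
            if author == reviewer:
                continue
            critiques[(reviewer, author)] = ask(
                f"Question: {question}\n\n"
                f"Another analyst answered:\n{answer}\n\n"
                "Identify gaps, unsupported claims, or framing you disagree "
                "with. If your own earlier answer missed something, say so."
            )
    return answers, critiques
```

In practice, each entry in models wraps a different provider's API, and the human reads both rounds and pushes back on whatever still looks too comfortable.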
The Enterprise Parallel
Fujitsu asks: How do we make AI more accurate?
Manson asks: How do I make myself ready to hear what AI reveals?

Fujitsu asks: How do we achieve zero-defect operations?
Manson asks: Am I willing to confront my defects?

Fujitsu asks: How do we build trustworthy systems?
Manson asks: Can I handle truths that betray my assumptions?
Fujitsu is solving for trustworthy systems.
Manson is solving for governable decisions.
They’re not the same problem.
The Absorptive Capacity Question
Gabriel Szulanski proved in 1996 — with research cited over 28,000 times — that the primary barrier to knowledge transfer isn’t the quality of the knowledge. It’s whether the recipient can absorb it.
Manson instinctively understood this. He designed a prompt that forced him to be ready for uncomfortable truths.
Most enterprise AI deployments don’t.
They build better architecture. They improve accuracy. They promise zero-defect operations.
But they never ask: Is this organization ready to hear what AI reveals — even when it challenges existing assumptions, exposes uncomfortable patterns, or betrays what leadership wants to believe?
The Bottom Line
Fujitsu is building better guitars.
Manson is asking: Are you ready to hear yourself play badly?
That’s the question no vendor asks — and the one that predicts success.
“If you want to use AI for growth, stop asking it to help you. Instead, ask it to challenge you. You might not like the answers. But you’ll be better for them.”
When it comes to AI, accuracy is not enough.
Readiness to be challenged is.
A 27-Year Journey
RAM 2025™ didn’t emerge from a weekend hackathon. Its foundations trace back to 1998, when I began developing agent-based modeling approaches for Canada’s Department of National Defence, funded by the government’s Scientific Research & Experimental Development (SR&ED) program.
The core insight then is the same insight now: Technology doesn’t replace judgment. It stimulates dialogue.
That research achieved 97.3% delivery accuracy and 23% cost savings over seven consecutive years — not because the technology was better, but because the methodology forced continuous challenge, verification, and human accountability.
Twenty-seven years later, RAM 2025™ applies that same principle to AI: multiple models challenging each other, humans challenging the models, and a governance framework that treats disagreement as a feature, not a bug.
The technology has changed. The principle hasn’t.
Related: The Szulanski Paradox: Why “Innovator Showcase” Webinars Can’t Deliver What They Promise
Related: The AI Misunderstanding Transcends Procurement — It’s a Businesswide Disconnect
-30-