The AI Misunderstanding Transcends Procurement — It’s a Businesswide Disconnect

Posted on January 29, 2026



By Jon W. Hansen | Procurement Insights

I was drawn to Fujitsu’s webinar title: “Trustworthy AI: Achieving Accuracy and Trust in Critical Decisions.”

Finally, I thought. Someone leading with trust and accuracy rather than speed and automation percentages.

Rather than offering my own assessment, I asked the RAM 2025™ multimodel framework what it thought. Here’s what five independent AI models concluded:


RAM 2025™ Multimodel Analysis

Model 1 — The Architecture Gap:

“They anchor on accuracy and trust for business-critical decisions, which is the right framing. The Knowledge Graph + RAG + multimodal trio is sensible technically. But ‘zero-defect’ as a phrase is risky: in many domains the right goal is bounded risk plus strong detection and containment, not literal zero defects. The interesting question is which dimensions of trust — robustness, alignment, governance, auditability, liability — they actually address and how those are measured.”

Model 2 — The Governance Gap:

“The post says ‘How AI outcomes can be made accurate and reliable.’ But it never asks: Who decides whether an outcome is acceptable? Who is accountable when it’s wrong? Who overrides the model? They are solving for ‘How do we make the model more accurate?’ The real question is: ‘How does an organization govern decisions informed by AI?’ Those are not the same problem.”

Model 3 — The Accountability Gap:

“This is an important shift in tone from typical AI marketing. But governance is implied, not explicit. Knowledge Graphs + RAG improve explainability, but the post doesn’t name the full steering wheel: decision rights, verification protocols, escalation paths, or accountability when ‘zero-defect’ claims fail in production. Without these, even trustworthy architecture can become certainty theater.”

Model 4 — The Business Model Gap:

“They’re solving for ‘trustworthy systems.’ The real question is ‘governable decisions.’ Architecture doesn’t determine success — organizational readiness does. Until vendors connect those two, the industry will keep repeating the same cycle: impressive tech, great webinars, high expectations, low absorption, high failure rates.”

Model 5 — The Readiness Gap:

“The 2026 ‘governance-first’ shift is real: AI Governance has become a board-level imperative. But ‘Human-in-the-Loop’ and ‘phased autonomy’ only work if organizations have assessed their readiness to absorb what AI reveals. The question remains: Can this organization actually act on what AI shows them?”


The Pattern Across Every Model

All five models identified the same disconnect:

| What Fujitsu Addresses | What Remains Unaddressed |
| --- | --- |
| Architecture (Knowledge Graphs + RAG) | Governance (decision rights, escalation) |
| Accuracy (multimodal, explainability) | Accountability (who owns errors) |
| Technology design | Organizational readiness |
| "Zero-defect operations" | Bounded risk + detection + containment |

The models converged on a single insight:

Trustworthy AI is as much an organizational design problem as a technical one.


Why This Matters Beyond Procurement

This isn’t a procurement problem. It’s an enterprise problem.

The same 70-80% failure rate has followed every technology wave since 1995 — ERP, e-procurement, digital transformation, and now AI. The technology changes. The failure pattern doesn’t.

Why?

Thirty years ago, Gabriel Szulanski answered this question in research that has been cited over 28,000 times:

“Contrary to conventional wisdom that blames primarily motivational factors, the major barriers to internal knowledge transfer are knowledge-related factors such as the recipient’s lack of absorptive capacity, causal ambiguity, and an arduous relationship between the source and the recipient.”

— Szulanski, G. (1996). “Exploring Internal Stickiness.” Strategic Management Journal

Translation: Best practices fail to transfer — even within the same company — because the barrier isn’t the technology. It’s whether the recipient organization can absorb it.

Fujitsu’s Knowledge Graph + RAG architecture may be sound. But architecture doesn’t determine success. Organizational readiness does.


A Note on Fairness

I was drawn to this webinar but couldn’t attend live due to the time zone difference. By the time I accessed it, the session had ended.

The promotional outline didn’t address organizational readiness or the governance gaps the models identified — but that doesn’t mean it didn’t come up in the discussion. For that reason, I’m including the link to the on-demand recording and encourage you to listen. I know I will.

👉 Trustworthy AI: Achieving Accuracy and Trust in Critical Decisions

If the panel addressed these questions — Who decides? Who verifies? Who’s accountable? — I’ll be the first to update this post.

As I always say: I would rather get it right than be right.


The Question That Remains

Before your next AI investment — whether in procurement, finance, supply chain, or enterprise-wide — ask:

Has anyone assessed whether our organization can absorb what we’re about to deploy?

If the answer is “no,” you’re not ready for trustworthy AI.

You’re ready for the same 70-80% failure rate that’s followed every technology wave for 30 years.


Related: The Szulanski Paradox: Why “Innovator Showcase” Webinars Can’t Deliver What They Promise


What do you think? Is trustworthy AI a technology problem or an organizational readiness problem?

-30-

Posted in: Commentary