When MIT, BCG, McKinsey, and a 27-Year Archive Arrive at the Same Finding Without Coordination, That Is Not a Coincidence

Posted on April 5, 2026



Posted in: Procurement Insights · CIO Insights · CFO Insights


Earlier this week, Martin G. Choquette published a post that stopped me. Not because it was surprising — but because it wasn’t.

Choquette synthesized findings from MIT (300+ enterprise deployments), BCG (1,000 executives across 59 countries), McKinsey (nearly 2,000 organizations across 105 countries), Stanford HAI, and IDC. His conclusion: 95% of AI pilots fail to deliver P&L impact. The problem isn’t the AI. It is, in his words, “an organizational design crisis wearing a technology costume.”

Four patterns keep showing up in the research: treating AI as a project rather than a capability, layering AI onto broken processes, measuring pilot volume instead of outcomes, and building sandboxes instead of systems.

BCG’s data point is the one that should end the debate: 70% of AI success comes from people, process, and culture. Most companies spent 90% of their budget on the 10% that matters least.

Different institutions. Different datasets. Same conclusion.

The Stanford HAI case is the one that confirmed everything. One company. Two AI projects. Same team. Same technology. Same budget. One succeeded. One failed completely. The difference had nothing to do with the AI. The successful project had clear ownership and unambiguous decision rights. The failed project had fragmented accountability across 200 managers with no one who owned the outcome.

Same technology. Different organizational conditions. Opposite results.

Same platform. Opposite outcomes. The variable was organizational readiness and methodology, not technology. The only difference between the Stanford HAI case and the Procurement Insights archives is the reference period: the archives drew on 2007 and 2010, while the Stanford case drew on 2019 and 2023. Procurement Insights is the foundational proof of the Phase 0 model's lineage and enduring relevance.

Virginia eVA (success — methodology present): spend adoption 1% → 80%; supplier base 20,000 → 34,000; contract distribution 23% → 40%.

OECM (failure — methodology absent): same Ariba platform, modeled on the eVA success, no Phase 0™ readiness assessment. $20 million lost.

Archive sources:

Yes Virginia! There Is More to E-Procurement Than Software! (Part 1) (September 2007)

Yes Virginia! There Is More to E-Procurement Than Software! (Part 2) (September 2007)

OECM Punts Ariba, Taking a $20 Million Dollar Hit In The Process? (September 2010)

I have one question for the procurement and enterprise technology community reading this: where have you seen that finding before?


The 1998 Version of the Stanford HAI Case

In 1998, at Canada’s Department of National Defence, I documented the following: one organization, one technology initiative, same budget, same procurement system. Delivery performance was running at 51 percent. The proposed solution was to automate the procurement process.

Before any technology was deployed, I asked a single question: what time of day do orders come in?

The answer — 4 PM — revealed that service technicians were sandbagging parts orders until end of day because their performance metric rewarded call response numbers, not call close rates. The organizational condition generating the signals that the technology would have been deployed to manage was structurally misaligned. The accountability structure was fragmented across two departments with no one who owned the end-to-end outcome.

Phase 0™ surfaced the condition. The organizational structure was addressed before the technology was deployed. Delivery performance went from 51 percent to 97.3 percent in three months and sustained for seven years.

No new technology. Same team. Same budget. Different organizational conditions. Opposite results.

Stanford HAI documented this in AI in 2026. The Procurement Insights archive documented it in procurement in 1998. Neither research stream knew the other existed when it arrived at the finding.

That is not a coincidence. That is convergence.


Seven Eras of the Same Finding

The enterprise technology failure rate has held between 55 and 75 percent across seven distinct technology eras: ERP, e-procurement, SRM, cloud analytics, digital transformation, the first wave of AI, and now agentic AI.

In every era, the capability advanced. In every era, the major research firms — Gartner, McKinsey, Deloitte, and now MIT, BCG, and Stanford HAI — documented the same organizational failure pattern. In every era, the response was a new capability layer deployed on top of the undiagnosed organizational condition.

Choquette identifies the pattern precisely: they layered AI onto broken processes. BCG’s data confirms it. IDC’s 88% proof-of-concept attrition rate measures it. The Stanford HAI case makes it undeniable.

The Procurement Insights archive has been documenting the same pattern since 2007 — across ERP, e-procurement, supplier relationship management, cloud analytics, and now AI. The language changes with every era. The physics, as I wrote in December 2025 tracing the journey from the 1998 Metaprise model to today’s agentic ecosystems, does not.

Across seven technology eras, the capability improved. The failure rate did not.


What RAM 2025™ Treats as a Signal

RAM 2025™ treats independent convergence as the strongest signal of structural truth.

The convergence documented in this post is RAM 2025™ logic applied at a global research scale.

MIT, BCG, McKinsey, Stanford HAI, and IDC did not coordinate their research. They approached the AI deployment failure question from different methodologies, different sectors, different geographies, and different organizational scales. They arrived at the same structural finding: organizational readiness — not technology capability — is the determining variable.

The Procurement Insights archive approached the enterprise technology deployment failure question from independent longitudinal research across seven technology eras. It arrived at the same structural finding in 1998, confirmed it across every subsequent era, and documented it across more than 3,300 published pieces over eighteen years.

When independent research streams converge on the same finding without coordination, across different domains, different methodologies, and different time horizons — RAM 2025™ treats that as the strongest possible signal that the finding is structural rather than coincidental.

The finding is structural. Organizational readiness determines technology outcomes. It has always determined technology outcomes. The research is now saying it loudly enough that the C-Suite cannot continue to treat it as a procurement-specific observation.


The Wave Pattern Beneath the Finding

There is one more layer worth naming — and it is the one that explains why this finding keeps being rediscovered rather than acted on.

The DEI movement has ebbed and flowed since the 1950s. Supplier diversity programs surged after the 1968 Detroit riots, after the George Floyd tragedy in 2020, and are now ebbing with the March 2026 executive order. Each wave produces new programs. Each trough sees them reduced. The organizational conditions that would make the programs durable across waves — documented in the Supplier.io white paper for which I did the research in 2024 — are never formally assessed before the commitment is made. So each new wave starts from the same structural baseline as the last one.

The AI governance research wave Choquette’s post represents is following the same pattern. A surge of research confirming that organizational readiness is the determining variable. Significant executive attention. Significant investment in the capability layer. And, based on seven prior eras of evidence, a high probability that the organizational conditions beneath the capability layer will not be formally assessed before the commitment is made.

Every wave produces new solutions. None address the condition that survives the wave.

Phase 0™ was built for exactly this moment — not to be the wave, but to assess the foundation before the wave breaks. The DND proof case was built at the trough of one wave and produced results that sustained through the next seven. That is what wave-agnostic organizational diagnostics produce: outcomes that persist regardless of which technology era arrives next.


The Uncomfortable Implication

Choquette closes with this: “The uncomfortable truth: layering AI onto unclear structures doesn’t create leverage. It amplifies friction.”

The archive’s version of that statement, documented across seven eras: the technology does not create the organizational condition. It reveals it — at the speed and scale of whatever capability has just been deployed.

Agentic AI reveals the organizational condition at machine speed. Which means the cost of leaving the pre-commitment question unasked has never been higher — and the research confirming that the question needs to be asked has never been more convergent.

The question that has followed this archive since 1998 remains the one no agentic AI system is designed to ask before it acts: how would agentic AI have known to ask, "What time of day do orders come in?"

MIT says it. BCG says it. McKinsey says it. Stanford HAI says it. The 1998 DND proof case said it first.

The pre-commitment question — are the organizational conditions in place before the technology is deployed — is not a procurement question. It is not an AI governance question. It is the question that determines whether any technology, in any era, produces the outcomes it promises.

That question has a framework. It has a 27-year evidence base.

The question is no longer whether this matters. The question is whether it gets asked before the next commitment is made.


Related reading:

From Metaprise to Agentic Ecosystems: The 27-Year Journey to Architectural Truth

James, About That 75% Number

What the 2020 and 2023 Tealbook Interviews Missed — And What the Supplier.io White Paper Already Knew

Martin G. Choquette’s post: 95% of AI pilots fail to deliver P&L impact


Book a readiness conversation: calendly.com/jon-toq/30min

For more information on Hansen Models™: hansenprocurement.com


Phase 0™ · HFS™ Hansen Fit Score™ · RAM 2025™ · Hansen Models™

18 years · 3,300+ documents · Zero vendor sponsorships · Zero paid analyst relationships

-30-
