In 2005 We Called It. In 2026 MIT, McKinsey, and Stanford HAI Confirmed It. Here Is the Original Record.

Posted on April 8, 2026



There is a question that follows every post in this series.

Where did this come from? How long has it actually been true?

And why did it take 21 years for the market to catch up?

The answer is on video, recorded in Calgary in 2005 at the PTDA Canadian Conference and presented to a room of industrial distributors as a keynote on reverse auctions and procurement transformation.

What follows is not a highlight reel. It is the original record — timestamped, unedited, and more relevant today than the day it was delivered.


What You Are About To Watch

In 2005, the enterprise technology market was cycling through its second major wave of procurement transformation. eProcurement, reverse auctions, SRM platforms, and catalog compliance tools were promising to revolutionize how organizations bought. The failure rate was already running at the same 55–80% band it occupies today. The advisory model guiding those deployments was already producing the same outcome it produces in 2026.

What the video shows in real time, the next 21 years would validate at scale.

Seven findings from the 2005 keynote are worth naming before you press play — because each one maps directly to what MIT, McKinsey, Stanford HAI, and RAND are publishing today.

1. The failure rate never moved. 75% of e-business initiatives were already failing — Covisint, Ford’s Everest, the Veterans Administration’s two initiatives totaling $650M scrapped after seven years. One audience member corrected the figure upward to 85%. The 2026 data from RAND and MIT puts it at 80.3%. In 21 years, the number barely shifted.

2. The root cause was behavioral, not technological. None of the failures cited was a technology failure. The technology worked. The deployments failed because they were pushed into conditions that could not sustain the outcome — buyers who would not adopt the system, front-line resistance that no platform could override, incentive structures that made off-contract buying more rational than compliance.

3. Catalog compliance was structurally broken. 79% of MRO purchases were going off-contract. Not because buyers were negligent. Because the systems were built around how procurement should work rather than how it actually works. That gap — between the equation-based model and the agent-based reality — is the core finding of the 1998 SR&ED-funded research that preceded this keynote.

4. The process savings were always larger than the price savings. One example: 23 buyers reduced to 3 — not through cuts but through process efficiency. Another: the 3M study showing 70% of procurement department time spent on 2% of dollar spend. The biggest savings were never in cost of goods. They were in understanding how the work actually gets done before trying to make it faster.

What that finding means in 2026 — and why it matters more than ever.

The 3M data point is not a historical curiosity. It is the structural map of exactly where AI is being sold versus where it can actually deliver.

The 70% of time spent on 2% of dollar spend is the high-volume, low-complexity, transactional tail — bounded, rule-based, and genuinely automatable. That is the work autonomous AI can handle reliably. It is in the other 30% of procurement resources, the ones managing 98% of dollar spend — the strategic, complex, high-stakes category decisions — that autonomous AI fails, and for the same reason the equation-based procurement systems of 2005 failed.

The conditions, incentives, and hidden variables that determine outcomes in complex spend cannot be captured in an algorithm. They have to be identified. By a human. Before the system runs.

The vendors selling AI to procurement and supply chain organizations today are selling it as a solution to the 70% problem while implying — and sometimes explicitly claiming — that it will also solve the 30% problem. That is the mismatch. It is the same mismatch that produced $650M in failed implementations in 2005 and $547 billion in failed AI investments in 2025.

Based on our own real-world experience with RAM 2025™ multimodel validation, autonomous AI will not reliably achieve strategic outcomes in complex environments without human oversight. The models provide analytical depth, pattern recognition, and adversarial scrutiny. The human provides judgment, timing, relationship intelligence, and the capacity to ask the question the system did not know it needed.

Remove the human from that equation and you do not get better decisions. You get faster ones — pointed in the wrong direction.

5. IBM was already reading the archive. By 2005, IBM had been visiting our website to download studies on the agent-based versus equation-based model — and even then, the market did not internalize the finding. The signal was visible to major institutions years before the convergence. It changed nothing about what the market chose to deploy.

6. The equation-based versus agent-based distinction was already the thesis. Systems built on equations describing how procurement should work will always fail against the reality of how procurement actually works. That is the intellectual foundation of every framework that followed — Phase 0™, HFS™, RAM 2025™, Hansen Strand Commonality™.

7. The pre-commitment window was already the intervention point. Not during implementation. Not after go-live. Before the decision is made — when the outcome is still changeable. That principle was not developed in response to AI. It was developed in response to procurement technology failure in 1998, applied at the Department of National Defence, and documented publicly in Calgary in 2005. It has been the governing principle of every engagement since.


Part 1 — The Original Record


Part 2 — The Original Record Continued


What 21 Years Proved

Over the past year, MIT, McKinsey, BCG, and Stanford HAI have independently arrived at the same structural conclusion: organizational readiness and the pre-commitment conditions for success must be assessed before the technology decision is made — not during implementation, not after go-live. The constraint is never the technology alone. It is the conditions, logic, and cross-boundary realities the technology has to operate within.

The Stanford HAI AI Index documented it. McKinsey published it. BCG confirmed it. And the former Director of the Stanford HAI AI Index reached out to Hansen Models™ — not the other way around — and said within twenty minutes: “You have a better idea of what I’m doing than I do.”

The Department of National Defence proved it in 1998 — moving delivery performance from 51% to 97.3% in 90 days, before a single piece of technology was introduced, by asking a question nobody had thought to ask: what time of day do orders come in?

In 2005, the Veterans Administration had just scrapped $650M in initiatives because front-line buyers would not use them. In 2025, global enterprises lost $547 billion in AI investments that failed to deliver intended business value. Same mechanism. Same missing diagnostic. The cost simply scaled with the technology.

Peer-reviewed papers are now in motion with industry thought leaders. The institutional convergence is documented and accelerating.

The finding itself is not new. It has been true since 1998. It was on record in 2005. And the collective 27-year Procurement Insights archive — 3,300+ independently produced documents, zero vendor sponsorships, zero paid analyst relationships — is the longitudinal evidence base that no incumbent advisory firm has ever built.

The pattern was visible in 1998. It was documented in 2005. It has now been independently confirmed in 2026.

The question is no longer whether it is true.

The question is whether organizations will act on it before repeating the failure one more time.


Phase 0™ is the pre-commitment organizational readiness diagnostic. It exists in the only window where the outcome is still changeable — before the commitment is made. And the commitment should never be made until the right outcome — the real outcome — is achieved.

Hansen Models™ is not asking for a seat at the table. The archive built the table. The work validated it. The market is now arriving at the same address.

Whatever stage you are at, if your organization is navigating an AI commitment in 2026 and you want the diagnostic that the incumbent advisory ecosystem was never designed to provide, the conversation starts here:

Book a 30-Minute Readiness Conversation

-30-

Posted in: Commentary