The AI Implementation Gap: 2020-2025 — Or Shouldn’t We Be Measuring Success Instead of Documenting Failure?

Posted on January 15, 2026



Why $30B+ Will Be Wasted in 2025 — And What Can Be Done About It

By Jon Hansen | Procurement Insights | January 2026


Section 1: The Visual Business Case

The following two graphics tell a story that $30 billion in wasted AI investment cannot hide.

The Problem: Spending Explodes, Failure Rate Climbs

Between 2020 and 2025, enterprise AI spending increased nearly 25-fold, from $1.5 billion to $37 billion. During the same period, implementation failure rates climbed from 70% to over 90%. The pattern is unmistakable: more spending has not produced better outcomes.
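To make that gap concrete, here is a back-of-the-envelope sketch in Python using only the figures cited above. Treating each year's wasted investment as spend multiplied by the headline failure rate is a simplifying assumption of ours for illustration, not part of the RAM 2025 analysis itself.

```python
# Back-of-the-envelope sketch using the figures cited in this section.
# Assumption (ours, for illustration): a year's "wasted investment" is
# approximated as that year's enterprise AI spend multiplied by the
# reported implementation failure rate.

spend_2020_b = 1.5    # enterprise AI spend, 2020, in $ billions
spend_2025_b = 37.0   # enterprise AI spend, 2025, in $ billions

failure_2020 = 0.70   # reported failure rate, 2020
failure_2025 = 0.90   # reported failure rate, 2025 ("over 90%")

growth = spend_2025_b / spend_2020_b          # ~25x
waste_2020 = spend_2020_b * failure_2020      # roughly $1B
waste_2025 = spend_2025_b * failure_2025      # roughly $33B

print(f"Spending growth: ~{growth:.0f}x")
print(f"Implied waste, 2020: ~${waste_2020:.1f}B")
print(f"Implied waste, 2025: ~${waste_2025:.1f}B")
```

Even at this coarse level of rounding, the implied annual waste grows from roughly $1 billion to roughly $33 billion in five years.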

This analysis was validated by five independent RAM 2025 AI models using data from Menlo Ventures, MIT NANDA, RAND Corporation, S&P Global, Gartner, McKinsey, and IDC. It is a Level 4 (of 5) RAM 2025 multi-model assessment.

The Solution: What If Organizations Had Measured Readiness First?

The green bars represent the projected waste if organizations had applied Phase 0 readiness assessment before technology selection. Instead of climbing from 70% to more than 90%, failure rates would have held steady at approximately 25%.

The math is straightforward: In 2025 alone, Phase 0 readiness assessment could have prevented over $21 billion in wasted investment. This projection is based on the Hansen Fit Score methodology (75-87% discrimination reliability) and validated by the Department of National Defence case study (97.3% delivery accuracy, 1998-2005).
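For readers who want to check the arithmetic, here is the same simple model extended to the Phase 0 counterfactual. Again, spend multiplied by failure rate is our illustrative simplification, not the Hansen Fit Score methodology itself.

```python
# Counterfactual sketch: 2025 waste at the observed failure rate versus
# the ~25% rate projected with Phase 0 readiness assessment.
# The spend-times-failure-rate model is an illustrative simplification,
# not the Hansen Fit Score methodology itself.

spend_2025_b = 37.0          # enterprise AI spend, 2025, in $ billions
failure_observed = 0.90      # reported failure rate ("over 90%")
failure_with_phase0 = 0.25   # projected failure rate with Phase 0

waste_observed = spend_2025_b * failure_observed        # roughly $33B
waste_with_phase0 = spend_2025_b * failure_with_phase0  # roughly $9B
prevented = waste_observed - waste_with_phase0          # roughly $24B

print(f"Prevented waste, 2025: ~${prevented:.0f}B")
```

At those inputs the model yields roughly $24 billion prevented, comfortably above the "over $21 billion" figure cited above.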

The visual argument is clear: Red bars explode. Green bars stay short. The difference is Phase 0.


Section 2: The Anticipated MIT NANDA – RAND Pushback

“Analytical and scrutinizing; balanced endorsement with caveats. Analysts would dive into the methodology’s claims (e.g., 75-78% discrimination reliability, 65% failure drop), cross-referencing with sources like MIT NANDA or RAND. They’d appreciate the data-driven approach but question generalizability (e.g., from 1998 DND to 2025 AI). Positive if the 5-model verification holds; otherwise, they’d call for more peer-reviewed studies. The graph’s promotional tone might prompt ‘hype check’ reports.”

The above assessment comes from one of the five RAM 2025 AI models used in our validation process. We include it here because intellectual honesty demands we address the legitimate questions analysts will raise.

Let’s address them directly.

Objection 1: “Generalizability from 1998 DND to 2025 AI”

The DND case study (1998-2005) is foundational, not singular. The same methodology produced sustained success across five major implementations spanning 27 years, three countries, and five technology generations, from early web to cloud to SaaS to mobile to AI. The methodology's durability across technology eras is itself evidence of generalizability. Section 3 provides the complete evidence chain.

Objection 2: “Cross-referencing with RAND”

We welcome it. RAND’s 2024 report (“The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed”) identifies five root causes of AI failure: (1) Misunderstanding or miscommunication of the problem, (2) Lack of adequate data, (3) Over-focus on technology over problems, (4) Inadequate infrastructure, and (5) Applying AI to unsolvable problems.

Every single one of these is addressed by Phase 0 readiness assessment — a methodology documented in 1998, validated at DND with 97.3% delivery accuracy, 26 years before RAND published their findings. RAND is describing the disease. Phase 0 is the cure — administered since 1998.

Objection 3: “Peer-reviewed studies”

Fair request. But consider: the frameworks that have produced 80% failure rates for 30 years were never held to this standard. The burden of proof should be on methodologies with documented failure, not documented success. That said, we welcome independent verification. The evidence chain is open for scrutiny.

Objection 4: “Promotional tone / hype check”

Understandable concern. But there’s a difference between hype and documented outcomes. The 97.3% delivery accuracy isn’t a projection — it’s a seven-year measured result. The $338 million Virginia savings isn’t a forecast — it’s audited. The question isn’t whether we’re being promotional. The question is: Why has the industry normalized 80% failure while ignoring methodologies with documented success?


Section 3: The Evidence Chain (1998-2025)

The following five case studies document 27 years of methodology validation across three countries and multiple technology generations.

Five implementations. Three countries. 27 years. Five technology generations. Same methodology. Documented success.

The Common Thread

Each of these implementations succeeded because they prioritized stakeholder alignment and process understanding before technology selection. As documented in the 2007 Virginia analysis:

“eVA’s effectiveness has little to do with the technology and more to do with the methodology the Virginia brain trust employed. It is when technology (nee software) is seen as the primary vehicle to drive results that it becomes ineffectual and mostly irrelevant.”

This insight — documented in 2007 — directly anticipates RAND’s 2024 recommendation: “Industry leaders should focus on the problem, not the technology.”

The methodology didn’t predict RAND’s findings. It preceded them by 17 years.


Section 4: The Question

We anticipate that some analysts, consultants, and solution providers will question the claims presented here. They will ask for more peer-reviewed studies. They will question generalizability. They will raise concerns about promotional tone.

We welcome that scrutiny.

But we also ask them to apply the same standard to themselves.

For 30 years, the consulting and analyst ecosystem has sold frameworks that have produced documented failure rates of 70-85%. ERP implementations. Digital transformations. Procurement technology. And now AI. The pattern is consistent. The outcomes are consistent. The failure is consistent.

Where are the peer-reviewed studies validating those frameworks? Where is the evidence that their methodologies produce success? Where is their accountability for $30 billion in wasted AI investment in 2025 alone?

The burden of proof should not fall exclusively on methodologies with documented success. It should fall equally — if not more heavily — on methodologies with documented failure.


The question isn’t whether the Hansen Method meets peer-review standards.

The question is: Why are organizations still funding frameworks that have produced 80% failure for 30 years — while ignoring methodologies with documented success?


The graphics tell the story. The evidence chain validates it. The question now is yours to answer.

Jon Hansen | Founder, Hansen Models | Creator, Hansen Method | procureinsights.com | © 2026 Hansen Models

-30-

Posted in: Commentary