Five AI Models, One Timeline, Same Conclusion:

Posted on December 18, 2025


Why 35 Years of Procurement Technology Didn’t Fix the Failure Rate

Matthias Gutzmann recently posted an elegant timeline showing the evolution of procurement technology — from the first procurement apps through the suites era to today’s emerging tech landscape.

I asked RAM 2025 — five independent AI models — a simple question: What are the failure rates for each era, and why is a graph like this useful?

Their answers converged on the same uncomfortable truth.

The Original Timeline

What the Timeline Doesn’t Show: Investment vs. Failure

What Five AI Models Found

Model 5: The Pattern

“Three generations of technology. Billions in funding. Dozens of acquisitions. The failure rate got worse, not better.”

Failure rates: First Apps 60-70% → Suites Era 60-70% → Emerging Tech 70-95%

Key insight: “The graph tracks what vendors built — not whether organizations could absorb it. This is the CB Insights map problem again: beautiful supply-side visualization, missing the Phase 0 layer entirely.”

Model 6: The Diagnostic Tool

“This visual timeline is more than a historical record; it is a diagnostic tool for the modern executive. Its utility lies in revealing the ‘physics of failure’ that has persisted for 35 years.”

Reported “Success” (Go-Live) vs. Actual Outcome Success (ROI/Value):

• First Apps: 60-70% go-live → <30% actual value
• Suites Era: 50-60% go-live → ~20% actual value
• Emerging Tech: <40% go-live → 12-30% actual value

Model 1: The Uncomfortable Truth

“This graph is useful not because it shows how procurement technology evolved — but because it shows how little initiative success evolved despite it.”

Success rates by era: 1990s Apps ~25-30% → 2000s ERP ~15-25% (worst era) → 2010s Cloud ~30-40% → 2020s AI ~30-35%

Key insight: “Each generation solved the previous generation’s technical pain — but none solved governance. That’s why Phase 0 matters.”

Model 2: The Two Timelines

“Its real value lies in what it omits — the ‘customer experience’ column. Without failure rates overlaid, it perpetuates hype by focusing on ‘what vendors built’ while ignoring ‘what buyers experienced.’”

Failure rates: First Apps ~70% → Suites Era ~75% → Emerging Tech ~80%. Rates worsened despite technological advances due to persistent organizational unreadiness.

Model 3: The Evidence

“On each line of that timeline, the technology generation changes but the implementation odds barely move.”

Cited sources: BCG (60-80% digital transformation failure), NTT Data (70-85% GenAI deployment failure), MIT/Fortune (95% AI pilot failure)

Key insight: “It flattens three decades of 60-80% failure into a neat progression — precisely the kind of ‘history as feature list’ that readiness models are meant to puncture.”

Where All Five Models Converged

Despite different architectures, training data, and reasoning approaches, all five models reached the same conclusions:

1. Failure rates stayed constant (or worsened) across all three eras

2. Technology was never the limiting factor

3. Governance, readiness, and accountability were never addressed upstream

4. The graph shows vendor evolution, not practitioner outcomes

5. Phase 0 (readiness assessment) is the missing layer

The Bottom Line

If the technology kept getting better, why did the failure rate keep getting worse?

Because the graph tracks what vendors built — not whether organizations could absorb it.

Thirty-five years. Three generations. Billions invested. And we’re still asking the wrong question.

The question isn’t “which technology?” It’s “are we ready?”

-30-

BONUS SECTION: Do the Procurement Insights Archives Support These Findings?

I asked one more question: Does the Procurement Insights archive (2007-2025) validate what these five AI models independently concluded?

The answer: Yes — and the archives predicted these findings 18 years ago.

Claim 1: “Failure rates stayed constant or worsened”

Archive Evidence:

• Dale Neef (2001): “75% of all e-Procurement initiatives would fail.”

• Procurement Insights (2015): “85 percent of all e-Procurement initiatives of the enterprise era failed.”

• Multiple posts (2025): “80% failure rate” and “70-95% AI pilot failure.”

Verdict: 24 years of documented evidence.

Claim 2: “Technology was never the limiting factor”

Archive Evidence (June 28, 2007): “The actual software you use has very little to do with the success of your procurement initiative. In reality, the greatest hindrance to mainstream adoption of innovative procurement practice is the direct result of what I refer to as the hierarchical implementation mechanism.”

Verdict: 18 years of consistent documentation.

Claim 3: “Governance and readiness never addressed upstream”

Archive Evidence (June 28, 2007): “Most purchasing departments ‘inherit’ their software as an adjunct downstream byproduct of either an original finance or IT-centric initiative. This has usually meant that their input had been relegated to the realm of the afterthought versus providing decisive and proactive input when it matters the most – prior to a decision being made.”

Verdict: The exact pattern five AI models identified in 2025 — documented in 2007.

Claim 4: “Graph shows vendor evolution, not practitioner outcomes”

Archive Evidence (November 2010): “The focus for the majority of those who cover the industry regarding SAP as well as other ERP vendors has been centered on anything and everything but these failures… the question remains why is there a reluctance to tell it like it is?”

Verdict: 15 years of documented industry silence on practitioner outcomes.

Claim 5: “Phase 0 is the missing layer”

Archive Evidence (June 28, 2007): “It is at this very point when a foundational understanding exists and a consensus has been reached between stakeholders that technology (whether existing or proposed) can be introduced as a means of driving greater efficiency in the supply chain.”

Verdict: Phase 0 described in 2007 — 18 years before it was formally named.

What the Archives Prove

The Procurement Insights archives don’t just support the five AI models’ findings — they predicted them.

• The pattern was visible before the Suites Era peaked

• The pattern persisted through the Cloud/SaaS transition

• The pattern is repeating in the AI/Emerging Tech era

• The solution (readiness assessment before technology) was identified before the problem was fully measured

The archives serve as institutional memory, demonstrating that the RAM 2025 AI models are pattern-matching against documented history.


Posted in: Commentary