Today’s Question: It’s Not a Competition of Design. It’s a Competition of Purpose.

Posted on January 28, 2026


By Jon W. Hansen | Procurement Insights


Two graphics crossed my desk this week. Together, they reveal something the procurement technology ecosystem doesn’t want to talk about.

The first is a beautifully designed timeline titled “The Evolution of Procurement Technology.” It maps three decades of vendor history: FreeMarkets founded in 1995, Ariba in 1996, Coupa in 2006. Acquisitions, funding rounds, IPOs. The startup revolution. The suites era. The emerging tech era.

It’s comprehensive. It’s well-designed. It documents the supply side of procurement technology with precision.

The second graphic is titled “The Cost of Ungoverned Technology: Four Decades of Procurement Transformation Failures.” It maps the same three decades — but from the demand side. ERP era: 75% failure rate, $150B+ wasted. e-Procurement era: 70% failure rate, $200B+ wasted. Digital transformation era: 70-84% failure rate, $900B wasted annually. AI era: 80%+ projected, $2.3T+ and counting.

Cumulative global waste across all technology cycles: $3.5 trillion.

Same timeframe. Same industry. Completely different stories.

One celebrates what was built. The other measures what it cost.


The Question No One Is Asking

When you place these two graphics side by side, a question emerges that the procurement technology ecosystem has spent thirty years avoiding:

Does the first graphic have any impact on the second?

The evolution of procurement technology documents who founded what, who acquired whom, who raised funding, and who went public. It’s the story vendors tell at conferences. It’s the story analysts track in their reports. It’s the story DPW celebrates on stage.

But here’s what that story doesn’t include:

  • How many organizations that implemented these technologies actually succeeded?
  • What percentage achieved the outcomes they were promised?
  • Did the innovation translate into value—or just into vendor valuation?

The first graphic is journalism. It records what happened on the supply side.

The second graphic is accountability. It records what happened to the organizations that bought what the supply side was selling.

For practitioners, only one of these matters.


The Analyst Grand Slam

This week, I was introduced to the Analyst “Grand Slam.” Did you know there was such a thing? It consists of:

  • Forrester Wave™ Leader (Supplier Value Management)
  • IDC MarketScape Leader
  • Gartner Customers’ Choice (based on peer insights)
  • Gartner Magic Quadrant™ Leader

Four accolades. Four analyst validations. A rare achievement.

But here’s the question that list doesn’t answer:

Of the organizations that implemented a “Grand Slam” vendor’s platform in the last five years, what percentage achieved the outcomes they were promised?

What the Grand Slam Measures           What It Doesn’t Measure
Vendor capability                      Buyer success
Analyst ranking                        Implementation outcomes
Feature completeness                   Organizational readiness
Customer satisfaction with vendor      Customer achievement of business case
Market positioning                     Value realization

The Grand Slam is four different ways of evaluating the supply side. It says nothing about the demand side.


The Uncomfortable Pattern

Let me be clear: analyst recognition is hard-won. Vendors that achieve leadership positions have built serious platforms with real capabilities. The analysts who evaluate them do rigorous work.

But here’s the pattern no one wants to name:

The 80% implementation failure rate has persisted across every technology era — regardless of which vendors were leading the analyst rankings.

  • In the 1990s, SAP and Oracle dominated the ERP market. Failure rate: 75%+.
  • In the 2000s, Ariba led the e-Procurement market. Failure rate: 70%+.
  • In the 2010s, Coupa rose to prominence. Failure rate: 70-84%.
  • In the 2020s, AI-native platforms are ascending. Projected failure rate: 80%+.

The vendors change. The analyst leaders change. The technology changes.

The failure rate doesn’t.

This tells us something important: the constraint isn’t vendor capability. If it were, better vendors would produce better outcomes. They don’t — at least not at the aggregate level.

The constraint is something the analyst rankings don’t measure: organizational readiness to govern what they adopt.


What Analyst Rankings Actually Evaluate

To be fair to the analysts, let’s be precise about what their frameworks measure:

Gartner Magic Quadrant™:

  • Completeness of vision
  • Ability to execute
  • Market understanding
  • Product/service capabilities

Forrester Wave™:

  • Current offering
  • Strategy
  • Market presence

IDC MarketScape:

  • Capabilities
  • Strategies
  • Market success factors

Gartner Peer Insights:

  • Customer ratings of vendor experience
  • Willingness to recommend

These are rigorous evaluations of vendor capability and market positioning. They help buyers understand which vendors have strong products and clear strategies.

What they don’t evaluate:

  • Whether buyers are ready to implement what they’re buying
  • Whether the buying organization has the governance structures to succeed
  • Whether the promised outcomes are achievable given organizational constraints
  • What percentage of implementations actually deliver the business case

Analyst rankings evaluate how good the car is. They don’t measure how many drivers crashed.


The DPW Responsibility

Digital Procurement World has become the premier gathering for the procurement technology ecosystem. They convene vendors, practitioners, analysts, and investors. They celebrate innovation. They document the evolution.

Their timeline graphic — “The Evolution of Procurement Technology” — is a perfect example of this role. It’s well-researched, visually compelling, and historically accurate.

But it raises a question about purpose.

If DPW’s mission is to advance procurement, do they have a responsibility to track not just who’s building, but whether what’s being built is working?

Consider what DPW currently celebrates:

  • Vendor funding rounds
  • Product launches
  • Acquisitions and exits
  • Analyst rankings

Consider what DPW could measure:

  • Implementation success rates by vendor
  • Time-to-value benchmarks
  • Outcome achievement percentages
  • Failure pattern analysis

The first list documents the supply side. The second list would serve the demand side.

The evolution of procurement technology matters only if it’s improving outcomes — not just offerings.


The Competition of Purpose

This brings us to the core distinction.

The procurement technology ecosystem has built an elaborate infrastructure for celebrating design:

  • Beautiful vendor timelines
  • Prestigious analyst quadrants
  • Award ceremonies at conferences
  • Funding announcements and IPO celebrations

All of this evaluates capability, positioning, and market success.

But capability is not purpose. Market success is not buyer success.

The competition that matters isn’t design. It’s purpose.

  • Design asks: How good is this technology?
  • Purpose asks: Does this technology achieve outcomes?
  • Design asks: Which vendor is leading?
  • Purpose asks: Which implementations are succeeding?
  • Design asks: What features does this platform have?
  • Purpose asks: Can this organization govern what this platform reveals?

The procurement technology ecosystem has been optimized for design. It has not been optimized for purpose.

$3.5 trillion in cumulative waste is the evidence.


What “Leadership” Should Mean

Over the past few weeks, several vendors have asked an important question: How should leadership in this new era be defined?

Here’s my answer:

Leadership should be defined by implementation success, not analyst ranking.

A vendor that achieves 80% implementation success with modest analyst rankings is more valuable to practitioners than a vendor that achieves analyst leadership with industry-average failure rates.

A conference that tracks outcomes is more valuable than one that celebrates funding rounds.

An ecosystem that measures demand-side outcomes is more valuable than one that documents supply-side evolution.

This isn’t a criticism of analysts, vendors, or conferences. They’re doing what they’ve always done — evaluating and celebrating capability.

It’s a call for a new metric: Did it work?


The Corvette in the Lake

Google’s Senior Product Manager Patrick Marlow put it bluntly in an exchange we had back in 2024:

“Imagine buying a new Corvette, running it into the lake, then blaming Chevy for it not being able to float. Replace Corvette/Chevy with LLM/any model provider, and that’s exactly what is happening across the industry. People are attempting to deliver projects with tech they haven’t taken the time to truly understand.”

This is the dynamic the analyst rankings miss entirely.

The quadrants evaluate the Corvette — horsepower, handling, design, market positioning. They don’t evaluate whether the buyer knows how to drive, whether the roads are ready, or whether someone is about to aim it at a lake.

The vendor built a capable car. The implementation drove it into the water. The failure rate stays at 80%.

Marlow’s observation from inside Google confirms what the $3.5 trillion in waste already tells us: the constraint isn’t the technology. It’s the organizational readiness to use it.

(For the full exchange: An Important Exchange on AI with Google’s AI Champion)


The Practitioner’s Question

If you’re a CPO preparing to approve a $15 million AI implementation, the analyst rankings tell you which vendors have strong capabilities.

They don’t tell you:

  • What percentage of organizations like yours succeeded with this vendor
  • What governance structures were present in successful implementations
  • What readiness gaps caused failures in similar organizations
  • Whether your organization is prepared to metabolize what this technology will reveal

Before you buy, ask the question the ecosystem isn’t asking:

“What’s this vendor’s implementation success rate — and what distinguished the successes from the failures?”

If the vendor can’t answer, that’s a signal.

If the analyst can’t answer, that’s a gap.

If the ecosystem can’t answer, that’s a $3.5 trillion problem.


The Two Timelines

The Evolution of Procurement Technology is a supply-side history. It documents what was built.

The Cost of Ungoverned Technology is a demand-side history. It documents what it cost.

The first timeline only matters if it bends the second.

Thirty years of vendor innovation, analyst evaluation, and ecosystem celebration have not changed the fundamental outcome: 70-80% of implementations fail to deliver promised value.

The technology keeps advancing. The failure rate holds steady.

This tells us the constraint isn’t technology. It isn’t vendor capability. It isn’t analyst rigor.

The constraint is organizational readiness — and no one is measuring it.


A Different Kind of Leadership

To the vendors, analysts, and consultants who ask me what leadership looks like in the new AI era:

Here’s what I believe leadership looks like:

Leadership is a vendor that publishes its implementation success rate, not just its analyst ranking.

Leadership is a conference that tracks outcome achievement — not just attendance and funding announcements.

Leadership is an analyst who evaluates organizational readiness — not just vendor capability.

Leadership is a CPO who asks, “Are we ready to govern this?” — not just “Which vendor should we buy?”

Leadership is an ecosystem that measures purpose — not just design.

The procurement technology industry has spent thirty years building better cars. It’s time to start measuring how many drivers succeed.


The Bottom Line

Two graphics. Same timeframe. Different stories.

One celebrates what was built. The other measures what it cost.

The first graphic matters only if it affects the second.

Until the ecosystem starts measuring implementation success with the same rigor it applies to analyst rankings, we will continue to see:

  • Beautiful vendor timelines
  • Prestigious analyst quadrants
  • Grand Slam recognitions
  • And $900 billion in annual transformation waste

It’s not a competition of design. It’s a competition of purpose.

And purpose is measured by outcomes — not accolades.


Jon Hansen is the creator of The Hansen Method® and founder of Hansen Models™, helping organizations prevent the 80% implementation failure rate through Phase 0™ readiness assessment.

-30-

Posted in: Commentary