Gartner and Forrester Tell You Which Vendor Is Best. But Best for Whom?

Posted on December 1, 2025


Look at these two images: the Forrester Wave and the Gartner Magic Quadrant — the industry’s gold standards for evaluating procurement technology vendors.

Now, look at the Hansen Fit Score staircase.

What’s the difference?

The first two rank the instrument. The third measures the musician.


What Gartner and Forrester Measure

Both frameworks evaluate vendors on two dimensions:

  • Forrester: “Current Offering” vs “Strategy”
  • Gartner: “Ability to Execute” vs “Completeness of Vision”

They plot vendors against each other. Leaders land in the upper right; Challengers, Contenders, and Niche Players fall elsewhere.

The question they answer: “Which vendor is strongest?”

Useful for discovery. Silent on deployment success.


What’s Missing

You.

Neither framework asks:

  • What is your current organizational readiness?
  • Do you have the governance maturity to absorb this tool?
  • Is your data clean enough to feed it?
  • Can your stakeholders align around the change?
  • Are you structurally capable of extracting value from a “Leader”?

A company at HFS 58 buying Coupa — a “Leader” in both quadrants — isn’t buying a leader. It’s buying shelfware with a prestigious logo.


The Hansen Fit Score Difference

The HFS staircase flips the lens entirely.

Instead of asking “which vendor is best?”, it asks: “What is my organization ready for?”

  • Entry-Level (HFS 55-62): Precoro, Procurement Express, Tradogram, Compleat
  • Mid-Market Suites (HFS 60-66): Procurify, Planergy, Prokuria, Vroozi, Fraxion, ProcureDesk, Onventis, Medius
  • High-Velocity (HFS 67-72): Ramp, Airbase, Payhawk, Spendesk, PayEm, Flowie

The axis isn’t vendor capability. It’s readiness required.

Your ceiling is determined by your readiness — not your ambitions.


Static vs Dynamic

Here’s another critical difference.

Gartner and Forrester assign a vendor score once and apply it universally. Coupa is a “Leader” whether you’re a Fortune 500 with mature governance or a mid-market company still running procurement on spreadsheets.

The Hansen Fit Score is dynamic. It shifts based on:

  • Implementation scope: Deploying just P2P? HFS 65 might work. Going full-stack S2P — sourcing, contracts, supplier management, spend analytics? That minimum jumps to 80-90.
  • Organizational context: Same vendor, different scope, different readiness requirement.

This came up in a recent exchange with James Meads, who publishes excellent mid-market P2P vendor lists. He asked why I placed Onventis and Medius in the Mid-Market bucket when they’re more comprehensive S2P suites.

The answer: It depends on what the client is implementing.

If a mid-market organization is deploying just the P2P layer — requisitions, approvals, invoices — an HFS of 65 can work. But if they’re going full stack? That minimum jumps to 80-90. Same vendor. Different scope. Different readiness requirements.

That’s what static frameworks miss.
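The scope-based rule above can be expressed as a simple lookup. This is an illustrative sketch only — the thresholds come straight from the article’s examples (P2P-only at roughly 65, full-stack S2P at 80-90), and the function name and structure are my own invention, not part of any official HFS tooling:

```python
# Minimum HFS readiness required per implementation scope, using the
# article's example thresholds (hypothetical values, not an official API).
SCOPE_MINIMUMS = {
    "p2p": 65,        # requisitions, approvals, invoices only
    "full_s2p": 80,   # sourcing, contracts, supplier management, analytics
}

def readiness_gap(hfs_score: int, scope: str) -> int:
    """Return how far an organization's HFS score falls short of the
    minimum for the chosen scope. Zero means the scope is within reach."""
    required = SCOPE_MINIMUMS[scope]
    return max(0, required - hfs_score)

# Same vendor, same organization (HFS 65), different scope:
print(readiness_gap(65, "p2p"))       # 0  -> ready for the P2P layer
print(readiness_gap(65, "full_s2p"))  # 15 -> not ready for full S2P
```

The point of the sketch: the vendor never changes in this calculation. Only the scope does, and that alone flips the answer from “ready” to “not ready” — exactly the dynamic that a one-time quadrant placement cannot capture.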


The 80% Failure Rate Exposed

The industry’s open secret: 70-80% of enterprise technology transformations fail to deliver expected value.

Not because the software is bad. Coupa works. SAP Ariba works. GEP works.

They fail because:

  • Processes weren’t aligned
  • Governance wasn’t stable
  • Data wasn’t ready
  • Change fatigue was misdiagnosed
  • Stakeholder alignment wasn’t there

A procurement system amplifies your current state. If the state is messy, the platform becomes messy.

Kroger just did a $2.6 billion write-down on robotic fulfillment centers. The robots worked fine. The organizational readiness didn’t.

I documented this exact pattern in a 2008 white paper. Hershey, HP, Cadbury, FoxMeyer — hundreds of millions lost. Seventeen years later, the pattern hasn’t changed. Only the dollar amounts have.


Two Questions, Two Altitudes

Gartner and Forrester answer: “Which vendor should I consider?”

The Hansen Fit Score answers: “Am I ready to succeed with what I’m considering?”

The first question operates at the vendor evaluation layer.

The second operates upstream — at the readiness layer that determines whether any of those “Leaders” will actually deliver value.

One is about discovery. The other is about deployment success.


The Bottom Line

“Leaders” fail at the same rate as everyone else when readiness isn’t there first.

Before you book the next demo, before you run the next RFP, before you shortlist from any quadrant or wave — ask the question nobody else is asking:

What’s my Hansen Fit Score?

Your readiness determines your ceiling — not your ambitions.

Choosing a tool above your readiness band increases failure probability by 60-80%.

Failure is a choice. Choose wisely.


What do you think? Why does the industry need a practitioner-centric framework alongside the vendor-centric ones?

#procurement #procuretech #HansenFitScore #digitalprocurement #transformation #Gartner #Forrester #readinessfirst

Posted in: Commentary