“The Moment You Turn the Hansen Fit Score™ Into a Vendor Sales Tool, You Become Gartner”

Posted on March 4, 2026



Procurement Insights — Jon Hansen | March 2026


I had a conversation with my advisory team this week that I want to share, because it gets to the core of something the procurement industry doesn’t talk about openly.

Someone asked me — not for the first time — why we don’t monetize the Hansen Fit Score™ through the vendor side of the table. The logic is straightforward: ProcureTech providers have marketing budgets, short decision cycles, and a standing need for qualified leads and independent validation. A vendor will write a check in days. A practitioner takes weeks or months, navigating internal approval chains that can stall indefinitely.

After 42 years in this industry, I can tell you the economics are real. If we offered vendors a pay-for-validation model — or even a lead-generation service built on HFS data — the revenue would come faster and more predictably than anything the practitioner side produces.

We said no. And I want to explain why — not to congratulate ourselves, but because the reasoning matters to anyone who consumes analyst research.


The Structural Problem Nobody Discusses

The procurement technology analyst ecosystem runs predominantly on vendor revenue. This is not a secret. It is also not, in itself, a scandal. Analyst firms provide genuine expertise, and vendors pay for access to that expertise and the market visibility that comes with it.

But incentive structures have consequences — not because anyone is corrupt, but because funding sources shape what gets measured.

When the entity paying for the assessment has a commercial interest in the outcome, the instrument gravitates — over time, incrementally, often invisibly — toward measuring what the funder is good at. Technology capability. Feature breadth. Market execution. Vision completeness. These are real dimensions, and they are measured well.

What doesn’t get measured is everything that determines whether the practitioner succeeds after the purchase order is signed. Organizational readiness. Behavioral alignment. Governance maturity. Process absorption capacity. Outcome accountability.

Not because these dimensions are unimportant. Because the people funding the research have no commercial incentive to surface them.


The Math That Explains the Gap

The Hansen Fit Score™ framework measures four numbers. Traditional analyst models measure one.

That one number — technology capability — is where most vendors score well. A 7.5 out of 10. The platform is advanced. Everyone agrees.

But when you add the other three numbers — the HFS composite across all readiness dimensions (4.3), the minimum practitioner readiness score required for success (7.0+), and the typical practitioner score without a readiness diagnostic (4.5–5.5) — the arithmetic tells a different story.

Put simply: when vendor delivery strength is roughly 4.3 and typical organizational readiness is roughly 5.0, both components fall short of the 7.0 threshold, so no weighting of the two can lift the effective implementation capacity above it unless readiness is raised before contracting. That's not ideology. It's a structural constraint.
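As a rough illustration (this is not the published HFS formula, which the framework itself defines; the combining rule and function names here are assumptions for the sketch), the constraint can be expressed as a simple weakest-link gate on the two components the paragraph above cites:

```python
# Illustrative sketch only: the HFS combining formula is not published here.
# Assumption: effective capacity is bounded by the weaker of the two inputs.

READINESS_THRESHOLD = 7.0  # minimum practitioner readiness cited in the post

def effective_capacity(vendor_delivery: float, org_readiness: float) -> float:
    """Weakest-link rule: capacity cannot exceed the weaker component."""
    return min(vendor_delivery, org_readiness)

def ready_to_contract(vendor_delivery: float, org_readiness: float) -> bool:
    """True only when the effective capacity clears the 7.0 threshold."""
    return effective_capacity(vendor_delivery, org_readiness) >= READINESS_THRESHOLD

# Figures from the post: ~4.3 delivery strength, ~5.0 typical readiness.
print(effective_capacity(4.3, 5.0))   # 4.3
print(ready_to_contract(4.3, 5.0))    # False
# Even a strong platform score (7.5) does not clear the gate on its own:
print(ready_to_contract(7.5, 5.0))    # False
```

Under this (assumed) rule, a high technology-capability score never compensates for low readiness, which is exactly why a one-number model cannot predict the outcome.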

The one-number model doesn’t predict this. It can’t. It wasn’t designed to. It was designed to answer the question the funder is asking: “How does our platform compare?” Not the question the practitioner needs answered: “Will this work here, and how do we know?”


Why We Were Tempted

I’ll be honest about this. The practitioner side of the market is slow. Decision-making involves committees, budget cycles, internal politics, and institutional caution that can delay a $15,000 diagnostic engagement for months. Meanwhile, a vendor marketing head can approve the same amount in a single meeting.

Practitioners wear belts and suspenders. Vendors move at the speed of pipeline.

There is a version of Hansen Models™ that monetizes through the vendor side — offering lead generation, paid validation, sponsored assessments, or priority placement in the HFS framework. That version would generate revenue faster. It would also, within 18 to 24 months, produce scores that look increasingly like what the vendors want to hear rather than what the practitioners need to know.

That’s not a moral judgment. It’s a structural prediction based on 27 years of watching it happen to others.


Why We Said No

I’ve yet to see another framework in our space that measures all of those dimensions — not just technology capability, but implementation ecosystem, organizational readiness, and outcome accountability — without revenue dependency on any vendor assessed.

If vendors want to use the HFS internally to improve their delivery ecosystem, we welcome it — but they don’t get to fund the score.

That independence is not a marketing position. It is the mechanism that makes the scores predictive. The moment the funding source shifts from the people who bear the implementation risk to the people who sell the technology, the instrument stops measuring what matters and starts measuring what sells.

The 80% failure rate has persisted across seven technology eras, four waves of analyst methodology, and billions of dollars in vendor-funded research. The one thing that hasn’t been tried at scale is an assessment model funded entirely by the people whose careers depend on the outcome.

That’s what we’re building. It’s slower. It’s harder. And it’s the only version that can’t be compromised by the economics that compromised everything before it.


The Practitioner’s Responsibility

I want to say one more thing, because this isn’t only an analyst problem.

Practitioners bear some responsibility for the system they’re in. The institutional caution that makes practitioner-funded models difficult to build is the same caution that allows vendor-funded models to dominate unchallenged. When the procurement function takes months to approve a $15,000 readiness diagnostic but greenlights a $2 million implementation without an independent readiness score, the incentive structure that favors vendor-funded research isn’t just an analyst choice — it’s a market response to practitioner behavior.

The industry gets the analyst model it’s willing to pay for.

If practitioners want instruments that predict outcomes rather than rank capabilities, they need to fund them. Not because it’s fair. Because it’s the only way the math works.


Where This Leaves Us

We’re not going to tell you which vendor is “number one.” We’re going to tell you whether your organization is ready to succeed with the vendor you’re considering — and what specific gaps need to close before you sign.

That assessment has value precisely because no vendor paid for it. The day that changes, the scores stop meaning what they mean.

Some things are worth building slowly.



The Hansen Fit Score™ framework, Phase 0™ readiness diagnostic, and RAM 2025™ validation methodology are available through Procurement Insights. No vendor sponsorship. No referral fees. No paid placements.

-30-
