It Doesn’t Measure What You Think It Measures: What A Market Dojo Co-Founder’s Comment Reveals About How We Evaluate Procurement Technology

Posted on February 20, 2026

SHORT VERSION FOR BUSY EXECUTIVES

A Market Dojo co-founder assumed the Hansen Fit Score™ measures pricing, features, and usability — the same things every other assessment measures. It doesn’t. The HFS™ measures the five structural conditions that determine whether a technically capable platform actually delivers outcomes: ownership stability, outcome evidence, leadership continuity, geographic concentration, and integration risk. Nearly half the vendors on a recent “best S2C software” list couldn’t clear the evidentiary threshold for a full HFS™ structural risk assessment. Meanwhile, PE-owned ProcureTech CEO tenure has collapsed to 1.8 years — below the minimum sponsorship window needed to see an implementation through. Market Dojo’s three co-founders have been in place for 16 years. That’s not a testimonial. It’s a scored data point — and the kind of data point that no feature comparison, analyst quadrant, or AI-generated readiness matrix will ever surface. The 80% implementation failure rate doesn’t live inside capability gaps. It lives inside the structural conditions nobody else measures.
Earlier this week, James Meads published his “11 Best Source to Contract Software for Mid-Market (2026)” article on entproc.com. It’s a useful list. I said so publicly in the discussion thread and I mean it — feature comparisons have a place in the evaluation process.
What happened next in that thread, though, is why I’m writing this post.
After I shared a preliminary Hansen Fit Score™ assessment of the 11 vendors — including a full longitudinal overlay for Market Dojo — Nick Drewe, co-founder of Market Dojo, weighed in with this observation:
“The Hansen Fit Score – a new one to me! I think that’s essentially the process that buyers of procuretech undertake: ‘[insert buyer name] Fit Score’, albeit the criteria change based on the buyer… It seems the assessments here focus on pricing, implementation speed and usability.”
Nick is someone I’ve respected since December 2013, when Procurement Insights first began covering Market Dojo. He’s an engineer, a co-founder who has stayed in place for 16 years, and someone who understands procurement technology from the inside out.
He’s also wrong about what the Hansen Fit Score™ measures. And that misunderstanding — which is not his alone — is exactly the problem the HFS™ was built to solve.
The Assumption Everyone Makes
Nick assumed the HFS™ evaluates the same things that every other scoring framework evaluates: pricing, features, implementation speed, usability. The things you find in a Gartner Magic Quadrant. A Forrester Wave. A Spend Matters SolutionMap. An IDC MarketScape. Or, for that matter, in James Meads’ article.
That assumption is natural because those are the only kinds of assessments the procurement technology industry has ever produced. Every framework currently available to buyers measures some variation of what a platform can do.
The Hansen Fit Score™ doesn’t measure what a platform can do.
It measures whether it will actually work.
Different Question, Different Dimensions
Here are the five dimensions the HFS™ scores; notice what’s absent from traditional measurement frameworks (a schematic sketch in code follows the list):
Ownership Stability — Who owns the vendor? How has ownership changed? Is the company subject to PE exit timelines, acquisition integration pressure, or strategic redirection by a parent entity?
Outcome Evidence — Not vendor-reported ROI claims. Independently verified implementation outcomes with named organizations, measurable results, and third-party corroboration.
Leadership Continuity — Are the people who built the platform still running it? Has the executive team turned over? Founder-led companies with stable leadership produce measurably different implementation outcomes than companies on their third CEO in five years.
Geographic Concentration — Where does the vendor actually operate versus where they claim to operate? A platform with 90% of its customer base in one region presents different implementation risk for a buyer outside that region than its marketing materials suggest.
Integration/Platform Risk — Is the vendor mid-acquisition? Mid-integration? Being absorbed into a parent platform? Is the product you’re evaluating today the same product you’ll be running in 18 months?
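For readers who think in code, here is a minimal sketch of what an assessment record built on those five dimensions might look like. The dimension names come straight from the list above; the per-dimension scale, the equal weighting, and the field layout are illustrative assumptions of mine, not the actual HFS™ rubric.

```python
from dataclasses import dataclass

# Illustrative sketch only: the five dimension names come from the list above,
# but the per-dimension scale, equal weighting, and field layout are my
# assumptions, not the actual HFS(TM) rubric.
@dataclass
class StructuralAssessment:
    vendor: str
    ownership_stability: float        # PE exit timelines, parent redirection
    outcome_evidence: float           # independently verified outcomes only
    leadership_continuity: float      # founder/executive tenure and turnover
    geographic_concentration: float   # where the customer base actually sits
    integration_platform_risk: float  # mid-acquisition / absorption exposure

    def composite(self) -> float:
        """Equal-weighted composite; the real HFS(TM) weighting is not public."""
        dims = [
            self.ownership_stability,
            self.outcome_evidence,
            self.leadership_continuity,
            self.geographic_concentration,
            self.integration_platform_risk,
        ]
        return sum(dims) / len(dims)
```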
You’ll notice there’s no “features” dimension. No “UI/UX” score. No “pricing competitiveness” rating. Those things matter — but they’re already measured by everyone else. What nobody else measures are the structural conditions that determine whether a technically capable platform actually delivers outcomes in a specific organizational environment.
That’s the gap. That’s what the 80% implementation failure rate lives inside.
The Market Dojo Proof Point
To demonstrate this, I took James Meads’ 11-vendor list and ran every vendor through a preliminary HFS™ assessment using only independently verifiable evidence — no vendor self-reporting, no analyst briefings, no marketing materials.
The first finding was the most important: 45% of the vendors on the list couldn’t be scored at all. Five of eleven have insufficient independently verified outcome evidence and limited or no longitudinal coverage from established procurement analysts to support a full HFS™ assessment. Under the HFS™ methodology, that’s not a neutral gap. It’s a potential elevated risk signal.
A buyer relying solely on Meads’ article would have no way to distinguish between vendors with deep evidence trails and vendors with none. The article treats all eleven as comparable options differentiated by features. The HFS™ reveals that nearly half of them can’t even clear the evidentiary threshold for a full assessment.
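In code terms, the evidentiary gate works roughly like the sketch below. The inputs and cutoff values are placeholders for illustration, not the published HFS™ thresholds:

```python
# Hedged sketch of the evidentiary gate described above. The inputs (count of
# independently verified outcomes, years of longitudinal analyst coverage) and
# the cutoff values are placeholders of mine, not the published HFS(TM) thresholds.
def evidentiary_gate(verified_outcomes: int, coverage_years: float,
                     min_outcomes: int = 1, min_years: float = 2.0) -> str:
    if verified_outcomes < min_outcomes or coverage_years < min_years:
        # An unscorable vendor is a finding in itself, not a neutral gap.
        return "UNSCORABLE: potential elevated-risk signal"
    return "Eligible for full HFS assessment"
```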
But the more powerful demonstration comes from Market Dojo — because it’s one of the vendors on the list for which I could conduct a full HFS™ assessment.
What 12 Years of Independent Evidence Reveals
Procurement Insights has covered Market Dojo independently since December 2013. That coverage includes:
The Year in the Life Series (2014) — Market Dojo was selected as a New Wave Company candidate, with Dragon’s Den-style expert assessment of their market positioning, methodology, and growth trajectory.
Industry Collaboration (2014–2019) — Co-authored content with Market Dojo’s co-founder Alun Rafique on auction adoption variables. Joint webinar series on cloud-based procurement transformation. Multiple longitudinal touchpoints tracking the company’s evolution from bootstrapped startup to established platform.
The CPO Arena Panel (November 2020) — Five sitting CPOs evaluated Market Dojo’s platform live. Unanimous “worthy of a look” recommendation. Panel included Canda Rozier, who is in this very discussion thread. Michael Cadieux noted surprise at how well the engineers had executed on the UI. Hervé Legenvre highlighted client autonomy and response-time customer service.
The Logitech Case Study (2020) — A “Making The Case” investigation — not a vendor-supplied reference, but an independently conducted interview examining how Logitech repatriated procurement using Market Dojo’s platform.
RAM 2025™ Multi-Model Assessment (July 2025) — Level 1 assessment across the three Hansen Fit Model dimensions: Metaprise, Agent-Based, and Strand Commonality, with benchmarking against ORO Labs, Zip, Ivalua, Zycus, and SAP Ariba. That assessment placed Market Dojo’s HFS (3D) score in the 8–9 range on a 10-point scale.
Here’s why that matters for the current assessment: the July 2025 HFS (3D) evaluates three capability dimensions. The current February 2026 assessment is an HFS (5D) — a more stringent framework that adds two structural risk dimensions: Ownership Stability and Integration/Platform Risk. Those are precisely the dimensions where PE ownership, acquisition integration pressure, and leadership turnover live. The 5D is harder to score well on because it measures conditions that pure capability assessments don’t capture.
Market Dojo’s HFS (5D) Composite of 5.6 on an 8-point scale reflects that added rigour. And the Capability-to-Outcome Gap of 2.2 — where lower is better — is the scale-inverted counterpart of the July 2025 HFS (3D) score of 8–9, where higher is better. Two independent assessments, seven months apart, using different measurement frameworks with different scales, arriving at the same conclusion about Market Dojo’s underlying strength. That’s not coincidence. That’s what methodological consistency looks like when the underlying evidence base is stable.
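A quick back-of-envelope check makes the convergence visible. Caveat: the normalization approach, and the assumption that the Gap runs on a 10-point scale, are mine rather than part of the HFS™ methodology:

```python
# Back-of-envelope check of that convergence claim. The normalization method,
# and the assumption that the Capability-to-Outcome Gap runs on a 10-point
# scale, are mine; they are not part of the published methodology.
def normalize_higher_better(score: float, scale_max: float) -> float:
    return score / scale_max

def normalize_lower_better(score: float, scale_max: float) -> float:
    return 1.0 - score / scale_max

hfs_3d = normalize_higher_better(8.5, 10.0)  # July 2025: midpoint of the 8-9 range
gap_5d = normalize_lower_better(2.2, 10.0)   # Feb 2026: assumed 10-point gap scale
print(f"3D normalized: {hfs_3d:.2f} | inverted 5D gap: {gap_5d:.2f}")
# 0.85 vs 0.78: the same neighbourhood, which is the directional consistency
# described above, not an exact numeric identity.
```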
None of this information appears in James Meads’ article. None of it would surface in a feature comparison, a Magic Quadrant, or a SolutionMap. It exists because someone tracked the company longitudinally, across multiple assessment formats, for over a decade.
The Visual Story
The graphic below shows what that evidence does to the assessment.
The ghosted blue dot is Market Dojo’s position based solely on what James Meads’ article provides — sitting in the Systemic Implementation Risk zone at HFS 4.8, Gap 3.5.
The green dot is where 12 years of longitudinal evidence places them — crossing into the Manageable Risk zone at HFS 5.6, Gap 2.2.
The arrow between them is the value gap between a feature list and an evidence-based assessment. That delta — HFS +0.8, Gap narrowing by 1.3 points — is entirely attributable to evidence that no feature comparison, analyst quadrant, or solution map would surface.
Same vendor. Completely different risk picture.
And Here’s Where It Gets Honest
Even with 12 years of positive evidence, the full HFS™ assessment includes a PE FLAG — because the structural landscape has shifted in ways that matter.
Esker, Market Dojo’s parent company, was delisted from Euronext on March 3, 2025, following a €1.62 billion acquisition by Bridgepoint and General Atlantic. Esker is now PE-owned. Market Dojo’s four-year standalone operating period — negotiated as part of the 2022 acquisition — expires around Q1 2026. Full absorption is imminent.
This means a buyer choosing Market Dojo today isn’t buying from the bootstrapped, founder-led company that my CPO Arena panel evaluated in 2020. They’re buying from a subsidiary of a PE-owned entity navigating new ownership dynamics, with the product roadmap increasingly driven by Esker’s enterprise suite strategy rather than Market Dojo’s self-service DNA.
Does that make Market Dojo a bad choice? No. The evidence base remains strong. The founders are still in place. The platform has delivered verified outcomes.
But it means the risk profile has changed — and a buyer needs to understand what they’re actually buying into. No feature comparison will tell you that. No Magic Quadrant will flag it. The HFS™ does, because ownership stability is one of the five things it actually measures.
What Nick Got Right
Here’s the thing — Nick’s comment wasn’t entirely off base. He said the assessments seem to overlook “the customer service aspect” and “the people behind it.”
He’s right that people matter. In fact, Leadership Continuity is one of the five HFS™ dimensions — and Market Dojo scores well precisely because all three co-founders (Alun Rafique, Nick Drewe, and Nicholas Martin) are still in their roles after 16 years. That’s exceptional in procurement technology. It’s one of the reasons the full assessment moves them into Manageable Risk territory.
The chart below puts that in context.
Every line on this chart tells the same story — leadership tenure across the industry is falling. S&P 500 CEO tenure has dropped from 9.5 years in 2000 to 7.1 today. Functional C-suite roles (CMO, CPO, CTO) have fallen below 4 years. And PE-owned ProcureTech CEOs have collapsed to an average of 1.8 years — deep inside what Prosci’s change management research identifies as the Sponsorship Impossible Zone, where leadership doesn’t stay long enough to see an implementation through to outcome.
And then there’s the green line.
Market Dojo’s three co-founders have been in place since 2010. Through bootstrapping. Through the Esker acquisition in 2022. Through Esker’s own delisting and PE takeover in 2025. Through the approaching expiration of their four-year standalone operating period. Every structural transition that typically triggers leadership turnover in procurement technology — and all three founders are still there.
That’s not a testimonial. It’s a scored, longitudinal data point. Leadership stability leads to vision stability. Vision stability leads to execution capability. And execution capability is what converts platform features into implementation outcomes. Nick was right that people matter. The HFS™ just measures it instead of assuming it. Of course, the same stability on the practitioner side of the table is equally important for initiative success, which is addressed in the HFS™ practitioner score matrix.
So Nick intuitively identified a dimension the HFS™ already measures. The difference is that the HFS™ doesn’t treat “good people” as a subjective impression — it tracks leadership stability as a scored, longitudinal data point with predictive value for implementation outcomes.
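Before widening the lens, here is the sponsorship-window test in miniature, assuming (my assumption, not a Prosci-published number) that a mid-market S2C implementation needs roughly two to three years of stable sponsorship:

```python
# The sponsorship-window test in miniature. The 1.8-year tenure average comes
# from the chart discussion above; the 2-to-3-year implementation window is my
# reading of the Prosci framing, not a Prosci-published constant.
SPONSORSHIP_WINDOW_YEARS = (2.0, 3.0)

def sponsorship_risk(avg_leader_tenure_years: float) -> str:
    low, high = SPONSORSHIP_WINDOW_YEARS
    if avg_leader_tenure_years < low:
        return "Sponsorship Impossible Zone: leadership unlikely to outlast the rollout"
    if avg_leader_tenure_years < high:
        return "Marginal: sponsorship may not survive a multi-phase deployment"
    return "Adequate sponsorship horizon"

print(sponsorship_risk(1.8))   # PE-owned ProcureTech CEO average cited above
print(sponsorship_risk(16.0))  # Market Dojo's founder tenure
```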
But the industry-wide view only tells part of the story. To see the pattern at work inside procurement technology specifically, you have to compare Market Dojo against the vendors buyers actually know.
This chart tracks defining leader tenure across three S2C vendors — SAP Ariba, Coupa, and Market Dojo — and the pattern is unmistakable.
SAP Ariba: Keith Krach co-founded Ariba in 1996, took it public in 1999, and departed in 2001 after just five years. What followed was three CEOs in six months. Robert Calderoni stabilized the company and built a genuine 10-year run — then SAP acquired Ariba for $4.3 billion in 2012, and the role collapsed into rotating General Managers with no founder connection to the original vision. Ariba has operated inside the Sponsorship Impossible Zone ever since.
Coupa: This is the most dramatic line on the chart. Rob Bernshteyn wasn’t a founder — Dave Stephens and Noah Eisner founded Coupa in 2006 and stepped aside by 2009. But Bernshteyn became the defining leader, building a 14-year tenure that included an IPO, category creation, and a genuine culture. Then Thoma Bravo completed its $8 billion acquisition in February 2023. Three months later, Bernshteyn was gone. The cliff is vertical. Leagh Turner, an external hire, was named CEO in November 2023. That line crashes from 14 years to zero and is now rebuilding from scratch.
Market Dojo: The green line just climbs. Sixteen years. Through bootstrapping, through Esker’s majority acquisition, through Esker’s own delisting and PE takeover, through the approaching expiration of the standalone operating period. Every inflection point that broke leadership continuity at Ariba and Coupa — Market Dojo’s founders absorbed and stayed.
The first chart set Market Dojo’s stability against JAGGAER and the broader leadership-tenure landscape. This one puts it next to two of the biggest names in procurement technology and asks a simple question: when ownership changes, does the leadership that built the vision survive? At Ariba, it lasted five years. At Coupa, fourteen. At Market Dojo — so far — sixteen and counting.
That’s not sentiment. It’s data. And it’s the kind of data that no feature comparison, analyst quadrant, or AI-generated readiness matrix will ever surface.
The Bigger Point
The Hansen Fit Score™ is not a better version of the Magic Quadrant. It’s not competing with Forrester Waves or SolutionMaps. Those frameworks measure capability — and they do it well within their methodological constraints.
The HFS™ measures something those frameworks were never designed to measure: the structural conditions that determine whether capability converts to outcomes.
Think of it this way. A feature evaluation tells you what a car can do — horsepower, torque, fuel efficiency, safety rating. The HFS™ tells you whether the road you’re about to drive it on has been paved, whether the bridge ahead can support the weight, and whether the manufacturer will still be making parts for it in three years. And once again, the HFS™ determines whether the practitioner (the driver) is ready to take the wheel.
Both matter. They answer different questions. And right now, the procurement technology industry has an abundance of answers to the first question and almost no answers to the second.
That’s the gap the Hansen Fit Score™ fills. Not better. Different. And as the 80% implementation failure rate demonstrates — urgently necessary.
And No, AI Can’t Generate Its Way Around This
There’s one more layer to this that’s worth addressing — because I’m increasingly seeing it in practice.
Some procurement teams, recognizing that feature lists aren’t enough, are turning to generative AI to build their own readiness frameworks. On the surface, this looks smart. You prompt ChatGPT with your vendor shortlist and ask it to generate readiness gates, risk matrices, and scoring thresholds. Out comes a professional-looking matrix with percentage gates: “If readiness < 65%, sequence P2P stabilization first. If readiness > 75% and asset governance priority high, consider full deployment.”
It looks like methodology. It has the structure of methodology. But those percentage gates — 65%, 75% — where do they come from? What data set produced those specific thresholds? What longitudinal implementation evidence calibrated those numbers?
The answer is: none. The AI generated plausible-sounding thresholds because the prompt asked for quantitative gates. They’re precise without being accurate — which is arguably more dangerous than having no thresholds at all, because they create false confidence.
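Render those quoted gates literally as code and the problem becomes visible. The function below is hypothetical, but the thresholds are exactly the ones the AI produced:

```python
# The quoted gates, rendered literally. The function is hypothetical, but the
# thresholds are exactly what the AI produced: bare constants with no data set,
# error bars, or calibration history behind them.
def deployment_gate(readiness: float, asset_governance_priority: str) -> str:
    if readiness < 0.65:
        return "sequence P2P stabilization first"
    if readiness > 0.75 and asset_governance_priority == "high":
        return "consider full deployment"
    return "undefined middle band"  # the generated matrix never covered this case

# A buyer at 70% readiness lands in a band the "methodology" never defined.
print(deployment_gate(0.70, "high"))
```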
This is the same problem in a different wrapper. Feature-comparison articles evaluate capability without measuring implementation risk. Analyst quadrants position vendors without verifying outcomes. And now AI-generated frameworks produce readiness gates without any empirical foundation behind the numbers.
The Hansen Method™ readiness thresholds aren’t generated — they’re derived from patterns observed across 42 years of implementations, including the RAM system’s 97.3% delivery accuracy over seven consecutive years with the Department of National Defence. When we set a threshold, it’s because we’ve watched that threshold hold or break across real deployments with real organizations.
And here’s the part that surprises people: the Hansen Method™ does use AI. Extensively. The difference is how.
RAM 2025™ — the analytical engine behind the Hansen Fit Score™ — doesn’t rely on a single AI model generating a framework from its training data. It runs assessments across 5 to 12 models simultaneously, using advanced algorithms to cross-validate findings, identify consensus, and flag divergence. When multiple models independently converge on the same risk signal, that signal carries weight. When they diverge, the divergence itself becomes diagnostic — it tells you where the evidence is ambiguous and where deeper investigation is required.
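In rough code terms, the consensus-and-divergence logic looks something like this sketch. The model count, the divergence threshold, and the choice of statistics are illustrative assumptions; RAM 2025™’s actual algorithms are proprietary:

```python
import statistics

# Minimal sketch of multi-model cross-validation as described above. The model
# count, the divergence threshold, and the use of median and standard deviation
# are illustrative assumptions; RAM 2025(TM)'s actual algorithms are proprietary.
def cross_validate(model_scores: list[float], divergence_threshold: float = 1.0):
    consensus = statistics.median(model_scores)
    spread = statistics.stdev(model_scores)
    if spread > divergence_threshold:
        # Divergence is diagnostic: it marks where the evidence is ambiguous.
        return consensus, spread, "DIVERGENT: deeper investigation required"
    return consensus, spread, "Consensus: signal carries weight"

print(cross_validate([5.4, 5.7, 5.6, 5.5, 5.8]))  # tight agreement across models
print(cross_validate([3.1, 6.8, 5.2, 7.4, 4.0]))  # models disagree materially
```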
But the models alone aren’t what produces accurate results. What produces accurate results is what the models are validated against: Procurement Insights’ proprietary archives spanning 2007 to 2025 — nearly two decades of independently sourced implementation evidence, case studies, vendor tracking, and outcome verification that no AI model has in its training data. The archives are the empirical foundation. The multi-model architecture is the validation mechanism. Together, they produce assessments that are both algorithmically rigorous and longitudinally grounded.
A single AI model generating a readiness matrix from its training data is guessing intelligently. Twelve models cross-referencing each other against 18 years of proprietary evidence is something else entirely.
That’s the difference between a framework that was generated in seconds and one that was built over decades. Both produce numbers. Only one of them means anything.
The Hansen Fit Score™ Full Assessment for Market Dojo is the first in a series applying longitudinal evidence overlays to the entproc.com “11 Best Source to Contract Software for Mid-Market (2026)” list. The preliminary Cross-Series Risk Map covering all 11 vendors is available through the Hansen Models™ library.
For commissioned assessments, the Hansen Fit Score™ Annual Intelligence Subscription, or to discuss how the methodology applies to your specific vendor evaluation, get in touch.
-30-