How Human–AI Multimodel Collaboration Eliminates Bias In RAM 2025
Hansen Models™ | Procurement Insights | February 2026
Here’s something no analyst firm will ever tell you: we caught ourselves making a mistake.
During the latest calibration cycle of the Hansen Fit Score™ Vendor Assessment Series, two vendors produced nearly identical Capability-to-Outcome Gaps. On paper, they looked equivalent. Same zone on the chart. Same numerical distance between what the platform can do and what organizations actually experience after implementation.
And then we stopped. Because identical numbers don’t always mean identical risk.
The Question That Expanded the Methodology
One vendor’s gap reflected delivery compression under private equity ownership — growth pressure that squeezes implementation quality. Real. Documented. But the mechanism is correctable. The vendor’s own recent acquisition includes tools that are structurally adjacent to Phase 0 readiness assessment. The correction path may already exist inside the portfolio.
The other vendor’s gap reflected something fundamentally different: a multi-decade pattern in which the primary mechanism for resolving implementation failures is not remediation but litigation. Counter-suits against customers. Contractual clauses designed to limit accountability. A pattern that persists across every platform generation, every corporate phase, and every market segment in the evidence base.
Same zone on the chart. Different resolution paths.
If the Hansen Fit Score™ couldn’t distinguish between these two fundamentally different risk profiles, the methodology was failing practitioners at exactly the moment it needed to serve them most.
What Happened Next
This is where the RAM 2025™ multimodel process did exactly what it’s designed to do.
Five independent AI models had been working the evidence base. When the numerical equivalence was challenged, the models converged on the same finding: gap magnitude alone is insufficient to describe the nature of implementation risk. Two vendors can sit near each other on the chart and carry fundamentally different risk profiles — because the source of the gap is different.
The result was two simultaneous refinements:
First, the second vendor’s gap was recalibrated to reflect the severity, durability, and nature of the litigation pattern across every platform generation and corporate phase in the evidence base.
Second, and more importantly, the Structural Risk Designation (SRD) framework was born.
Introducing the SRD Framework
The Structural Risk Designation is a second-axis classifier. It doesn’t change the composite mathematics. It answers a different question entirely.
The composite score tells you how wide the gap is.
The SRD tells you whether the gap is likely to close.
Three classifications:
SRD-C (Correctable Trajectory): The gap is driven by delivery model strain, but plausible corrective mechanisms exist. The vendor’s business model does not inherently resist correction.
SRD-S (Structural Characteristic): The gap persists across multiple corporate phases or ownership changes. Correction would require fundamental operating model change, not incremental improvement.
SRD-E (Endemic Business Model): The resolution mechanism for implementation failure is itself adversarial — litigation, counter-claims, contractual constraints designed to prevent accountability rather than achieve remediation. Evidence spans decades and platform generations with no corrective inflection point.
The updated Cross-Series Risk Map now reflects both dimensions — composite positioning and Structural Risk Designation — for all vendors assessed to date.
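To make the two-axis idea concrete, here is a minimal Python sketch of how a single risk-map entry might be modeled. It is an illustration only: the class names, field names, and the 0.42 gap values are assumptions chosen for the example, not the published Hansen Fit Score™ mathematics.

```python
from dataclasses import dataclass
from enum import Enum


class SRD(Enum):
    """Structural Risk Designation: whether a gap is likely to close."""
    CORRECTABLE = "SRD-C"  # delivery-model strain; plausible corrective mechanisms exist
    STRUCTURAL = "SRD-S"   # persists across corporate phases; needs operating-model change
    ENDEMIC = "SRD-E"      # adversarial resolution patterns; no corrective inflection point


@dataclass
class VendorAssessment:
    """One point on the Cross-Series Risk Map: composite position plus SRD."""
    vendor: str
    capability_to_outcome_gap: float  # composite axis: how wide the gap is
    srd: SRD                          # second axis: whether the gap is likely to close


# Two vendors can share a zone on the composite axis yet diverge on the second axis.
a = VendorAssessment("Vendor A", capability_to_outcome_gap=0.42, srd=SRD.CORRECTABLE)
b = VendorAssessment("Vendor B", capability_to_outcome_gap=0.42, srd=SRD.ENDEMIC)
assert a.capability_to_outcome_gap == b.capability_to_outcome_gap  # identical magnitude
assert a.srd != b.srd                                              # different risk profile
```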
Why This Matters for Practitioners
If you’re evaluating a vendor carrying an SRD-E designation, the classification tells you something the composite score alone cannot: this gap has survived every platform transition, every leadership era, and every “next generation will be different” narrative. Planning for it to close during your implementation is not supported by the evidence.
If you’re evaluating a vendor carrying an SRD-C designation, the gap is real but the mechanism is different. Delivery compression under new ownership is a correctable condition — and the correction tools may already exist inside the vendor’s portfolio.
If you’re a provider in either ecosystem, the SRD framework tells you where your advisory value lives. SRD-C vendors need Phase 0 readiness assessment to close a correctable gap. SRD-E vendors need Phase 0 plus contractual and governance protections that account for adversarial resolution patterns.
Different gap. Different preparation. Different conversation with your client.
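As a sketch of how that preparation logic could be encoded, reusing the illustrative SRD enum from above: the checklist strings paraphrase this section and are assumptions for the example, not an official Hansen Method™ artifact.

```python
from enum import Enum


class SRD(Enum):
    CORRECTABLE = "SRD-C"
    STRUCTURAL = "SRD-S"
    ENDEMIC = "SRD-E"


def preparation_for(srd: SRD) -> list[str]:
    """Map a Structural Risk Designation to the preparation it implies."""
    base = ["Phase 0 readiness assessment"]
    if srd is SRD.CORRECTABLE:
        return base  # correctable gap: readiness work can close it
    if srd is SRD.STRUCTURAL:
        # SRD-S is not covered in this section; this branch is an assumption.
        return base + ["plan around the gap; do not assume incremental vendor improvement"]
    # SRD-E: adversarial resolution patterns call for protective scaffolding as well.
    return base + [
        "contractual protections for implementation failure",
        "governance protections accounting for adversarial resolution",
    ]


print(preparation_for(SRD.ENDEMIC))
```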
The Self-Correcting Methodology
Here’s what I want practitioners to understand about this moment: catching your own bias and correcting it transparently is not a weakness. It’s the entire point.
The Hansen Fit Score™ is designed as a self-correcting diagnostic instrument, not a static ranking. When the evidence reveals that the methodology needs to be more precise, the methodology evolves. The SRD framework didn’t exist before this assessment cycle. It exists now because the evidence demanded it.
Gartner doesn’t publish when its methodology fails to distinguish between fundamentally different risk profiles. Forrester doesn’t document when its Wave scoring produces equivalent ratings for non-equivalent risks. We do — because exposed methodology is the foundation of credible assessment.
Exposed. Explainable. Repeatable. And now, self-correcting.
The Cross-Series Risk Map
Five vendors have now been assessed in the Hansen Fit Score™ Vendor Assessment Series: Ivalua, SAP Ariba, Coupa, Zycus, and Oracle. The updated Cross-Series Risk Map positions each vendor on both dimensions — composite score and Structural Risk Designation — for the first time. Oro Labs and ZIP are currently in the preliminary assessment stage.
The full vendor-specific assessments, including the assessment that triggered this methodology evolution, are available through Hansen Models™. Individual assessment reports and annual subscriptions providing access to the full library are available upon request.
What’s Next
The SRD framework will be applied retrospectively to all existing assessments in the series and prospectively to every future assessment. This isn’t a vendor-specific tool — it’s a methodological advancement that strengthens every Hansen Fit Score™ evaluation.
If you’re a practitioner and want to understand how the SRD framework applies to a vendor you’re currently evaluating, or if you’d like access to the full assessment that triggered this evolution, contact Hansen Models™ directly.
The distinction between gap magnitude and gap durability might be worth more than the score itself.
Previous assessments in the series: SAP Ariba | Coupa | Zycus
Jon Hansen is the founder of Hansen Models™ and creator of the Hansen Method™. The Hansen Fit Score™ Vendor Assessment Series is 100% independent — no vendor sponsorship, no analyst firm affiliation, no consulting engagement with any vendor assessed.
-30-