The Emperor’s New Clothes and the Due Diligence Nobody Does

Posted on February 19, 2026

READ FIRST: How To Use Our Consolidated Vendor Assessments

This assessment is designed to serve as an independent due diligence reference at any stage of your vendor relationship — before selection, during implementation, or after deployment.

Before selection, it documents the structural risk variables that standard analyst reports do not measure, allowing you to build risk mitigation into your evaluation process. During implementation, it provides an evidence-based diagnostic framework when adoption challenges emerge — shifting the conversation from blame to documented pattern recognition. After deployment, it serves as the independent reference that explains why outcomes diverged from expectations, protecting both the decision and the decision-maker with longitudinal evidence.

This is not a one-time read. It is a reference document that becomes more valuable the further you are into your vendor relationship.

The Latest Consolidated Assessment: JAGGAER



There is a parallel that the procurement industry has already lived through once and apparently forgotten.

In 2007 and 2008, when SaaS was still fighting for legitimacy, I watched procurement leaders go before boards and try to explain why the organization should abandon large perpetual licensing fees — with their heavy upfront capital expenditure and monthly maintenance payments — in favor of a subscription model. Buy the drink instead of buying the bar.

The boards did not want to hear it.

Not because the economics were wrong. The economics were obviously right. A subscription model reduced capital risk, eliminated shelfware, and allowed organizations to scale capability with demand rather than forecast. The math was not the problem.

The problem was that buying a perpetual license felt safe. It was a capital asset on the balance sheet. It had been approved by the same process that had approved every other capital asset. Boards understood it. Auditors understood it. The vendor understood it. Everyone had a role. Everyone was covered.

SaaS threatened that cover.

If you subscribed instead of purchased, you could cancel. If you could cancel, someone had to justify renewal. If someone had to justify renewal, someone had to measure outcomes. And if someone had to measure outcomes, the emperor’s new clothes became visible.

The resistance to SaaS was never about the technology. It was about what the technology made visible: that nobody had been measuring whether the perpetual license was delivering value either.

It took nearly a decade for SaaS to become the default. Not because the model improved — the model was sound from the beginning. It took a decade because enough boards experienced enough failed perpetual-license implementations that the risk of the known exceeded the discomfort of the new.

The Same Pattern Is Happening Now

The Hansen Fit Score™ vendor assessments are meeting the same resistance.

Not because the assessments are wrong. A record number of people have clicked through to the Payhip library. They have read the scores, seen the Capability-to-Outcome Gaps, and studied the cross-series comparisons.

Model 1 in our RAM 2025™ framework diagnosed this precisely: people are consuming the insight as risk reduction, not as a product purchase. They use it quietly. They forward it. They bookmark it. They hesitate to make the knowledge official, because official knowledge requires action.

This is the emperor’s new clothes in reverse. The emperor knows he is naked. He has read the assessment. He can see the scores. He simply cannot afford to be the one who says it out loud.

What Due Diligence Actually Looks Like

Here is what the advisory industry will not tell you: due diligence on a ProcureTech vendor should not begin and end with Gartner’s Magic Quadrant and a vendor demo.

Due diligence means assembling an independent evidence base that documents risk before the contract is signed — so that whatever happens afterward, the decision was informed, documented, and defensible.

The Hansen Fit Score™ assessments are designed to function as that evidence base at three distinct stages:

Before Selection

You are evaluating vendors. The Gartner report says Leader. The Forrester report says Leader. The vendor demo was impressive. Your integrator is confident.

The Hansen Fit Score™ assessment adds the dimension nobody else measures: the gap between what the vendor’s platform can do and what it has demonstrably delivered. It documents the structural risk variables — ownership stability, executive sponsorship continuity, organizational readiness requirements — that determine whether capability translates to outcomes.

Using it before selection does not mean rejecting the vendor. It means understanding what your organization needs to have in place for the implementation to succeed. It means building readiness into the plan rather than discovering its absence after the contract is signed.

How to present this at the board level: “As part of our vendor evaluation, we commissioned an independent risk assessment to complement our advisory firm’s technology evaluation. The assessment identifies specific readiness requirements and structural risk factors. We have incorporated these findings into our implementation plan and risk mitigation framework. The assessment is attached as Appendix C of the business case.”

That is due diligence. That is insurance. Nobody gets fired for being thorough.

During Implementation

You did not engage The Hansen Fit Score™ before selection, and the implementation is already underway. Adoption is slower than projected. The integrator says it is a change management problem. The vendor says it is a configuration problem. Your team says it is a training problem. Everyone is pointing somewhere else.

The Hansen Fit Score™ assessment provides a diagnostic framework that identifies the Capability-to-Outcome Gap, enabling you to get your project back on track. The organizational readiness score identifies the specific dimensions where your organization fell below the minimum threshold. The structural risk factors documented in the assessment — executive sponsorship requirements, service delivery constraints, readiness alignment — explain why the symptoms are appearing.

How to present this to the steering committee: “The adoption challenges we are experiencing are consistent with the structural risk profile documented in The Hansen Fit Score™. Specifically, the assessment identified a readiness gap in [dimension]. We recommend the following targeted interventions to close this gap before proceeding to Phase 2.”

That is not blame. That is evidence-based course correction. The assessment transforms a political conversation into a diagnostic one.

After Deployment

“Across MIT, RAND, IDC, and S&P Global research, between 80% and 95% of enterprise technology pilot programs stall, are abandoned, or fail to reach production — and in every case, the primary barriers cited are organizational, not technological: unclear ownership, absent executive sponsorship, misaligned workflows, and undefined operating models.”

The implementation underperformed. The ROI case does not match reality. The board wants to understand what happened. The CPO needs to explain a $5–15 million investment that produced a 40% adoption rate.

The Hansen Fit Score™ assessment is the only independent document that can analyze the structural conditions leading to that outcome. It provides retrospective evidence that the failure was not random, unforeseeable, or the result of a single bad decision. It was a documented pattern with a quantified probability — a pattern that has been repeating across the industry at a 60–80% failure rate for twenty-five years. The good news: we can help you salvage the investment and get back on track, because most initiative failures have little, if anything, to do with the technology selection.

How to present this to the board: “Our post-implementation review identified structural factors consistent with industry-wide patterns documented in independent research. The failure was not unique to our organization — it reflects a capability-to-outcome gap that affects [X]% of implementations in this vendor’s profile. We have used the independent assessment framework to develop a remediation plan and revised vendor management approach. The lessons learned are documented in the attached brief.”

That is career protection. That is the opposite of blame. That is a CPO who can demonstrate they were informed, documented, and prepared — even when the outcome did not match the plan.

The SaaS Parallel

When SaaS was fighting for board acceptance, the breakthrough was not better technology. It was the realization that the risk of doing nothing — continuing to pay perpetual licensing fees for software that sat on shelves — exceeded the discomfort of trying something new.

The Hansen Fit Score™ assessments face the same adoption curve. The question is not whether the assessments are accurate; decades of case studies and pattern recognition stand behind them. The question is whether enough implementations will fail expensively enough that the risk of not having independent due diligence exceeds the discomfort of purchasing it.

The early Hansen Fit Score™ adopters will follow the same pattern. They will not be the CPOs who are about to select a vendor. They will be the CPOs who have already selected one, watched it underperform, and are now seeking independent evidence that explains what happened and sets them on the right path to success, whatever their current stage: Assessment, Implementation, or Post-Deployment.

Empowering the C-Suite and Boardroom to get it right — rather than simply being right.

Where SaaS Went Off The Rails

Before SaaS existed, Virginia’s eVA program structured something better.

Virginia did not pay Ariba a perpetual licensing fee. Virginia did not pay a subscription. Virginia told Ariba: you absorb the cost of building it. If you are confident you can deliver, we will give you a percentage of the dollar throughput flowing through the platform.

That is outcome-aligned pricing. Ariba only earned revenue when the platform was actually used. Every transaction that did not flow through the system was revenue Ariba did not collect. The vendor’s economic model was structurally aligned with the client’s implementation success — because the vendor only got paid when practitioners adopted the platform and used it.

eVA became the most documented Ariba success story. Not because the technology was different. Because the incentive structure made the vendor a partner in adoption, not just a provider of capability.

Then SaaS arrived — and broke that alignment.

Under subscription pricing, the vendor gets paid whether the platform is adopted or not. The annual fee hits regardless of whether 10% or 90% of practitioners are using it. The vendor’s incentive shifted from driving adoption to closing the subscription. Revenue became decoupled from outcome.

Virginia’s model was buy-the-drink in the truest sense — Ariba only earned when Virginia drank. SaaS became buy-the-bottle — the vendor gets paid whether you open it or not.

That is why the failure rate did not improve when SaaS replaced perpetual licensing. SaaS solved the capital expenditure problem. It solved the shelfware problem. It solved the board approval problem. But it introduced a structural misalignment that no one measured: it removed the vendor’s financial incentive to ensure the implementation actually worked.

The technology delivery model changed. The failure rate stayed at 60–80%. Because the variable that determines success was never the licensing model. It was — and remains — organizational readiness.

-30-

Posted in: Commentary