Expertise Doesn’t Disappear. It Goes Underground.

Posted on March 14, 2026



By Jon W. Hansen | Procurement Insights


Earlier today, in the comments on a post examining what happens to enterprise procurement platforms when capital finds them, a practitioner named Brittany Thomas offered what may be the most precise single-sentence description of the implementation-failure mechanism I have encountered in 18 years of documenting this industry.

She was describing systems that force practitioners to adapt to them rather than the reverse — and what that demand actually costs:

“That’s when we see experience start to get treated as a change management problem to solve.”

I want to stay with that sentence for a moment before moving past it. Because it describes something that the implementation industry has spent decades measuring incorrectly — and the misreading is not accidental.


The Inversion Nobody Names

When a system stops trying to learn from the practitioner and starts trying to correct them, something specific happens. The expertise doesn’t leave. It doesn’t retire. It doesn’t become less relevant. It goes underground.

The practitioner who has spent fifteen years understanding why certain orders move the way they do, why certain suppliers behave the way they do at certain times of year, why certain approvals stall at certain organizational levels — that practitioner does not suddenly forget what they know because a new system has been implemented. They comply visibly and work around the system quietly.

The implementation looks successful in the go-live report. The adoption metrics are green. The change management program is declared effective.

The outcomes tell a different story a year or two later. And by then, everyone has moved on to the next implementation.

What actually happened is this: the system won the compliance argument. The practitioner stopped offering the signals that would have made it work. And the expertise that could have shaped the outcome was reclassified as resistance — a human problem to be managed rather than intelligence to be used.

That reclassification is the failure. Not the technology. Not the practitioner. The decision to stop listening before the system understood what it was being asked to do.


The Question That Changed Everything

In 1998, I was engaged by Canada’s Department of National Defence under SR&ED funding to improve procurement delivery performance. The baseline was 51%. The expectation, unstated but present, was that a technology solution would close the gap.

Before any technology decision was made, I asked a question that the system would have refused: what time of day do orders come in?

That question was not a data preference. It was a diagnostic signal. And the answer it surfaced was not in any database, any historical report, or any system log that a technology solution would have accessed.

Technicians were holding orders until the end of the day because of how performance was measured. The metric rewarded order completion, not order timing. So rational practitioners, responding to the incentive structure they actually operated within, were creating a delivery bottleneck that no one had named — because no one had asked.

Once that condition was understood and the measurement framework was adjusted, next-day delivery moved from 51% to 97.3% within three months. The improvement held for seven years.

Technology accelerated the outcome. But only after the practitioner’s reality was understood.

If the system had said “just automate” — if I had accepted that framing and moved directly to implementation — the automation would have locked the failure in at scale. The orders would still have arrived at the end of the day. The delivery performance would have remained at 51%. The system would have been fully implemented and completely ineffective.

The question was the intervention.


Caption: “The Question Was the Intervention: DND Delivery Performance 1998–2005. Technology Capability · External Partner Throughput Layer · Internal Adoption (Phase 0™ Applied) · Counterfactual (‘Just Automate’).”


The Fear Is Not New. The Governance Gap Is.

When one of the MCI/SHL procurement buyers looked at what the throughput architecture had produced and said “with what you have created, a monkey could do my job,” he was not making a joke. He was articulating something that every technology era produces and that every generation of practitioners experiences as if for the first time: the moment when a system absorbs enough complexity that the human role it was designed to support appears to shrink.

It is worth being precise about what actually happened in that moment. The buyer’s expertise had not disappeared. It had been embedded. The diagnostic work, the process redesign, the throughput architecture — these translated deep practitioner knowledge into an operating model that no longer required that knowledge to be re-applied manually at every transaction. The simplification was the product of the expertise, not its replacement. A workload that had required twenty-three full-time employees came to require three — not because the work was eliminated, but because the intelligence required to do it had been structurally encoded elsewhere.

That distinction — between expertise replaced and expertise embedded — is the one that every technology transition obscures, and the one that governance frameworks exist to preserve. It was obscured in the mainframe era. It was obscured when ERP systems arrived. It was obscured again when e-procurement promised to automate sourcing. In each cycle, the fear of displacement was real, the actual outcome was more nuanced, and the difference between good outcomes and bad ones traced back to whether the human knowledge driving the system had been understood before the system was built — or ignored in favor of the faster path to deployment.

With AI the dynamic is identical in structure and categorically different in scale. The accessibility of the technology means the gap between “just automate” and a properly governed implementation is now traversable in days rather than months. An organization can deploy an AI-augmented procurement process before anyone has asked what time of day the orders come in — and the failure, when it arrives, will be faster, deeper, and harder to attribute than anything a 1998 ERP implementation could produce. The governance question is not whether AI will change procurement roles. It will. The question is whether the expertise that currently lives inside those roles will be embedded into the new architecture or lost in the transition.

The buyer at MCI/SHL was not replaced by the system. He was freed by it — because someone had asked the right questions before the system was built. That sequence is not automatic. It has never been automatic. In the AI era, the cost of skipping it is simply higher than it has ever been before.


Why This Pattern Persists

The question I am most often asked after presenting that case is some variation of: why doesn’t this happen more often? If one question produced that outcome, why isn’t asking that question standard practice?

The answer is structural rather than individual. The implementation industry is organized around deployment, not diagnosis. Consulting firms are compensated for go-lives, not for the questions that precede them. Vendors are compensated for licenses and configurations, not for the organizational understanding that determines whether the configuration will work. The measurement frameworks that govern implementation success are built around adoption metrics and project timelines — outputs that are visible and defensible — rather than practitioner outcomes, which are slower to surface and harder to attribute.

In that environment, the question about what time of day orders come in looks like delay. It looks like scope creep. It looks like the kind of thing an experienced change manager should handle — which is to say, it gets reclassified as resistance and managed accordingly.

The expertise goes underground not because practitioners give up. It goes underground because the system has been designed to treat it as an obstacle.


What Phase 0™ Exists To Do

The diagnostic work that preceded the DND outcome is not a methodology I invented in 1998 and retired. It is the foundation of every framework that has followed — the Hansen Method™, the Hansen Fit Score™, and most directly, Phase 0™.

Phase 0™ is the formal answer to the question Brittany Thomas’s comment asks implicitly: what would it look like if the practitioner’s reality were understood before the technology decision was made, rather than after?

It is not a change management program. It is not a vendor evaluation. It is the structured process of surfacing what a system cannot see — the behavioral patterns, the measurement incentives, the organizational conditions that determine whether any technology, regardless of its capability score, will produce outcomes or produce compliance.

The difference between those two things is the difference between 51% and 97.3%.

It is also the difference between a practitioner whose expertise shapes the outcome and a practitioner whose expertise goes underground because the system stopped listening before it understood what it was asking to replace.


The Larger Question

Brittany Thomas’s comment arrived on a post examining whether ORO Labs, which was founded explicitly to fix the systems-of-record problem at SAP Ariba, will preserve that founding intention as $160 million in new capital and two new board members arrive with return expectations attached.

The question her comment raises is connected to that arc — and to the Coupa arc that precedes it.

The systems that produce cookie-cutter implementations are not built by people who don’t care about practitioners. They are built by people who did care, whose platforms reflected that care in their early years, and whose capital structures eventually made a different set of priorities more urgent.

The expertise suppression Brittany describes is not a design intention. It is an emergent property of systems that have been optimized for scale, for margin, for adoption metrics — and have gradually stopped asking the questions that would tell them whether the practitioner’s reality is being understood.

The question about what time of day orders come in is always available. The system has to be willing to hear the answer.


This post continues a series examining the structural forces shaping ProcureTech implementation outcomes. The previous post — “Noble Intentions, Capital Realities: The Oracle-Coupa and SAP Ariba-ORO Story” — is available at procureinsights.com. The Hansen Fit Score™ and Phase 0™ organizational readiness diagnostic are proprietary frameworks of Hansen Models™. https://hansenprocurement.com/

Jon W. Hansen is the founder of Hansen Models™ and creator of the Hansen Fit Score™, the Hansen Method™, and the RAM 2025™ multimodel validation framework. Procurement Insights has been publishing independently since 2007 — no vendor sponsorships, no referral arrangements.

-30-

Posted in: Commentary