ISM Is Right — But Real Readiness Doesn’t Start Where They Think It Does

Posted on February 26, 2026



What a 1998 government-funded procurement platform tells us about AI readiness in 2026.

February 26, 2026 — Procurement Insights

Today, the Institute for Supply Management published an article stating what many supply chain leaders already feel: “The challenge is not access to digital tools. It is workforce readiness.”

They’re right. And they’re still one layer short of where the problem actually starts. Not because the analysis is wrong — but because it begins after the most consequential decisions have already been made.

ISM’s argument is that organizations have invested heavily in AI-enabled analytics, automation platforms, and integrated systems — but adoption stalls because workforce skills haven’t kept pace. The solution, they argue, is to close the digital capability gap: upskill teams, build digital fluency, embed technology into performance expectations.

This is correct as far as it goes. But it accepts the technology as the given and asks humans to catch up. The tools have been selected. The platforms have been deployed. Now the workforce needs to learn how to use them.

That’s readiness defined by what technology dictates.

Real readiness — the kind that actually moves outcomes — starts with humans dictating technology. Not “can your team use this platform?” but “did your organization determine what it actually needs before selecting the platform in the first place?”

What Real Readiness Looked Like — in 1998

In the late 1990s, I was brought in to work on the Department of National Defence’s MRO procurement platform supporting Canada’s IT infrastructure. The contract required 90% next-day delivery. The incumbent was delivering 51%.

Their request was simple: “Automate our system.”

I said: “Hold on. What time of the day do orders come in?”

They looked at me like I was asking the wrong question. But the answer — most orders arrived at 4:00 PM — unlocked the entire failure chain.

The service department had technicians incentivized to complete as many service calls per day as possible. Policy required ordering parts after each call. But because the system was cumbersome, technicians would hold all their orders until end of day — a behavior known as sandbagging. They hit their service call targets. But the parts orders arrived late.

Late orders triggered two problems simultaneously. First, most parts were sourced from US-based small and medium suppliers and had to clear customs — a process that broke down when shipments arrived after hours. Second, the parts being ordered were what my research identified as “dynamic flux” commodities — products whose price could move from $100 at 9:00 AM to $1,000 by 4:00 PM.

The technicians didn’t see the connection between their sandbagging behavior, the delivery failures, and the cost escalation. Neither did the system. The strands looked unrelated — service call performance, order timing, customs clearance, commodity pricing. But they shared attributes that collectively drove the outcome.

That insight came from a theory I developed called Strand Commonality — funded by the Government of Canada’s Scientific Research and Experimental Development (SR&ED) program — which held that seemingly disparate strands of data actually have related attributes that collectively determine results. You cannot see the outcome by looking at any single strand. You have to see the connections between them.
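The core of that idea can be sketched in code. The example below is purely illustrative, not the actual Strand Commonality implementation: it invents toy data for four "strands" (service-call behavior, order timing, customs cutoffs, commodity pricing) and shows how joining them on a shared attribute (the time orders are placed) surfaces a failure pattern that no single strand reveals on its own.

```python
from datetime import time

# Four "strands" that look unrelated when examined in isolation.
# All values are illustrative, not real DND data.
service_calls = [{"tech": "T1", "calls_closed": 9, "orders_placed_at": time(16, 0)}]
customs = {"cutoff": time(15, 0)}                # shipments after this miss same-day clearance
pricing = {time(9, 0): 100, time(16, 0): 1000}   # dynamic-flux commodity price by hour

def diagnose(call):
    """Connect the strands through their shared attribute: order time."""
    t = call["orders_placed_at"]
    findings = []
    if t > customs["cutoff"]:
        findings.append("order misses customs cutoff -> next-day delivery fails")
    if pricing.get(t, 0) > pricing[time(9, 0)]:
        findings.append("order hits peak dynamic-flux pricing -> cost escalates")
    return findings

for call in service_calls:
    print(call["tech"], diagnose(call))
```

Looked at one strand at a time, a 4:00 PM order is just a timestamp; joined across strands, it is simultaneously a customs failure and a price escalation.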

The System That Followed the Diagnosis

We didn’t automate what was broken. We built a system around how the agents in the process — technicians, buyers, suppliers, couriers, customs — actually behaved.

The platform used self-learning algorithms that weighted supplier performance across delivery, quality, geographic proximity, and current pricing. Buyers could manually adjust the weighting — if next-day delivery was critical, the system ranked by delivery performance; if price was the priority, the algorithms recalculated and reranked automatically.
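A minimal sketch of that reranking behavior, assuming a simple weighted-sum scoring model (the original platform's actual algorithms and attribute scales are not public; supplier names and numbers here are invented):

```python
from dataclasses import dataclass

@dataclass
class Supplier:
    name: str
    delivery: float   # on-time delivery rate, 0..1
    quality: float    # acceptance rate, 0..1
    proximity: float  # normalized geographic closeness, 0..1
    price: float      # normalized price competitiveness, 0..1 (higher = cheaper)

def rank(suppliers, weights):
    """Score each supplier as a weighted sum of its attributes and
    return them best-first. Buyers adjust `weights` to shift priority."""
    def score(s):
        return (weights["delivery"] * s.delivery
                + weights["quality"] * s.quality
                + weights["proximity"] * s.proximity
                + weights["price"] * s.price)
    return sorted(suppliers, key=score, reverse=True)

suppliers = [
    Supplier("Acme", delivery=0.97, quality=0.92, proximity=0.40, price=0.55),
    Supplier("Borealis", delivery=0.78, quality=0.95, proximity=0.90, price=0.90),
]

# Next-day delivery is critical: weight delivery heavily.
delivery_first = rank(suppliers, {"delivery": 0.6, "quality": 0.2,
                                  "proximity": 0.1, "price": 0.1})
# Price is the priority: the same pool reranks automatically.
price_first = rank(suppliers, {"delivery": 0.1, "quality": 0.2,
                               "proximity": 0.1, "price": 0.6})
```

Shifting the weights from delivery to price flips the ranking without touching the supplier data, which is the essence of letting the buyer's priority, not the tool, dictate the result.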

We integrated directly with UPS — as soon as a purchase order was generated, the system created the waybill number and dispatched the courier automatically. We worked with Canada Customs to pre-format clearance documentation — so the supplier received the PO, the shipping label, and the properly completed customs form in a single package. They didn’t have to call a courier or navigate customs paperwork.
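That end-to-end handoff can be sketched as an event handler: the moment a PO exists, the waybill and customs declaration are generated alongside it. Everything below is hypothetical — the function name, the waybill format, and the form fields are placeholders, not the UPS or Canada Customs interfaces the platform actually used.

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class OrderPackage:
    """The single bundle the supplier receives: PO, label, customs form."""
    po_number: str
    waybill: str = ""
    customs_form: dict = field(default_factory=dict)

def on_po_generated(po_number: str, goods: str, value_usd: float) -> OrderPackage:
    """Hypothetical event handler fired when a purchase order is created:
    build the waybill and pre-formatted customs declaration immediately,
    so the supplier never has to call a courier or fill out paperwork."""
    pkg = OrderPackage(po_number=po_number)
    pkg.waybill = "1Z" + uuid.uuid4().hex[:12].upper()   # placeholder tracking format
    pkg.customs_form = {
        "goods": goods,
        "declared_value_usd": value_usd,
        "clearance": "pre-formatted",
    }
    return pkg

pkg = on_po_generated("PO-1042", "network interface cards", 2500.0)
```

The design point is sequencing: shipping and clearance artifacts are created as a side effect of the PO, not as separate downstream tasks that can be delayed or skipped.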

Within three months, delivery performance went from 51% to 97.3%.

Over seven consecutive years, cost of goods decreased 23% — in line with the dynamic flux commodity behavior the system was designed to capture. Across the three buying groups, headcount compressed from 23 FTEs to 3. New York City’s Transit Authority then engaged us to adapt the same platform for same-day delivery across five boroughs, using time-zone polling and strategic stocking locations.

The company was sold in 2001. The technology that produced those results — agent-based modeling, self-learning algorithms, behavioral weighting, multi-agent configuration — is what the industry now calls AI, agentic AI, and machine learning.

We were doing it in 1998. The SR&ED program funded the research. The Department of National Defence was the proving ground. And the methodology that produced 97.3% delivery accuracy is the same methodology — evolved, refined, and validated across 27 years — that now produces the Hansen Fit Score™ and RAM 2025™.

Why This Matters Now

ISM is telling supply chain leaders to close the digital workforce gap. That’s important work. Teams do need digital fluency. Skills do need to keep pace with tools.

But here’s what the DND story proves: the 51% delivery rate wasn’t a technology problem or a workforce skills problem. It was a behavioral problem that no amount of automation or upskilling would have solved. If we had automated the existing system — as they originally requested — we would have automated the sandbagging, the late orders, the customs failures, and the cost escalation. Faster. More efficiently. With better dashboards showing us the failure in real time.

The first question wasn’t “what technology do you need?” or “does your workforce have the digital skills to use it?” The first question was “what time do orders come in?” — a human question about behavior that no platform, no dashboard, and no AI-enabled analytics tool would have surfaced on its own.

The same behavioral misalignment shows up today in AI pilots that optimize model accuracy while ignoring who is accountable for acting on the output.

That’s the difference between reactive readiness and diagnostic readiness. ISM is measuring the gap between the workforce and the technology. Phase 0™ measures whether the organization should be walking in that direction at all.

As Dean Smith, Director of Global Supply Chain, Happy Meal Premiums at McDonald’s, noted after a 72-hour, three-platform public stress test of the Hansen Fit Score™ methodology:

“The DND example really lands — that’s exactly the kind of concrete evidence that makes the argument tangible.”

The Video

Not long ago, I sat down to tell this story in full — the DND contract, the strand commonality theory, the SR&ED research, the system that followed, and why it matters more today than it did in 1998.

Technology has finally caught up to what we were building 28 years ago. But the technology isn’t the lesson. The diagnostic sequence is. The question is whether organizations will catch up to the readiness methodology that made it work.


Jon W. Hansen is the founder of Hansen Models™ and creator of the Hansen Method™. The SR&ED-funded research referenced in this article produced the foundational theory — Strand Commonality — that underlies the Hansen Fit Score™, Phase 0™ readiness assessment, and RAM 2025™ multimodel architecture.

-30-

Posted in: Commentary