Two Cases. Two Years Apart. Different Situations. One Methodology. The Same Result.
Posted on April 17, 2026
This is implementation physics at its finest: correct the model first, then scale it.
The two graphics below document something rarely demonstrated in procurement and supply chain research: the same diagnostic methodology producing sustained, audited results across two entirely different sectors, two different countries, two different decades, and two different types of hidden problem. The first is the 1998 Department of National Defence engagement, an SR&ED-funded Government of Canada project, where a single behavioral strand connecting five agents was invisible to every domain owner and absent from every metric until one diagnostic question surfaced it. Delivery performance moved from 51% to 97.3% in 90 days and held for seven years. The second is the Commonwealth of Virginia’s eVA initiative, where the hidden problem was a structural strand of agency-by-agency fragmentation that delivered less value to every stakeholder who believed they were protecting their own interests. Process understanding and stakeholder alignment corrected it, and the correction generated $338 million in savings over 24 years.
In both cases, the sequence was identical. Strand Commonality™ first — identifying the cross-silo connections the model had never been designed to observe. Strand Stability™ second — validating that the corrections held under real operational conditions across all agents. Technology third — introduced to sustain a model that reality had already confirmed, not to fix one that was still incomplete. In both cases the system self-corrected once the causal chain became visible. In both cases no agent was told to change. The incentive logic changed when the strand was mapped. And in both cases, the technology came last.
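The sequence above is described in prose only. To make the ordering concrete, here is a minimal Python sketch of the same logic. Everything in it is a hypothetical illustration: the silo names, the dependency data, the 0.95 stability threshold, and the functions strand_commonality and strand_stability are invented for this example and are not drawn from the actual Strand Commonality™ or Strand Stability™ implementations.

```python
# Hypothetical sketch of the three-step sequence described above.
# All structures, names, and thresholds are illustrative assumptions,
# not the actual Hansen Models(TM) implementation.

# Step 0: each silo's model observes only its own factors.
silo_models = {
    "contracting": {"cycle_time", "price_variance"},
    "warehousing": {"stock_turns", "pick_accuracy"},
    "transport":   {"on_time_dispatch"},
}

# Cross-silo dependencies surfaced by diagnostic questioning,
# not by any silo's own metrics: (factor, silos it touches).
observed_dependencies = [
    ("release_timing", {"contracting", "warehousing", "transport"}),
    ("pack_size", {"warehousing", "transport"}),
]

def strand_commonality(silos, dependencies):
    """Step 1: factors that span silos yet appear in no silo's model."""
    return [
        (factor, affected)
        for factor, affected in dependencies
        if len(affected) > 1
        and all(factor not in silos[s] for s in affected)
    ]

def strand_stability(weekly_delivery_rates, threshold=0.95):
    """Step 2: confirm the correction holds under live conditions."""
    return all(rate >= threshold for rate in weekly_delivery_rates)

strands = strand_commonality(silo_models, observed_dependencies)
print("Hidden strands:", strands)

# Illustrative post-correction delivery performance, week by week.
weeks = [0.96, 0.97, 0.97, 0.98, 0.97, 0.98, 0.97, 0.973]
if strands and strand_stability(weeks):
    # Step 3: only now introduce technology to sustain the model.
    print("Model confirmed under live conditions; scale with technology.")
```

The point the sketch makes is the ordering: the strand is found first, its correction is validated under live conditions second, and technology enters only after the model has been confirmed.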
One case used RAM 1998™, a bespoke SR&ED-funded agent-based system built specifically around the behavioral and process reality already validated at DND. The other used Ariba, a commercial off-the-shelf platform whose deployments of the same software had already failed at the University of Washington and VF Corporation. Two entirely different technologies. The same result. Because the technology was never the variable. The sequence was.
This is not a theoretical framework. It is a documented, repeatable methodology with a 27-year independent field record. It is also the most direct answer available to the question that AI governance, ISO standards, and architecture-first deployment models have not yet resolved: how does any system — human or artificial — know to ask the question that sits outside the model it was built on? It doesn’t. Not without the diagnostic discipline that challenges the model boundary before the commitment is made. That is what these two cases demonstrate. That is what Phase 0™ is designed to provide.
Jon W. Hansen is the Founder of Hansen Models™ and Procurement Insights — 27 years, 3,300+ documents, zero vendor sponsorships. For more information on Phase 0™ Diagnostics, visit hansenprocurement.com.
-30-