Independent 2025 Validation of 1998 Strand Commonality Theory: Practical Application for 2026
By Jon Hansen | Procurement Insights | January 2026
To the C-Suite and Boardroom
In November 2025, a peer-reviewed paper was published in the journal Logistics: “Agent-Based Simulation Modeling of Multimodal Transport Flows in Transportation System of Kazakhstan” (Khussanov et al.).
The researchers used agent-based modeling to identify how disparate factors — infrastructure bottlenecks, stochastic delays, uneven cargo flows, border utilization — converge to produce systemic failures in Kazakhstan’s Middle Corridor logistics network.
I built the same thing in 1998.
Not as a simulation. As a production system for the Department of National Defence that moved real parts to real technicians — and achieved 97.3% next-day delivery accuracy.
The Kazakhstan paper isn’t just academic validation of agent-based modeling. It’s proof that the Strand Commonality theory I developed 27 years ago works — and that your organization’s transformation failures aren’t random. They’re the predictable result of convergent patterns that can be observed, measured, and interrupted.
If you’re approving AI initiatives, digital transformations, or technology investments in 2026, this matters.
The Birth of Strand Commonality
In 1998, I wasn’t trying to build a theory. I was trying to fix a broken contract.
SHL (part of MCI) was managing DND’s MRO procurement platform supporting its IT infrastructure. The contract required 90% next-day delivery. They were achieving 51%. They came to me and said: “Automate our system.”
I didn’t automate. I asked questions.
The first question: “What time of day do orders come in?”
They looked at me like I was crazy. But they answered: “Most orders come in at 4:00 PM.”
That single data point unlocked everything.
What emerged from tracing the implications was Strand Commonality — the recognition that seemingly disparate data strands have related attributes that collectively drive outcomes. The government of Canada’s Scientific Research and Experimental Development (SR&ED) program funded my research into this theory.
But the theory came from practice — from tracing why a 51% delivery rate existed and what it would take to change it.
The answer wasn’t better technology. It was understanding all the agents.
The Agents I Identified
Agent 1: The Service Technicians
Service technicians were incentivized to complete as many service calls as possible each day. Policy required them to order parts after each call. But the ordering system was cumbersome, so they “sandbagged” — held all orders until end of day to hit their call volume targets first.
Result: Orders flooded in at 4:00 PM instead of throughout the day.
Agent 2: Dynamic Flux Pricing
I had identified two commodity pricing profiles: Historic Flatline (stable pricing, suited to centrally negotiated contracts) and Dynamic Flux (volatile pricing that changes throughout the day).
IT parts were Dynamic Flux. A component that cost $100 at 9:00 AM cost $1,000 by 4:00 PM.
Result: Late orders meant higher prices.
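That Flatline/Flux split can be made operational with a simple volatility test. A minimal sketch, assuming classification by coefficient of variation of intraday quotes; the 0.15 cutoff is an illustrative assumption, not a figure from the original system:

```python
from statistics import mean, stdev

def classify_commodity(intraday_prices, cv_cutoff=0.15):
    """Label pricing behavior by intraday volatility.

    cv_cutoff is an assumed threshold: a coefficient of variation
    (stdev / mean) above it marks the commodity as Dynamic Flux.
    """
    cv = stdev(intraday_prices) / mean(intraday_prices)
    return "Dynamic Flux" if cv > cv_cutoff else "Historic Flatline"

print(classify_commodity([10.00, 10.05, 9.98, 10.02]))  # Historic Flatline
print(classify_commodity([100, 120, 250, 600, 1000]))   # Dynamic Flux
```

Flatline items suit centrally negotiated contracts; Flux items need time-of-day-aware buying, which is exactly why a 4:00 PM order spike was so expensive.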
Agent 3: Small/Medium Suppliers
Most suppliers were small and medium-sized enterprises with little sophistication in customs documentation or shipping logistics. They couldn’t navigate the complexity on their own.
Result: Friction at every handoff point.
Agent 4: Customs Clearance
Parts were sourced from the US and had to clear Canadian customs. Late-day orders missed the clearance window.
Result: Next-day delivery became impossible.
Agent 5: Courier Dispatch
No integration between purchase orders and courier systems. Manual coordination created delays.
Result: Even when orders were ready, logistics failed.
The Convergence
None of these factors alone caused the 51% failure rate. They converged.
- Technician incentives → 4:00 PM orders
- 4:00 PM orders → Dynamic Flux price spikes
- 4:00 PM orders → Missed customs windows
- SME suppliers → Documentation friction
- No courier integration → Dispatch delays
This is the Mayday Principle I’ve written about: A crash is never one thing. It’s a convergence of factors, each survivable alone, fatal in combination.
The 51% delivery rate wasn’t a technology problem. It was a strand convergence that no amount of automation would fix — because automation would have optimized the wrong layer.
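The arithmetic behind that convergence is worth making explicit. A minimal sketch, with hypothetical per-strand success rates chosen only to show the shape of the problem (each stage looks survivable on its own, yet together they land near the observed 51%):

```python
from math import prod

# Hypothetical per-strand success rates: each survivable in isolation
strands = {
    "order placed early enough":  0.85,
    "supplier paperwork correct": 0.90,
    "customs window met":         0.85,
    "courier dispatched on time": 0.90,
    "part available at quote":    0.88,
}

overall = prod(strands.values())
print(f"worst single strand: 85% | combined next-day rate: {overall:.0%}")  # ~51%
```

Even perfecting the weakest strand in isolation only lifts the combined rate to about 61%; reaching the 90% target required moving several strands at once, which is why optimizing any one layer was never going to work.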
The Solution: Agent-Based, Not Equation-Based
In 1998, the default approach was equation-based modeling — static variables, predictive averages, optimized processes. Build the system, automate the workflow, assume agents follow rules.
I chose agent-based reasoning instead.
The system I built:
- Self-learning algorithms that weighted supplier performance across delivery, quality, price, and geographic location — continuously recalculating based on actual outcomes
- Buyer flexibility — if speed was critical, the system ranked by delivery performance; if cost was critical, buyers could manually adjust weightings and the algorithms would automatically re-rank suppliers (a sketch of this re-ranking follows the list)
- Integrated courier dispatch — built a bridge to UPS so purchase orders automatically generated waybills and dispatched pickups
- Pre-formatted customs documentation — the system generated properly completed customs clearance forms, eliminating SME friction
- Expanded supplier engagement — at a time when everyone preached vendor rationalization, we expanded the pool and let the algorithms surface the best performers
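A minimal sketch of the re-ranking logic from the first two bullets, assuming performance scores normalized to a 0-to-1 scale; the supplier names, attributes, and weight presets are illustrative, not the 1998 system’s actual parameters:

```python
def rank(suppliers, weights):
    """Score each supplier by a weighted sum of attributes; any weight change re-ranks."""
    total = sum(weights.values())
    norm = {k: w / total for k, w in weights.items()}
    return sorted(suppliers, key=lambda s: sum(norm[k] * s[k] for k in norm), reverse=True)

suppliers = [
    {"name": "Acme Parts",  "delivery": 0.97, "quality": 0.90, "price": 0.60, "proximity": 0.80},
    {"name": "Borealis IT", "delivery": 0.80, "quality": 0.95, "price": 0.92, "proximity": 0.55},
    {"name": "Cartwright",  "delivery": 0.88, "quality": 0.85, "price": 0.75, "proximity": 0.95},
]

speed_first = {"delivery": 0.5, "quality": 0.2, "price": 0.2, "proximity": 0.1}
cost_first  = {"delivery": 0.2, "quality": 0.2, "price": 0.5, "proximity": 0.1}

print([s["name"] for s in rank(suppliers, speed_first)])  # Acme Parts ranks first
print([s["name"] for s in rank(suppliers, cost_first)])   # Borealis IT ranks first
```

The self-learning element would come from recomputing each supplier’s scores from actual outcomes after every order, so rankings drift toward demonstrated performance rather than negotiated promises.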
The results:
- 51% → 97.3% next-day delivery accuracy
- 23% cost reduction sustained over seven years
- Collective buying group compressed from 23 buyers to 3
- System scaled to New York City Transit Authority with time-zone polling across five boroughs
The Inverse Proof: The 21% Premium Penalty
Shortly after the DND success, a major PC retailer called. They had executed a vendor rationalization strategy — compressed hundreds of suppliers down to 100. The logic: leverage volume purchasing, reduce administrative burden.
Two years in, they asked me to assess their competitiveness. Their internal metrics showed savings in Year 1 and Year 2, but the gains were diminishing.
My assessment: They were paying 21% over market price.
By compressing their vendor base, they had lost sight of the market. Their “source of truth” became a closed loop of 100 pre-approved suppliers. They had no antenna for what was happening in the broader market.
The savings from Year 1 dissipated as they lost visibility into competitive alternatives. Single-source contracts locked them into relationships that had no market pressure to improve.
Two approaches. Two outcomes.
The DND system worked because it treated the market as a dynamic system of agents. The vendor rationalization failed because it assumed the market could be controlled through compression.
This is Strand Commonality in action: The same theory that explains why convergent factors produce failure also explains why observing agent behavior produces success.
The Kazakhstan Paper: 27 Years of Validation
Now consider what researchers published in November 2025:
“Agent-Based Simulation Modeling of Multimodal Transport Flows in Transportation System of Kazakhstan”
The paper uses AnyLogic to simulate autonomous agents:
- Orders/cargo
- Trucks
- Trains
- Factories
- Distributors
- Border checkpoints
- Container terminals
These agents compete for resources, form queues, switch transport modes, and generate emergent behaviors — congestion at borders, delays at ports, bottlenecks that appear when multiple factors converge.
Their findings:
- Baseline shows 95% border utilization and 9.5-day export times
- 20% capacity increase cuts delays dramatically
- Unchecked traffic growth pushes export times to 13.8 days and border utilization to 98%
- Bottlenecks aren’t from one factor — they converge from multiple strands
This is exactly the theory I developed in 1998. This is exactly what Strand Commonality predicts.
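The shape of those findings, delay exploding as utilization approaches 100%, falls out of basic queueing behavior. A minimal sketch, assuming a single-server border checkpoint with exponential arrivals and service times; this is a toy stand-in, not a reproduction of the paper’s AnyLogic model:

```python
import random

random.seed(42)

def border_queue(arrival_rate, service_rate=1.0, horizon=200_000):
    """Single-server FIFO checkpoint; returns (utilization, mean wait in service times)."""
    t = random.expovariate(arrival_rate)  # first arrival
    server_free_at, busy_time, waits = 0.0, 0.0, []
    while t < horizon:
        start = max(t, server_free_at)              # queue if the server is busy
        service = random.expovariate(service_rate)
        waits.append(start - t)
        busy_time += service
        server_free_at = start + service
        t += random.expovariate(arrival_rate)       # next arrival
    return busy_time / server_free_at, sum(waits) / len(waits)

for load in (0.80, 0.95, 0.98):  # offered load relative to checkpoint capacity
    util, wait = border_queue(load)
    print(f"load {load:.2f}: utilization ~{util:.2f}, mean wait ~{wait:.0f}x service time")
```

In this toy model the expected wait scales like utilization divided by (1 minus utilization): roughly 19 service times at 95% utilization and 49 at 98%. A three-point rise in utilization more than doubles the delay, the same nonlinearity behind the paper’s move from 9.5 to 13.8 days.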
The Parallel That Proves the Theory
The technology exists in both cases. The constraint is the same in both cases: organizational readiness to absorb it.
The Kazakhstan researchers can model the optimal solution. But their paper explicitly identifies “infrastructure bottlenecks, uneven cargo flows, and limited digital tools” as barriers.
Those aren’t technology problems. Those are readiness problems — the same readiness problems that existed in 1998, that exist in 2025, and that will exist in 2026 unless someone measures them before deployment.
Why This Matters for 2026
Gartner’s Top 10 Strategic Technology Trends for 2026 include “Multiagent Systems” as a key capability:
“Swarms of specialized agents — one for forecasting, another for negotiation — autonomously dividing labor on multifaceted challenges like supply chain disruptions.”
I built that in 1998. It achieved 97.3% accuracy. It sustained 23% cost reduction for seven years. It scaled to New York City.
Why is it still “emerging” 27 years later?
Because the technology was never the constraint. Organizational readiness was.
- In 1998, I succeeded because I understood all the agents — human and system — and designed around their actual behaviors
- In 2025, Kazakhstan researchers are documenting the same convergent bottlenecks I identified, in a different domain, at national scale
- In 2026, Gartner is predicting multiagent systems will “boost productivity by 40%”
The prediction isn’t wrong. It’s incomplete.
Multiagent systems will boost productivity — for organizations ready to absorb them. For organizations with governance structures that can sustain them. For organizations that have assessed all the agents, not just the technological ones.
For everyone else? The same 70-80% failure rate that’s persisted for 30 years will persist into 2027 and beyond.
The Strand Commonality Proof
The Kazakhstan paper validates what I developed in 1998 and what 27 years of evidence has confirmed:
1. Disparate strands have related attributes.
In DND: technician incentives, order timing, price volatility, customs clearance, supplier capability — all related, all converging.
In Kazakhstan: border capacity, transit growth, infrastructure limits, stochastic delays — all related, all converging.
2. Attributes collectively drive outcomes.
Neither system failed because of one factor. Both systems experienced emergent behaviors from convergent strands.
3. Agent-based observation detects convergence before the crash.
Equation-based models miss it because they assume agents behave as modeled. Agent-based observation surfaces the actual behaviors — the “sandbagging,” the bottlenecks, the friction points that compound into failure.
4. The constraint is never just the technology.
The technology existed in 1998. The technology exists in 2025. The technology will exist in 2026. The constraint is organizational readiness to absorb it.
What This Means for Board Decisions
If you’re approving technology investments in 2026, here’s what the Kazakhstan paper proves:
1. Agent-based modeling works.
It’s not theoretical. It’s peer-reviewed. It surfaces convergent patterns that equation-based approaches miss.
2. The patterns I identified in 1998 are still active.
Twenty-seven years of evidence. Same strands. Same convergence dynamics. Same constraint.
3. Technology predictions without readiness assessment are incomplete.
Gartner can predict multiagent systems will transform operations. The Kazakhstan researchers can model optimal logistics flows. Neither addresses whether your organization can absorb the change.
4. Phase 0 readiness assessment applies Strand Commonality to your organization.
Before you deploy, measure:
- Governance capability — Can decisions be made and sustained?
- Collaboration maturity — Can agents (human and system) actually work together?
- Data discipline — Is information reliable enough to support the system?
- Change absorption — Can the organization sustain new behaviors?
- Pattern recognition — Are previous failure patterns still active?
If these score below threshold, no technology will succeed — regardless of how advanced it is.
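To make the threshold idea concrete, here is a hypothetical sketch of readiness gating. The five dimensions come from the list above; the 0–100 scale, the threshold of 70, and the sample scores are my assumptions for illustration, not the actual Hansen Fit Score parameters:

```python
THRESHOLD = 70  # hypothetical minimum score per dimension

def phase0_gate(scores):
    """Pass only if every readiness dimension clears the threshold."""
    gaps = [dim for dim, score in scores.items() if score < THRESHOLD]
    return not gaps, gaps

assessment = {
    "governance capability":  82,
    "collaboration maturity": 64,  # below threshold
    "data discipline":        75,
    "change absorption":      58,  # below threshold
    "pattern recognition":    71,
}

ready, gaps = phase0_gate(assessment)
print("deploy" if ready else f"remediate first: {', '.join(gaps)}")
```

The gate is deliberately conjunctive: like the delivery strands, one weak dimension is enough to sink a deployment, so averaging strong scores over a weak one would hide exactly the convergence the assessment exists to catch.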
Mine Is Not a Theory or Simulation
The Kazakhstan paper is a theory — a simulation, a model, a peer-reviewed exploration of what might work at national scale.
Mine is not a theory or simulation.
It’s a production system that achieved 97.3% delivery accuracy, 23% cost reduction sustained over seven years, and scaled from the Department of National Defence to New York City Transit Authority.
The question isn’t whether agent-based observation works. I proved it works in 1998.
The question is whether your organization is ready to absorb it in 2026.
That’s what Phase 0 measures. That’s what the Hansen Fit Score quantifies. That’s the difference between a theory that models success and a methodology that has delivered it.
The DND system achieved 97.3% accuracy in 1998 — not by building better algorithms, but by understanding all the agents and designing around their actual behaviors. Twenty-seven years later, peer-reviewed research validates the same approach. Phase 0 readiness assessment applies this methodology to your organization — because technology that ignores agent behavior will fail the same way in 2026 as it failed in 1998, 2008, 2015, and 2025.
Jon Hansen developed Strand Commonality Theory in 1998, funded by Canada’s SR&ED program. The agent-based procurement system he built for the Department of National Defence achieved 97.3% delivery accuracy and 23% cost reduction — results validated by 27 years of consistent evidence and now by peer-reviewed academic research. His methodology — the Hansen Fit Score — applies the same principle: observe all the agents, measure the convergent patterns, assess readiness before deployment. Because the technology has never been the constraint.
-30-
The Independent 2025 Validation
Reference: Khussanov, A.; Kaldybayeva, B.; Prokhorov, O.; Khussanov, Z.; Kenzhebekov, D.; Yevadilla, M.; Janabayev, D. “Agent-Based Simulation Modeling of Multimodal Transport Flows in Transportation System of Kazakhstan.” Logistics 2025, 9(4), 172.
Link: https://doi.org/10.3390/logistics9040172
Published: November 28, 2025 | Open Access | Editor’s Choice