When the Model Is the Problem: Why Strand Commonality™ and Strand Stability™ Matter More Than Ever in the Age of AI

Posted on April 16, 2026



This post was prompted by two converging signals. The first: a LinkedIn discussion on a McKinsey post in which Andrey Dobrovolsky offered a thoughtful defence of architecture-first AI deployment models — and raised the concept of “Protection of the Fool” as a mechanism for handling system variability. The second: Keith King’s summary of Paul Noble’s Forbes piece on ISO 25500 as a foundational data governance standard for AI-driven commerce. Both contributions are serious and worth reading. What follows is the question neither fully answers — and the 27-year proof case that explains why.


Most organizations work hard to solve the problems they can see. They invest in better systems, cleaner data, and more efficient processes — and often get measurable improvements as a result. But there is a hidden risk in that approach. If the problem itself has not been fully understood, all that effort can end up optimizing the wrong outcome. You become more efficient. You do not necessarily become more effective.

This is the distinction that most transformation frameworks miss — and that AI, deployed without addressing it, will accelerate past the point of recovery.


Strand Commonality™ and Strand Stability™

Strand Commonality™ identifies the connections across parts of a system that are usually treated as separate — data, processes, behaviors, incentives, governance. Those connections exist whether the model accounts for them or not. In most organizations, it doesn’t. Each domain operates within its own definition of success, managed by its own owner, measured against its own metrics. What no one is tracking is what happens at the intersections.
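What "tracking the intersections" can look like in practice is easiest to see in code. The sketch below is entirely hypothetical: the two silo datasets, their column names, and the coupling between them are invented for illustration, not drawn from any engagement. The point is only that the signal is computable, yet no single silo owner is positioned to compute it.

```python
import numpy as np

rng = np.random.default_rng(42)
days = 250

# Silo 1: field service. Its owner tracks calls_per_day and nothing else.
calls_per_day = rng.normal(20, 3, days)

# Silo 2: logistics. Its owner tracks late_deliveries and nothing else.
# Hidden coupling: busier field days push parts orders later in the day,
# which drives lateness -- a connection neither dashboard computes.
late_deliveries = 0.4 * calls_per_day + rng.normal(0, 1, days)

# Inside each silo, each metric looks like ordinary day-to-day noise.
# The signal only exists at the intersection, across the silo boundary:
r = np.corrcoef(calls_per_day, late_deliveries)[0, 1]
print(f"cross-silo correlation: {r:.2f}")  # strongly positive, ~0.75
```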

Strand Stability™ goes one step further: it confirms whether those connections actually reflect how the system behaves under real operational pressure. And that requires something most organizations skip — questioning whether the model they are working from is complete in the first place.

This is not a technical gap. It is a framing one. Most systems assume all relevant variables are already inside the model. What that assumption rules out is the possibility that the most important variable sits entirely outside it. ISO and governance frameworks can strengthen the architecture, but they still operate within the model’s current boundary. The boundary itself has to be challenged before any of them can be trusted to hold.


The Question That Explains Everything

A simple question once revealed a major gap in a system that otherwise appeared to be functioning: “What time of day do orders come in?”

The question was not in the data model. It was not in the process design. It was not in the performance metrics. And when it was first asked, the reaction was confusion.

That reaction is the signal. When a diagnostic question produces confusion instead of engagement, it usually means the question lives outside the accepted model — and that is often exactly where the real problem is.

In this case, the answer revealed that technician incentives, order timing, customs windows, price escalation, and delivery performance were all causally connected through a single behavioral pattern the system had never been designed to observe.

The service technicians were measured on two metrics: call response (how quickly they arrived at a customer site) and call closing (how quickly they resolved the issue and completed the repair). To maximize their call response numbers, technicians had developed a practice of sandbagging parts orders: holding them until the end of the day. By batching orders at 4 PM, they could squeeze more response calls into each day and hit their front-end targets. From inside their lane, the behavior made complete sense.

What the model had never mapped was what happened downstream. Orders placed at 4 PM missed same-day customs clearance windows. Parts sourced from suppliers — many of them SMEs with limited customs documentation experience, each using their own choice of courier — arrived one to three days late. The repair couldn’t be completed. The call stayed open. And the metric that ultimately mattered — call closing — suffered directly as a result of the behavior that was protecting the metric that was measured first.
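Here is what that loop-back trace can look like as a query: a minimal pandas sketch, assuming hypothetical extracts, invented timestamps, and a notional mid-afternoon customs cutoff. The structure, not the data, is the point; the answer only exists once order timestamps and delivery outcomes are joined across silos.

```python
import pandas as pd

# Hypothetical extracts from two systems that normally never meet:
orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4, 5, 6],
    "placed_at": pd.to_datetime([
        "2026-03-02 09:10", "2026-03-02 16:05", "2026-03-03 10:40",
        "2026-03-03 16:20", "2026-03-04 08:55", "2026-03-04 16:12",
    ]),
})
deliveries = pd.DataFrame({
    "order_id": [1, 2, 3, 4, 5, 6],
    "delay_days": [0, 2, 0, 3, 0, 2],  # days past the next-day target
})

# The question outside the model: what time of day do orders come in?
merged = orders.merge(deliveries, on="order_id")
missed_cutoff = merged["placed_at"].dt.hour >= 15  # notional customs cutoff
print(merged.groupby(missed_cutoff)["delay_days"].mean())
# False (cleared customs same day) -> 0.0
# True  (the 4 PM batch)           -> ~2.3
```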

The sandbagging was not a discipline problem. It was a structural incentive problem produced by a model that had never connected the front-end metric to the back-end outcome.

I am often asked: “How did you get the technicians to stop sandbagging?” The answer is: I didn’t start with the technicians.

The answer came from the loop-back. When we traced the full chain, it became clear that the behavior intended to meet one metric was directly causing failure in the second and most important metric:

  • Parts were ordered late, often at higher prices
  • Delivery slipped by one to three days
  • Repairs missed next-day resolution targets
  • Customer dissatisfaction increased
  • Contract risk escalated

Once that connection became visible, the system corrected itself. Not because technicians were told to change. But because the incentive structure and timing logic were realigned with the actual end objective.

Delivery performance moved from 51% to 97.3% in 90 days and held there for seven years. The model hadn’t failed. It had simply been incomplete.
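A toy simulation makes the mechanism concrete. The cutoff, the probabilities, and the order-hour mixes below are all invented, chosen only so the before/after split loosely echoes the 51% to 97.3% shift; this is a sketch of the logic, not a reconstruction of the engagement data.

```python
import random

random.seed(7)
CUSTOMS_CUTOFF = 15  # notional hour after which same-day clearance is missed

def on_time_rate(order_hours, n=10_000):
    """Fraction of repairs hitting the next-day target under a policy."""
    hits = 0
    for _ in range(n):
        hour = random.choice(order_hours)
        if hour < CUSTOMS_CUTOFF:
            hits += random.random() < 0.98  # normal fulfilment, rarely late
        # else: the 1-3 day customs slip means the target is always missed
    return hits / n

# Before: roughly half of all parts orders held for the 4 PM batch.
batched = [16, 16, 16, 16, 16, 9, 10, 11, 13, 14]
# After: incentives realigned, orders placed when the need is identified.
realigned = [9, 10, 11, 13, 14]

print(f"batched:   {on_time_rate(batched):.1%}")    # ~49%
print(f"realigned: {on_time_rate(realigned):.1%}")  # ~98%
```

Nothing in the simulation tells the technicians to behave differently; changing the distribution of order hours is the entire intervention, which is the sense in which the system "corrects itself."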

That is the difference between solving the problem you see and discovering whether the problem has been correctly defined.


Why No One Answers the Question

That question, first asked during a DND engagement, has been posed in many forms and in many contexts. It remains unresolved, not because the question is unanswerable, but because answering it honestly requires acknowledging something most frameworks are not built to accommodate:

The system doesn’t know what it doesn’t know.

Most systems are designed to detect patterns within their defined boundaries. What they cannot detect is the variable that was never included in the model in the first place. Architecture-first models detect anomalies within the system. Telemetry surfaces patterns across the data the system was configured to collect. Digital twins simulate behavior within the parameters they were built around. None of these approaches can reliably surface the variable that was never part of the model’s definition — because the model doesn’t know to look for it.
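The constraint can be stated in a dozen lines. The detector below is a plain z-score check standing in for any boundary-bound monitoring, with a feature list as its model boundary; every name and number here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# The telemetry the system was configured to collect:
records = {
    "response_minutes": rng.normal(45, 5, n),    # looks healthy
    "parts_cost":       rng.normal(200, 20, n),  # looks healthy
    # "order_hour" was never modeled, so it was never collected at all.
}
MODEL_BOUNDARY = ["response_minutes", "parts_cost"]

def count_anomalies(features, z=3.0):
    """Flag rows more than z standard deviations out, per configured feature."""
    flags = np.zeros(n, dtype=bool)
    for f in features:
        x = records[f]
        flags |= np.abs(x - x.mean()) > z * x.std()
    return int(flags.sum())

print(count_anomalies(MODEL_BOUNDARY))  # a handful of flags: "all clear"
# The 4 PM batching pattern lives in order_hour. No threshold, ensemble,
# or digital twin built over MODEL_BOUNDARY can surface it, because the
# variable was never inside the boundary to begin with.
```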

This is not a failure of technology. It is a constraint of framing. And no amount of processing power, optimization, or automation resolves it, because the problem is upstream of all of those things.


What This Means for AI

In a lower-velocity environment, the consequences of an incomplete model arrive slowly. Organizations have time to discover misalignment mid-implementation and course-correct before the damage compounds.

AI removes that buffer. When AI is deployed into an environment where the model boundaries have not been validated, it does not correct the incompleteness. It scales it, at a pace that outstrips the organization's capacity to recognize and recover from what has gone wrong.

This is why Strand Commonality™ is not a theoretical framework. It is a pre-deployment discipline. The work of mapping the cross-silo connections — of asking the questions that sit outside the accepted model — has to happen before the investment is made, before the technology is deployed, and before the efficiency gains are locked into an environment that was never correctly defined.

Otherwise, we do not become more effective. We become very good at operating inside a misdefined system. And in an AI-driven world, that distinction does not just persist. It accelerates.

AI doesn’t solve the problem. It reveals whether you defined the problem correctly in the first place.


The Unresolved Question

How would agentic AI have known to ask:

“What time of day do orders come in?”

That question has been standing for years. It remains unresolved — not because the answer is hidden, but because resolving it requires acknowledging a limit that most frameworks are not yet willing to name.

Strand Commonality™ reveals the connections. Strand Stability™ validates whether those connections reflect reality. But neither is possible without first doing the one thing most transformation programs skip: challenging the boundary of the model before optimizing within it.

That gap has to be addressed first: before the investment, before the deployment, and before efficiency is locked into a system that was never correctly defined.

That is the role of Phase 0™.


Jon W. Hansen is the Founder of Hansen Models™ and Procurement Insights — 27 years, 3,300+ documents, zero vendor sponsorships. For more information on Phase 0™ Diagnostics, visit hansenprocurement.com.

-30-

Posted in: Commentary