Atlan Is Building the Context Layer. The Question It Cannot Answer Was Answered Manually in 1998.
By Jon W. Hansen | Procurement Insights | April 23, 2026
In 1998, before a single line of RAM code was written, a manual process at Canada’s Department of National Defence moved delivery performance from 51% to 97.3% in three months, and sustained it for seven years. The theory behind that process — Hansen Strand Commonality™ — had been articulated before the work began. It was not discovered in the data. It was proven by the data.
That distinction matters more in 2026 than it did in 1998.
Atlan — one of the most credible and well-funded players in the current enterprise AI market, recognized as a Leader in the 2025 Gartner Magic Quadrant for Metadata Management Solutions and the 2026 Gartner MQ for Data & Analytics Governance — has just published a serious architectural position. Their Context Engineering Studio, Context Lakehouse, and Context Agents are positioned as the infrastructure layer for enterprise AI. Their core diagnostic is sharp: “The wall isn’t the models. It’s that no agent can reason effectively about a business it doesn’t understand.”
They are not wrong about what they are diagnosing.
They are building, at considerable technical sophistication, an answer to a question that arises from one perceptual starting point — the assumption that context can be captured and versioned, rather than continuously read. A different starting point leads to a different question.
What Atlan Is Solving For
To engage the argument seriously, it has to be stated accurately. Inconsistency across agents destroys trust in the whole deployment. That is a real problem, and Atlan has built a credible technical solution to it: through their Context Lakehouse, the same question asked of three different agents produces the same answer. Whether that harmonized answer is actually correct is a different question — one the architecture is not asked to answer.
The architectural assumption underneath the solution — the one the architecture itself does not surface — is that consistent context is equivalent to valid context. If every agent is reading from the same versioned semantic layer, the reasoning is assumed to be sound.
That assumption is the painter-and-canvas paradigm. The canvas is careful. The attribution is precise. The versioning is disciplined. And the moment the painting is complete, reality begins to move on.
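The gap between consistency and validity is easy to state in miniature. The sketch below uses hypothetical names and values, not Atlan's actual APIs: every agent reading the same versioned context will agree with every other agent, and none of that agreement says whether the versioned value still matches what the operating environment is doing today.

```python
# Illustrative sketch only -- hypothetical names and values, not Atlan's API.
# "Consistent" asks: do all agents report the same answer?
# "Valid" asks: does that shared answer still match what is happening now?

versioned_context = {"supplier_lead_time_days": 2}    # the agreed, versioned snapshot
live_observation = {"supplier_lead_time_days": 5}     # what the current order stream shows

def agent_answer(context: dict) -> int:
    """Every agent reads the same semantic layer, so every agent agrees."""
    return context["supplier_lead_time_days"]

answers = [agent_answer(versioned_context) for _ in range(3)]

consistent = len(set(answers)) == 1                                  # True: same map for everyone
valid = answers[0] == live_observation["supplier_lead_time_days"]    # False: the map is stale

print(f"consistent={consistent}, valid={valid}")      # consistent=True, valid=False
```

A coordination layer makes `consistent` true by construction; nothing in it is asked to compute `valid`.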
What Was Proven Manually in 1998
The DND engagement began with a question that no existing procurement data model would have prompted: “What time of day do orders come in?”
That question was not a lucky guess. It was Strand Commonality™ being applied. The theory — developed before the DND work and documented in the research record — held that human agents operating in siloed functions are routinely treated as operationally unrelated, even when their attributes are causally connected at specific points of intersection. The Service department was one strand. The Procurement department was another. Both were organizationally separate, institutionally separate, and treated by the existing procurement data model as independent. Time-of-day — when Service generated its requisitions — was an attribute of the first. Delivery performance was an attribute of the second. The theory predicted their attributes would intersect. They did.
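Stated as a structure rather than a narrative, the theory treats each department as a strand carrying its own attributes and asks where attributes on different strands are causally linked. A minimal sketch of that structure follows, with hypothetical field names standing in for the 1998 data model:

```python
# Illustrative sketch only -- hypothetical field names, not the 1998 DND data model.
from dataclasses import dataclass, field

@dataclass
class Strand:
    """A human or organizational agent carrying its own observed attributes."""
    name: str
    attributes: dict = field(default_factory=dict)

# Two organizationally separate strands, treated as unrelated by the silo model.
service = Strand("Service", {"requisition_time_of_day": "end-of-shift batch"})
procurement = Strand("Procurement", {"next_day_delivery_rate": 0.51})

def intersection(a: Strand, attr_a: str, b: Strand, attr_b: str) -> str:
    """Surface a candidate causal link between attributes on two different strands."""
    return f"{a.name}.{attr_a} -> {b.name}.{attr_b}"

# The 1998 diagnostic question is, in effect, a test of this intersection.
print(intersection(service, "requisition_time_of_day",
                   procurement, "next_day_delivery_rate"))
```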
The manual process built around that intersection moved delivery performance from 51% to 97.3% within three months, and held for seven years. No automation. No code. A theory, applied by hand, to a real operating environment, producing a measurable outcome that could not be attributed to better software.
Only after the manual proof did RAM 1998 get built — to automate a method that had already been validated.
That sequence is the point. The theory was not discovered by the system. The system was built to operationalize a theory that had already been proven.
What the Diagnostic Question Actually Uncovered
The question exposed a loopback that the silo structure had hidden from the people closest to it.
Technicians in the field were measured on calls per day. That was their performance metric. The rational way to maximize it was to drop off the day’s parts requisitions at the end of the shift, in a batch, when the call count was already locked in. Sandbagging the requisitioning pattern protected the calls-per-day number. From inside the technician silo, the behavior was optimal.
Procurement was measured on a next-day 90% parts delivery SLA. When requisitions arrived in a late-day batch, the supplier lead time did not accommodate it. Procurement missed the SLA — not because of anything procurement did, but because of a pattern that had been created one silo upstream. The failure was inherited.
Executive pressure followed the failure to where it was visible. Procurement was asked to fix it. Procurement pressured the technicians. The technicians did not change. Why would they? Their metric was not harmed by procurement’s SLA failure. The pressure was asymmetric — it carried no weight against the parameter space the technicians were actually being measured on.
What changed behavior was not the pressure. It was the discovery of a second attribute on the technician’s own strand. Call closing percentage — whether the technician was resolving customer problems on the first visit — was quietly suffering, because the technician could diagnose the problem but could not fix it without the part, and the part was not on the truck the next morning because yesterday’s requisition had been sandbagged into a late-day batch.
Two attributes. Same technician. Same strand. Causally connected through a one-day temporal delay. Invisible to both attributes’ owners because the silo structure had never surfaced the connection.
Once the connection was visible, behavior changed immediately. Not through change management. Not through coordination work. Not through accountability for procurement’s contract. The technicians changed because they could see their own internal contradiction. The first-order metric (calls per day) was cannibalizing the second-order metric (call closing percentage). Nobody was asking them to care about procurement’s SLA. They were being shown that their own performance was being degraded by their own behavior — through an attribute the silo structure had hidden from them.
That is Strand Commonality™ at its most surgical: not intersecting two different agents’ attributes, but surfacing two attributes on the same agent that the silo structure had kept from connecting.
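The same point can be put in code form. The numbers below are invented purely for illustration; the sketch only shows the shape of the loopback: a late-day requisition batch protects today's call count and, one day later, degrades the same technician's closing percentage.

```python
# Illustrative sketch only -- invented numbers, not the DND figures.

def next_day_close_rate(requisitions_batched_late: bool) -> float:
    """
    The one-day temporal delay on a single strand: a requisition dropped in a
    late-day batch misses the supplier cutoff, the part is not on the truck the
    next morning, and the return visit fails to close the call.
    """
    part_on_truck_next_morning = not requisitions_batched_late
    close_rate_with_part = 0.90       # hypothetical: problem fixed on the next visit
    close_rate_without_part = 0.60    # hypothetical: another truck roll, another day
    return close_rate_with_part if part_on_truck_next_morning else close_rate_without_part

# Same technician, two attributes on one strand:
protects_calls_per_day = True   # batching requisitions locks in today's call count...
print(next_day_close_rate(requisitions_batched_late=protects_calls_per_day))  # ...at 0.6 tomorrow
```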
The Camcorder View — Published in 2005, Republished in 2008
The articulation most directly relevant to the current Atlan moment was published in a 2005 paper and republished on Procurement Insights in April 2008 under the title “Similarity Heuristics, Iterative Methodologies and the Emergence of the Modern Supply Chain.”
The passage reads: “To effectively capture the dynamic elements within this kind of environment, a different methodology such as strand commonality (re camcorder) must be employed to ensure that an accurate picture is captured on an ongoing basis, thereby bridging or synchronizing the chasms between multiple transactional streams.”
The distinction the 2008 post was making — camcorder versus canvas — is the same distinction that separates ARA™-driven RAM 2025™ from Atlan’s Context Lakehouse eighteen years later.
A canvas is a snapshot. It can be versioned, time-traveled, and audited, but versioning only yields a sequence of snapshots. A camcorder is continuous. The meaning of any single frame is contextualized by the frames before and after it. Reality is captured as it unfolds, not represented after the fact.
Atlan has built a very sophisticated gallery of canvases, with careful attribution of who painted each one and when. That is genuinely useful for the problem they have chosen to solve. It is not a camcorder.
The Learning Loopback Process™
The mechanism that operationalizes the camcorder view is the Learning Loopback Process™ — a four-stage architecture that has been running, in one form or another, since the 1998 DND engagement. It was formally named in published form on Procurement Insights in April 2025, in a direct comparison against SAP’s Joule feedback loops.
The four stages, with a minimal sketch of the full cycle after the list:
Observation. Each strand carries a living stream of data and events. Historical attributes accumulate over time — delivery performance, product quality, cumulative track record. Dynamic attributes surface in the moment — time-of-day availability, real-world pricing at point of request, geographic location relative to the point of need. Both streams feed the system continuously.
Action. Decisions are proposed based on current strand state, weighted against buyer-defined priority. Is the priority delivery performance? Cost? Geography? The weighting determines which strands dominate the decision, and the weighting can shift by context.
Assessment. Outcomes feed back into the strand record. A supplier that performed well under a delivery-performance priority may fall in the rankings when the priority shifts to cost. The assessment is not whether the decision was correct in the abstract — it is whether the decision remained aligned with the priority it was made under, and whether the priority itself still reflects operating reality.
Adaptation. The system adjusts continuously. Not the model — the model’s applicability to the current environment. When a strand’s behavior shifts, the adjustment is surfaced immediately rather than absorbed quietly into a versioned snapshot.
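The sketch below is a minimal rendering of that cycle under assumed names, weights, and outcomes; it is not the RAM 2025™ implementation. It shows the shape of the loop: strand state is observed, a decision is weighted against a buyer-defined priority, the outcome is folded back into the strand record, and the next pass surfaces any shift immediately.

```python
# Illustrative sketch only -- placeholder strands, weights, and outcomes.

# Observation: each strand carries historical and dynamic attributes.
strands = {
    "supplier_a": {"delivery_performance": 0.97, "unit_cost": 104.0},
    "supplier_b": {"delivery_performance": 0.88, "unit_cost": 92.0},
}

# Buyer-defined priority: which attributes dominate the decision right now.
priority_weights = {"delivery_performance": 0.8, "unit_cost": 0.2}

def score(attrs: dict, weights: dict) -> float:
    """Action: weight current strand state against the buyer-defined priority."""
    # Cost is inverted so that a lower unit cost scores higher.
    return (weights["delivery_performance"] * attrs["delivery_performance"]
            + weights["unit_cost"] * (100.0 / attrs["unit_cost"]))

def loopback_cycle(strands: dict, weights: dict, observed_outcomes: dict):
    # Action: propose a decision from current strand state.
    decision = max(strands, key=lambda s: score(strands[s], weights))

    # Assessment: fold real outcomes back into the strand record.
    for name, outcome in observed_outcomes.items():
        strands[name].update(outcome)

    # Adaptation: not retraining a model -- re-checking that the decision and the
    # priority still line up with current strand behavior before the next pass.
    revised = max(strands, key=lambda s: score(strands[s], weights))
    return decision, revised

before, after = loopback_cycle(
    strands, priority_weights,
    observed_outcomes={"supplier_a": {"delivery_performance": 0.79}},  # behavior shifted
)
print(before, "->", after)   # the shift is surfaced on the very next cycle
```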
That is the mechanism. It is twenty-eight years old. It has been applied in production, named in publication, and tested against more than one well-funded competitor at more than one layer of the enterprise AI stack.
The Atlan-Specific Distinction
Atlan’s Context Lakehouse and ARA™-driven RAM 2025™ are not competing products. They are answering different questions.
| Dimension | Atlan Context Layer | ARA™-driven RAM 2025™ |
| --- | --- | --- |
| Question being answered | Are all agents drawing from the same context? | Is the context itself still valid under current conditions? (Validity of assumptions against real-world operating behavior) |
| Failure mode it prevents | Different agents producing inconsistent answers | Consistent agents producing the same wrong answer, faster |
| Failure mode it does not address | Standardizing outdated or incorrect logic | Coordinating inconsistent inputs across disconnected systems** |
| Origin of the method | Context engineering discipline, 2024–2026 | Strand commonality theory, operationally proven at DND in 1998 |

** This is the role of Phase 0™
Both layers have a legitimate place in a working enterprise AI architecture. An organization running Atlan without a validity layer is automating the coordination of assumptions that may no longer be true. An organization running a validity layer without a coordination infrastructure is checking assumptions in isolation. The architectural error is not choosing one — it is assuming either one alone is sufficient.
Why This Is Not a New Position
This is the second time in twelve months that the Learning Loopback Process™ has been applied to a named, well-funded competitor at a different layer of the AI stack. In April 2025, it was applied to SAP’s Joule, which uses explicit feedback loops — gradient descent — to optimize procurement analytics inside S/4HANA. Joule optimizes within a pre-defined mathematical loss function. The Learning Loopback Process™ does not assume the loss function itself is correct.
In April 2026, the comparator is Atlan, which optimizes for consistency of a pre-defined semantic layer. The distinction is structurally identical. The Learning Loopback Process™ does not assume the semantic layer itself remains valid as conditions change.
The framework did not need to be retrofitted between the two comparisons. It applied to both because it is describing something more fundamental than either comparator — a way of perceiving both human and Agentic AI agents as living strands whose attributes intersect across organizational silos, rather than as represented entities in a static data model.
The Practical Implication
For procurement leaders, CIOs, CDOs, and CFOs currently evaluating context-layer infrastructure, the question to hold is simple:
Is the context your AI stack draws from being coordinated, or being continuously validated?
Coordination is a solved problem with well-resourced vendors building against it. Validation is a different problem with a smaller but longer-tested answer. The organizations that will avoid the next wave of AI write-downs will be the ones that recognize they need both — and that the validation layer is the one that has to exist first.
The canvases are worth having. But only if someone is still holding the camcorder.
Atlan helps agents work from the same map. ARA™-driven RAM 2025™ ensures the map is accurate — and keeps checking it as the terrain changes.
The mechanism that does the checking has been running since 1998.
Jon W. Hansen is founder of Hansen Models™ and the Procurement Insights archive — 3,300+ published documents, zero vendor sponsorships, in continuous operation since 2007. The foundational work began in 1998 with SR&ED-funded research at Canada’s Department of National Defence.
Hansen Models™ | Phase 0™ | Hansen Fit Score™ (HFS™) | RAM 2025™ | ARA™ (Augmented Reasoning Architecture™) | Learning Loopback Process™ | Hansen Strand Commonality™ | Implementation Physics™
hansenprocurement.com | payhip.com/hansenmodels | calendly.com/jon-toq/30min
-30-