A Framework in Action: Mapping the Hansen Governance-First Data Fabric™ to a Real Engagement
By Jon W. Hansen | Procurement Insights
The Problem with Frameworks
Every consulting firm has a framework. Most of them look convincing on a slide and collapse on contact with reality.
The test of a framework isn’t whether it organizes ideas neatly. It’s whether you can point to a real engagement and show each layer operating — what it caught, what it changed, and what would have failed without it.
The Hansen Governance-First Data Fabric was built from exactly this kind of evidence. What follows is a walkthrough of a documented engagement — Canada’s Department of National Defence MRO procurement platform — mapped layer by layer against the framework. Every moment is timestamped to the original video discussion. Nothing is theoretical.
The contract required 90% next-day delivery. The incumbent was delivering 51%. They were about to lose the contract.
Within three months, delivery hit 97.3%. Over seven years, costs dropped 23%. The buying group compressed from 23 buyers to three.
Here’s how each layer of the Data Fabric operated to produce those outcomes.
Phase 0: The Readiness Layer
What happened (Video: 1:03–1:17)
SHL, the incumbent contractor managing the DND platform, came with a clear request: “Automate our system. We need to automate.”
The response: “Hold on. Hold on a sec. Let me ask a few questions.”
What this layer caught
The client had already diagnosed their own problem — incorrectly. They assumed the failure was technological. The system was too slow. Automation would fix it.
Phase 0 is the discipline of testing that assumption before acting on it. Instead of accepting the client’s diagnosis and building what they asked for, the first move was to assess whether the organization was ready to absorb the solution they were requesting — and whether the solution they were requesting was even the right one.
One question changed the entire engagement: “What time of the day do orders come in?”
They looked confused. “What?”
“Just trust me. Give me an answer.”
“Most of the orders come in at 4:00.”
That answer revealed that the problem wasn’t technology. It was behavior, incentives, and organizational structure — none of which automation would address.
The lesson
Without Phase 0, this engagement would have produced an automated system that processed sandbagged orders faster. Delivery would have marginally improved. The underlying behavioral problem would have remained invisible. The client would have paid for a technology solution to a governance problem.
Phase 0 is the layer that asks: Is the organization ready to act on what the data reveals — or will it automate its own dysfunction?
Data Sources and Systems
What happened (Video: 1:33–2:02)
Research revealed two distinct commodity characteristics that the existing system treated identically:
Historic Flatline — products with stable pricing, suitable for centrally negotiated contracts through ERP platforms. Price stays steady. Standard procurement processes work.
Dynamic Flux — products whose price changes dramatically within hours. A part costing $100 at 9:00 AM costs $1,000 by 4:00 PM. Centrally negotiated contracts become irrelevant almost immediately because the locked-in price bears no relationship to current market value.
What this layer caught
The existing data sources treated all commodities as equivalent inputs. The ERP system processed them through identical workflows. But the data’s behavioral characteristics were fundamentally different — and that difference was invisible to the technology.
This is the trust classification principle. Not all data sources carry equal weight or behave the same way. Historic flatline data is trusted — stable, predictable, suitable for automated processing. Dynamic flux data is contested — volatile, context-dependent, requiring different governance and different decision timing.
The system had the data. It didn’t have the classification that made the data meaningful.
The lesson
Raw data is not truth. It is potential meaning. Two commodity types flowing through the same pipeline, treated identically, producing radically different outcomes. The Data Sources layer requires classification by behavioral characteristics, not just schema.
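To make the classification idea concrete, here is a minimal sketch in Python of sorting commodities into the two behavioral classes by observed intraday price swing. The threshold, field names, and sample parts are illustrative assumptions, not the logic of the original system.

```python
from dataclasses import dataclass

@dataclass
class PriceSample:
    sku: str
    price_9am: float   # morning quote
    price_4pm: float   # end-of-day quote

def classify(sample: PriceSample, flux_threshold: float = 0.25) -> str:
    """Label a commodity 'dynamic_flux' if its intraday price swing
    exceeds the threshold, otherwise 'historic_flatline'.
    The 25% threshold is illustrative, not taken from the engagement."""
    swing = abs(sample.price_4pm - sample.price_9am) / sample.price_9am
    return "dynamic_flux" if swing > flux_threshold else "historic_flatline"

# The $100-at-9-AM, $1,000-at-4-PM part described above is clearly dynamic flux.
print(classify(PriceSample("GPU-CARD-01", 100.0, 1000.0)))   # dynamic_flux
print(classify(PriceSample("TONER-STD-12", 42.0, 43.0)))      # historic_flatline
```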
Meaning and Interpretation
What happened (Video: 2:04–3:46)
With the 4:00 PM ordering pattern identified and the dynamic flux classification established, the strands started connecting:
Orders arriving at 4:00 PM meant parts had to clear US customs after business hours — causing next-day delivery failures. The suppliers were predominantly small-medium enterprises without sophisticated customs capabilities.
Simultaneously, dynamic flux products ordered at 4:00 PM carried prices ten times higher than the same products ordered at 9:00 AM.
But the critical connection — the strand commonality — went deeper. The service department technicians were incentivized to maximize service calls per day. Policy required ordering parts after each call. But because the ordering system was cumbersome, technicians would sandbag — holding all orders until end of day so they could hit their call targets first.
The result: technicians hit their call volume targets. But their call close rates were terrible because the parts they needed to complete repairs weren’t arriving on time. They didn’t see the connection.
What this layer caught
Three seemingly unrelated data points — ordering time, price volatility, and call close rates — were causally linked through agent behavior that no individual data source revealed.
This is strand commonality operating in the Meaning and Interpretation layer. The data existed in separate systems. The ERP recorded orders. The service system tracked call volumes. The finance system tracked costs. No system connected ordering time to price impact to delivery failure to service quality degradation.
The interpretation fabric connects what the data systems cannot: the meaning that emerges only when disparate strands are read together, in context, by someone asking the right questions.
The lesson
This is where data becomes knowledge and knowledge becomes judgment. No algorithm connected these strands in the late 1990s. No algorithm connects them reliably today without the interpretation layer governing what to look for and why it matters. Connected data without meaning alignment produces connected noise.
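A rough sketch of what reading the strands together looks like in practice: joining records from systems that never talk to each other until the pattern surfaces. The data shapes and field names below are assumptions for illustration only, not reconstructions of the original ERP, service, or finance systems.

```python
import pandas as pd

orders = pd.DataFrame({            # ERP strand: what was ordered and when
    "tech_id": [1, 1, 2],
    "order_hour": [16, 16, 9],
    "unit_price": [950.0, 980.0, 100.0],
})
calls = pd.DataFrame({             # service strand: call volume and close rate
    "tech_id": [1, 2],
    "calls_per_day": [9, 6],
    "close_rate": [0.45, 0.88],
})

# Read the strands together: late ordering lines up with inflated prices
# and poor close rates for the same technician.
strands = orders.groupby("tech_id").agg(
    avg_order_hour=("order_hour", "mean"),
    avg_unit_price=("unit_price", "mean"),
).join(calls.set_index("tech_id"))

print(strands)
# tech 1: orders near 4 PM, pays roughly 10x more, closes fewer calls
# tech 2: orders at 9 AM, pays market price, closes most calls
```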
Agent Network (Human + AI)
What happened (Video: 2:48–6:58)
With the behavioral pattern identified, the engagement mapped every agent in the system — not just the technology users, but every entity whose behavior shaped outcomes:
Agent 1: Service Technicians — incentivized for call volume, sandbagging orders, unknowingly driving up costs and killing delivery performance. Their rational behavior within their incentive structure produced irrational system outcomes.
Agent 2: Buyers — needed flexibility to weight supplier rankings differently depending on urgency. When next-day delivery was critical, the system ranked by delivery performance. When price was the priority, the buyer could manually shift the weighting and the algorithms would automatically recalculate and rerank suppliers. Traceable human judgment within an algorithmic framework.
Agent 3: Small-Medium Suppliers — lacked sophisticated technology capabilities. The system had to be easy to bid, easy to respond. Supplier expansion — at a time when the industry was pushing vendor rationalization — was the strategy. More suppliers meant more competition, better pricing, better geographic coverage.
Agent 4: UPS (Courier) — integrated directly into the system. As soon as a purchase order was generated, the system hooked into UPS, generated the waybill number, pre-printed shipping documents, and automatically dispatched pickup. The supplier didn’t have to call a courier or manage logistics.
Agent 5: Canada Customs — the third barrier to next-day delivery. Computer parts could clear customs on a priority basis if the documentation was correct. The system added a third automated form — properly completed customs clearance documentation generated simultaneously with the PO and waybill. The supplier received three documents in one package: purchase order, shipping waybill, and customs clearance.
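The "one purchase order, three documents" pattern can be sketched in a few lines. The courier and customs calls below are stand-in stubs, not UPS or Canada Customs interfaces, and every field name is an illustrative assumption rather than a detail of the original platform.

```python
from dataclasses import dataclass
from datetime import date
import uuid

@dataclass
class ShipmentPackage:
    purchase_order: dict
    waybill: dict
    customs_form: dict

def issue_documents(po: dict) -> ShipmentPackage:
    waybill = {                      # stand-in for a courier booking step
        "waybill_no": str(uuid.uuid4())[:12],
        "pickup_date": date.today().isoformat(),
        "ship_to": po["ship_to"],
    }
    customs_form = {                 # stand-in for priority-clearance paperwork
        "commodity_code": po["commodity_code"],
        "declared_value": po["value"],
        "priority_clearance": True,
    }
    return ShipmentPackage(po, waybill, customs_form)

pkg = issue_documents({
    "po_number": "PO-0001", "ship_to": "Ottawa, ON",
    "commodity_code": "8473.30", "value": 980.0,
})
print(pkg.waybill["waybill_no"], pkg.customs_form["priority_clearance"])
```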
What this layer caught
Five agents, each with different capabilities, incentives, and constraints. The traditional approach would have automated the procurement transaction — the buyer-supplier exchange. The agent-based approach mapped every entity whose behavior affected the outcome and designed governance for each one.
The service technicians couldn’t be forced to change overnight. But by making the system produce results that demonstrated the value of timely ordering — parts arriving on time, call close rates improving — their behavior would gradually shift. The system was designed around agent behavior, not against it.
The lesson
Outcomes emerge from interactions, not algorithms. Five agents, three of them external to the organization, each governed within the fabric rather than ignored by it. A traditional data fabric would have connected the buyer to the supplier. This agent network connected everyone whose behavior determined whether the delivery arrived on time.
Decision and Accountability
What happened (Video: 4:54–5:25, 7:13–7:28)
The system wasn’t a black box. The buyer had visible, traceable decision authority:
The self-learning algorithms weighted supplier performance across delivery history, quality, current pricing, and geographic proximity. The system ranked suppliers automatically based on these weighted criteria.
But the buyer could override. If delivery urgency was paramount, the system ranked by performance. If price was the priority, the buyer manually adjusted the weighting, and the system recalculated in real-time. Every decision — automated or manual — was traceable.
A centralized manager — what was called the “train yard management system” — could see what each buyer was doing in real-time. The system learned from every transaction, and the learning was visible.
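A minimal sketch of buyer-adjustable ranking, assuming normalized 0-to-1 scores on the four criteria described above: delivery, quality, price, and proximity. The weights and supplier data are invented for illustration; the original self-learning algorithms were considerably richer than this.

```python
def rank_suppliers(suppliers, weights):
    """Score each supplier on weighted 0-1 criteria and sort best-first."""
    def score(s):
        return sum(weights[k] * s[k] for k in weights)
    return sorted(suppliers, key=score, reverse=True)

suppliers = [
    {"name": "Acme",  "delivery": 0.97, "quality": 0.90, "price": 0.60, "proximity": 0.80},
    {"name": "Borex", "delivery": 0.70, "quality": 0.85, "price": 0.95, "proximity": 0.50},
]

# Default: next-day delivery is critical, so delivery dominates the weighting.
urgent = {"delivery": 0.5, "quality": 0.2, "price": 0.2, "proximity": 0.1}
# Buyer override: price is the priority; the ranking recalculates immediately.
price_first = {"delivery": 0.1, "quality": 0.2, "price": 0.6, "proximity": 0.1}

print([s["name"] for s in rank_suppliers(suppliers, urgent)])       # Acme first
print([s["name"] for s in rank_suppliers(suppliers, price_first)])  # Borex first
```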
What this layer caught
Accountability wasn’t an afterthought bolted onto analytics. It was built into the architecture:
Who decided — the buyer, with algorithmic support.
What was decided — which supplier, at what price, with what delivery expectation.
Why — the weighting rationale, whether performance-driven or price-driven.
With what confidence — the self-learning algorithm’s historical accuracy, improving with every transaction.
With what evidence — traceable performance data across delivery, quality, and cost.
The lesson
Trustworthy decisions require traceable responsibility. The system recommended. The human decided. The rationale was visible. The manager could audit. The algorithm learned. This is what accountability looks like when it’s designed into the fabric rather than requested after something goes wrong.
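For readers who want the accountability dimensions in a more concrete form, here is a minimal sketch of a decision record that captures who, what, why, confidence, and evidence for each transaction. The field names and logging approach are illustrative assumptions, not the original design.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    who: str            # buyer ID; algorithmic support is noted in 'why'
    what: str           # supplier, price, delivery expectation
    why: str            # weighting rationale (performance- or price-driven)
    confidence: float   # algorithm's historical accuracy at decision time
    evidence: dict      # traceable delivery / quality / cost data
    timestamp: str = ""

    def log(self) -> str:
        self.timestamp = datetime.now(timezone.utc).isoformat()
        return json.dumps(asdict(self))

rec = DecisionRecord(
    who="buyer-07",
    what="Acme @ $980, next-day",
    why="delivery-weighted ranking; urgency flag set",
    confidence=0.93,
    evidence={"delivery_hist": 0.97, "quality": 0.90, "price_rank": 2},
)
print(rec.log())   # a record the centralized manager can audit
```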
The Outcome: What Happens When All Layers Operate
The numbers (Video: 7:04–7:57)
Delivery: 51% next-day → 97.3% next-day — within three months
Cost: 23% reduction in cost of goods — sustained over seven years
Buying group: 23 buyers → 3 — within 18 months
Revenue: $3.2 million in sales with $2.1 million profit in the final year
What no single layer produced
No individual layer generated these results. Phase 0 prevented the wrong solution. Data classification revealed hidden behavioral differences. The interpretation fabric connected strands no single system could see. The agent network governed five entities across three organizations. The accountability layer made every decision traceable and every algorithm auditable.
The fabric held because every layer was present.
The Vendor Rationalization Counterexample
What happened (Video: 12:00–13:30)
After the DND success, a call came from one of the largest PC retailers in the United States. They had implemented a vendor rationalization strategy — compressing hundreds of suppliers down to 100. The logic was conventional: leverage volume purchasing for better prices and reduce administrative costs.
It worked — for about two years. Then savings started diminishing. They asked for an assessment.
The finding: they were paying 21% over market price. By compressing their supplier base to 100, they had created a closed ecosystem that became their only source of truth. They lost visibility into the broader market. The savings from reduced administration were overwhelmed by the pricing inefficiency of a restricted supply base.
Which layer was missing?
Every layer. No Phase 0 assessment of whether vendor rationalization was the right strategy for their commodity profile. No data classification distinguishing stable categories from volatile ones. No interpretation layer connecting declining savings to market isolation. No agent network analysis of how a restricted supplier base changes competitive dynamics over time. No accountability framework for tracking whether the strategy continued to deliver value after year two.
They had a technology strategy. They had no governance fabric.
The lesson
This is what a torn data fabric looks like. The data existed. The systems worked. The strategy was logically coherent. And it produced 21% overspend because no one measured the gap between capability and readiness.
From Conceptual to Instructional
The Hansen Governance-First Data Fabric isn’t a diagram to admire. It’s a diagnostic tool to apply.
Every organization considering a data fabric investment can use this framework to answer five questions before writing a single line of code:
Phase 0: Have we tested our assumptions about what the problem actually is — or are we automating our current dysfunction?
Data Sources: Have we classified our data by behavioral characteristics — or are we treating all inputs as equivalent?
Meaning and Interpretation: Are we connecting the strands between systems — or are we building faster pipelines for disconnected data?
Agent Network: Have we mapped every agent whose behavior affects outcomes — or are we designing for the buyer-supplier transaction alone?
Decision and Accountability: Is every decision traceable, auditable, and owned — or does the system produce recommendations that no one is accountable for acting on?
If any layer is missing, the fabric will tear.
The DND engagement answered all five. That’s why it produced 97.3% next-day delivery and 23% cost savings over seven years.
The vendor rationalization engagement answered none. That’s why it produced 21% overspend within two years.
The framework predicts both outcomes.
Jon Hansen is the founder of Hansen Models™ and creator of the Hansen Method™, a procurement transformation methodology developed over 27 years. He operates Procurement Insights, an 18-year archive documenting procurement technology patterns.