The Valley of Death Is Still Here — Only Faster: What Hasn't Changed Since the 2006 Roundtable
Posted on April 15, 2026

In November 2006, I chaired a Government Accountability Roundtable hosted by the Canadian Advanced Technology Alliance — and wrote about it for Summit Magazine. The roundtable brought together senior public sector veterans alongside representatives from SAS Canada, Capgemini, CGI, and the University of Ottawa to examine why public sector procurement initiatives were failing at such a persistent and predictable rate.
A government-sponsored study had found that 75 to 85 percent of all initiatives undertaken by government failed — or, in their terminology, ended up in the “valley of death” between strategy and execution. That number was striking. But the explanation is what stayed with me.
Organizations were applying vertical, siloed approaches to horizontal challenges. The very nature of a silo, as I wrote at the time, is that its vertical walls create the barriers to collaborative understanding. Policies, processes, and systems were being designed within functional boundaries, while the actual challenges — supply chain performance, procurement outcomes, stakeholder coordination — cut across every one of those boundaries simultaneously.
That was nearly two decades ago.
The Pattern That Refused to Change
In the years that followed, I watched every major technology wave arrive with the same promise and encounter the same wall.
ERP was going to integrate everything. It didn’t — it automated the silos that already existed. eProcurement was going to transform the buying process. For most organizations it didn’t — it digitized the same misaligned workflows at higher speed. P2P was going to connect purchasing to payment seamlessly. It connected them within functions that still weren’t talking to each other across functions.
The 1998 DND engagement I documented put a specific face on this pattern. When I asked what time of day orders came in and the answer was four o’clock, I wasn’t uncovering a technology problem or a procurement problem. I was uncovering an incentive misalignment that had been invisible to everyone designing the system because nobody had asked the right diagnostic question before the commitment was made. Service technicians were sandbagging orders to meet their call volume targets. Prices were rising from roughly a hundred dollars at nine in the morning to nine hundred by the time the orders arrived. Delivery performance had collapsed to fifty-one percent against a ninety percent contractual requirement — and the people causing it were hitting their own numbers.
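For readers who think in code, here is a minimal sketch of the kind of time-of-day analysis that surfaces this pattern. The order records, figures, and field names below are invented for illustration; this is not the 1998 engagement data.

```python
# Hypothetical sketch of a time-of-day order diagnostic. The records
# below are invented to mirror the shape of the DND pattern, not to
# reproduce the actual 1998 data.
from collections import defaultdict

# Each record: (hour the order was placed, unit price paid, delivered on time?)
orders = [
    (9, 104.0, True), (10, 180.0, True), (11, 250.0, True),
    (13, 420.0, False), (15, 760.0, False), (16, 910.0, False),
    (16, 880.0, False), (16, 895.0, True),
]

by_hour = defaultdict(list)
for hour, price, on_time in orders:
    by_hour[hour].append((price, on_time))

for hour in sorted(by_hour):
    rows = by_hour[hour]
    avg_price = sum(p for p, _ in rows) / len(rows)
    otd = sum(1 for _, ok in rows if ok) / len(rows)
    print(f"{hour:02d}:00  orders={len(rows)}  avg price=${avg_price:,.0f}  on-time={otd:.0%}")

# A cluster of orders at 16:00, with prices near $900 and collapsing
# on-time delivery, is the signature of sandbagged requisitions: a
# behavioural finding, not a systems finding.
```

The point of the sketch is that the analysis is trivial once the right question is asked; it is asking the question before the commitment that was missing.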
The fix was not technological. It was diagnostic — understanding how the organization actually behaved before recommending any change to the system it operated within.
That engagement produced the question I have been asking in every context since: does the model reflect reality, or does it reflect what we assumed reality looked like when we designed it?
Across seven consecutive technology cycles since 1998 — ERP, eProcurement, P2P, cloud suites, analytics, RPA, and now AI — the archive documents the same pattern holding at 80 percent. The technology has changed in every cycle. The failure rate has not.
What AI Changes — and What It Doesn’t
Today the conversation has moved to AI, agentic systems, data readiness, and digital transformation. The vocabulary is different. The speed is different. The scale of the investment is different.
The underlying failure pattern is not.
Organizations are still aligning within silos, defining requirements based on partial perspectives, and assuming shared understanding where it doesn't exist. What consistently sits beneath that gap is not just process or data — it is differing definitions of success, risk, and accountability across the C-Suite. The CFO, CIO, CDO, CEO, and CPO are each evaluating the same initiative through a different functional lens, and the space between those lenses is where implementations go to die.
They are still committing to models that reflect a simplified version of how they operate — one that is clean enough to build a business case around, but not accurate enough to predict what happens when the system meets operational reality.
What AI changes is the speed at which the gap between the model and the reality becomes visible — and the scale of the damage when it does. In the eras I documented from 2006 onward, a misaligned initiative typically took two to four years to surface as a visible failure. That window gave organizations time to restructure, reframe, or quietly absorb the loss. In the AI era, the same misalignment surfaces in weeks or months — and the consequences are proportionally larger because AI learns from and scales whatever conditions it encounters. If those conditions include misaligned incentives, incomplete data assumptions, and unresolved governance gaps, the system does not simply fail. It propagates the dysfunction at machine speed.
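To make the amplification argument concrete, here is a toy simulation; it models no real system, and the growth rates and bias value are arbitrary assumptions chosen only to show the shape of the divergence between a proxy metric and the true goal it is supposed to track.

```python
# Toy simulation (illustrative only): a system that optimizes a proxy
# metric inherited from misaligned incentives will push the proxy up
# every cycle while the true operational outcome falls further behind.
import random

random.seed(42)

true_goal = 0.0      # real operational outcome (e.g., on-time delivery)
proxy_metric = 0.0   # what the system is rewarded on (e.g., call volume)
bias = 0.3           # assumed degree of incentive misalignment

for week in range(1, 9):
    # The optimizer learns to push the proxy harder each cycle...
    effort = week * (1.0 + bias)
    proxy_metric += effort
    # ...while the true goal absorbs the cost of the misalignment.
    true_goal += effort * (1.0 - 2 * bias) + random.uniform(-0.5, 0.5)
    print(f"week {week}: proxy={proxy_metric:6.1f}  true outcome={true_goal:6.1f}")

# The proxy climbs every week; the true outcome lags further behind.
# A manual process widens this gap over years. Automated at machine
# speed, the same gap widens in weeks.
```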
The valley of death has not disappeared. It has accelerated.
The Question That Was Always Missing
The 2006 roundtable identified the problem correctly: vertical approaches to horizontal challenges, functional silos applied to cross-functional realities, the failure to understand the interdependent requirements of all stakeholders before committing to a system designed to serve them.
What it did not explicitly answer was when that understanding needed to happen.
That question turns out to be the only one that matters.
Either you surface the gap between how the organization thinks it operates and how it actually behaves under operational pressure before the commitment is made — or you discover it after the outcome reveals what was missed. In practice, those are still the only two options. Everything between them is the expensive fiction that change management and post-commitment governance can substitute for pre-commitment readiness assessment.
They cannot. The archive is categorical on this point. Across every technology era since 1998, the determinant of outcome was a pre-commitment condition — governance architecture, decision authority alignment, incentive structure, cross-functional ontological compatibility. In no case was the platform capability or the change management investment the primary variable. The conditions were either right before the commitment or they were not.
That is the work I formalized as Phase 0™ — a pre-commitment diagnostic that measures governance, incentives, and cross-functional reality before any major technology or sourcing decision is approved. Not after the consequences arrive. Before the commitment is made, in the only window where the outcome can still meaningfully be changed.
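Phase 0 itself is a diagnostic methodology, not a piece of software, and nothing below reproduces the actual instrument. But the gate logic can be sketched in a few lines, using the four pre-commitment conditions named above and a hypothetical scoring scale and threshold.

```python
# Minimal sketch of a pre-commitment readiness gate. The 0-1 scale and
# the 0.7 threshold are hypothetical placeholders, not the actual
# Phase 0(TM) instrument.
from dataclasses import dataclass

@dataclass
class ReadinessScore:
    governance_architecture: float        # each scored 0.0 - 1.0
    decision_authority_alignment: float
    incentive_structure: float
    cross_functional_compatibility: float

    def gate(self, threshold: float = 0.7) -> bool:
        """Commit only if every pre-commitment condition clears the bar.
        An average would let one strong dimension mask a fatal weakness,
        so the gate is a minimum, not a mean."""
        scores = (
            self.governance_architecture,
            self.decision_authority_alignment,
            self.incentive_structure,
            self.cross_functional_compatibility,
        )
        return min(scores) >= threshold

assessment = ReadinessScore(0.9, 0.8, 0.4, 0.85)
print("Proceed to commitment?", assessment.gate())  # False: incentives fail the gate
```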
The Continuity the Market Keeps Ignoring
There is a consistent tendency in every new technology era to treat the current challenge as unprecedented — as something requiring a new framework, a new methodology, a new generation of tools. That tendency is not wrong about the technology. It is wrong about the organizational problem the technology is being asked to solve.
The organizational problem was documented in a government-sponsored study in 2006. It was documented in a DND engagement in 1998. It was documented in the Virginia eVA program across a twenty-four-year period. It is being documented now in AI implementation failure rates that are indistinguishable from the eProcurement failure rates I was writing about for Summit Magazine two decades ago.
The language has changed. The technology has evolved. The speed has increased.
The gap between how organizations think they operate and how they actually behave in practice has not closed.
By the time execution begins, the outcome is largely determined.
Final Takeaway: The valley of death was never an execution problem. It was always a pre-commitment problem. The organizations that understood that in 2006 are the ones with 24-year proof cases. The organizations that are learning it for the first time in 2026 are learning it at AI speed.
Which raises a practical question — where does your organization actually sit right now?
Jon Hansen is the founder of Hansen Models™ and Procurement Insights — 27 years, 3,300+ documents, zero vendor sponsorships.
© 2026 Procurement Insights. All rights reserved.
-30-
A Note on Hansen Strand Commonality™
The silo problem documented in the 2006 roundtable — and in every technology era since — has a structural counterpart in the diagnostic work I have been developing since 1998.
Aligning like an AutoCAD overlay with the silo architecture it encounters, Hansen Strand Commonality™ pierces the walls of misalignment by finding the unique connecting attribute of each silo and creating the transparent connection between seemingly unrelated objectives, incentives, and operational logic. Where the silo sees only its own vertical walls, Strand Commonality maps the horizontal threads that run through all of them simultaneously — the shared conditions that, once identified, make cross-functional alignment possible rather than merely aspirational.
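For the structurally minded, the mapping idea can be sketched as a simple set operation; the silo contents below are invented placeholders, and real strand identification works on far messier inputs than a clean attribute list.

```python
# Hypothetical illustration of the Strand Commonality(TM) idea: treat each
# silo as a vertical set of attributes and surface the horizontal strands
# shared across all of them. The silo contents are invented examples.
silos = {
    "Finance":     {"budget cycle", "risk tolerance", "supplier master data"},
    "IT":          {"system uptime", "risk tolerance", "supplier master data"},
    "Procurement": {"delivery performance", "risk tolerance", "supplier master data"},
}

# A strand is an attribute every silo carries, whatever it calls it locally.
strands = set.intersection(*silos.values())
print("Horizontal strands:", sorted(strands))
# -> ['risk tolerance', 'supplier master data']

# Attributes unique to one silo are its vertical walls:
for name, attrs in silos.items():
    others = set().union(*(a for n, a in silos.items() if n != name))
    print(f"{name} only:", sorted(attrs - others))
```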
I was reminded of this during a meeting in the fall of 2006 with the head of PWGSC at the Promenade office in what was then Hull — now Gatineau. He showed me a graph of the horizontal disconnect between siloed government initiatives that were actively competing rather than collaborating with each other. The visual was striking not because it was surprising — it confirmed exactly what the archive had been documenting — but because it came from inside the institution responsible for government procurement, in the same season I was chairing the CATA roundtable that produced the 75 to 85 percent failure rate finding. The people closest to the problem could see it clearly. What they lacked was the diagnostic instrument to map the connecting threads before the next initiative repeated the pattern.
That instrument is what Strand Commonality™ provides — and what the 27-year archive has been building the evidence base for.