What Mattel, North Carolina, and Virginia’s eVA Knew About AI Failure Before AI Existed
Posted on March 6, 2026
Three archive cases from 2008–2010. One pattern. And why Gartner’s 2026 AI warning arrives eighteen years late.
By Jon Hansen | Procurement Insights | March 2026
On March 6, 2026, Gartner’s weekly C-Suite newsletter — distributed to 2.1 million followers — featured a piece by Gabriela Vogel, Vice President Analyst at Gartner, introducing the concept of “organizational entrenchment” as the hidden force sabotaging AI transformation.
Her framing was precise: entrenchment is the calcification of an organization’s systems, processes, structures, incentives, and routines. Layer AI onto an entrenched organization, she argued, and you don’t fix the problems. You expose them.
It is a rigorous and important insight. It is also an insight the Procurement Insights archive documented — across three separate, unrelated cases — beginning in 2008.
This post is not a critique of Gabriela Vogel or Gartner. It is something more useful: a demonstration that the pattern she has named in 2026 has a documented history stretching back nearly two decades, and that history carries evidence no analyst briefing can replicate.
The Pattern
The archive calls it Hansen Strand Commonality™ — the identification of a structural pattern appearing across seemingly unrelated domains, organizations, and time periods.
What Gartner calls organizational entrenchment, the archive documented as something more specific: the systematic failure that occurs when an organization automates its visible workflow while leaving its underlying incentive structures, decision rights, timing logic, and human behaviors untouched.
The software goes live. The root problem stays in place. And the organization calls it an adoption issue.
Three cases from the archive illustrate this with dated, publicly verifiable evidence.
Case 1: Mattel (2008)
In February 2008, a senior Supply Chain Director from the global fashion and apparel industry contributed to a Procurement Insights Q&A on multiple supply chain networks. The discussion turned to Mattel’s product recall crisis, and the contributor named the mechanism with unusual precision.
Exclusive supply relationships, she argued, create a progressive erosion of standards that is invisible until it becomes catastrophic. Vendors and auditors within a locked supply chain recognize their position as critical and either take risks or lose focus. The organization, meanwhile, faces a calculation it cannot win honestly: the cost of holding fast to quality standards becomes greater than the risk of accepting an inferior product — until the inferior product ships and the crisis arrives.
I described this at the time as a “self-inflicted alliance” — an organization that had drifted into structural dependency not through external force but through the accumulated comfort of known relationships.
The calcification of systems, processes, incentives, and routines. Organizational entrenchment. Documented in February 2008, in the context of a toy manufacturer’s supply chain, eighteen years before Gartner named it.
The deeper point I raised then remains the most important one: Mattel’s options for developing alternative supply channels were not unlimited. Indigenous market factors — supplier concentration, logistics constraints, contractual volume commitments — had compressed the realistic range of alternatives. The organization wasn’t simply choosing comfort over standards. It was operating inside a system whose structure had quietly eliminated the ability to choose otherwise.
That is not a technology failure. That is a misread system.
Case 2: North Carolina’s At Your Service Program (2008 and 2010)
In 2004 I conducted detailed research on North Carolina’s At Your Service procurement platform — an Ariba-based initiative designed to centralize state purchasing and drive contracted savings. What impressed me at the time had nothing to do with the technology. It was the State’s approach to managing resistance from its higher education institutions.
Rather than mandating compliance, North Carolina executed a Memorandum of Understanding with its universities in November 2004 — granting what I called “collaborative autonomy.” Institutions could maintain their existing purchasing programs, but were required to cross-reference the State’s centrally negotiated contracts. If a university could procure at better value through its own relationships, it could do so — provided it shared that information with the State.
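The mechanics of that rule are worth seeing stripped to their essentials. Below is a minimal sketch of the cross-reference logic as a decision function — everything in it (the names, the prices, the `Quote` structure) is a hypothetical illustration of the MOU's logic, not a representation of the State's actual system.

```python
from dataclasses import dataclass

@dataclass
class Quote:
    """A price obtained for the same item from one purchasing channel."""
    source: str        # e.g. "state_term_contract" or "university_supplier"
    unit_price: float  # hypothetical figure, for illustration only

def choose_channel(state_quote: Quote, local_quote: Quote,
                   shared_intelligence: list) -> Quote:
    """Collaborative autonomy, reduced to its decision rule: buy through
    whichever channel offers better value, and when the local channel wins,
    report that price back to the State so central contracts can be
    renegotiated against real market data."""
    if local_quote.unit_price < state_quote.unit_price:
        # The university keeps its relationship, and the State learns from it.
        shared_intelligence.append(local_quote)
        return local_quote
    return state_quote

# Illustrative usage with made-up numbers.
intelligence_base: list = []
winner = choose_channel(
    Quote("state_term_contract", 112.00),
    Quote("university_supplier", 104.50),
    intelligence_base,
)
print(winner.source)           # university_supplier
print(len(intelligence_base))  # 1: the State's intelligence base grows
```

The point of the sketch is how little of the design lives in the technology. The value sat in the feedback loop, with each locally won price enriching the State's negotiating position. That loop is the collaborative process that, as we will see, never materialized.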
I published this assessment in early 2005 and described it as an approach that would build a dynamic intelligence base capable of avoiding the very problems centrally mandated compliance systems typically produce.
By November 2010, I was writing a different post. The North Carolina program had failed.
My assessment of what went wrong was direct: the anticipated collaborative process between key stakeholders never materialized. The realization of targeted savings became technology-driven rather than people-driven. State auditor reports from the same period corroborated the finding independently — documenting weaknesses in enforcing term contracts, delayed loading of procurement data into the system, and limited detection of noncompliance. The prediction made in 2005 and the auditor’s findings in 2010 pointed to the same root cause from different vantage points: when an organization leads with technology, people adapt to how the technology works rather than the technology adapting to the way people work in the real world.
Six years. One prediction. One documented outcome.
And the post connected the failure directly to the Canadian Department of National Defence — same root cause, different geography, different decade. At DND, the absence of collaborative intelligence between the main buying group and the bases they served had resulted in the Department purchasing MRO materials at an average premium of 157% above market rate, with next-day delivery performance hovering around 50%.
But the institutional disconnect was only the visible layer. Underneath it, a more granular behavioral dynamic was driving the outcome. Service technicians were incentivized to maximize service-call volume — and that incentive caused them to delay parts orders until the end of the day. End-of-day ordering collided with customs processing windows. Dynamic pricing meant that a contracted rate valid at 9 a.m. was commercially meaningless by 4 p.m. And the smaller suppliers the system depended on lacked the technical sophistication the process design had assumed they possessed.
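The compounding effect of those timing and incentive mismatches is easier to see in a toy model than in prose. The sketch below is purely illustrative: the price-drift rate, customs cutoff, and order volumes are invented assumptions, not DND data, and it makes no attempt to reproduce the documented 157% premium. It shows only the shape of the dynamic.

```python
import random

random.seed(42)

CONTRACT_RATE = 100.00   # hypothetical 9 a.m. contracted unit price
DRIFT_PER_HOUR = 0.06    # assumed dynamic-pricing drift of 6%/hour (invented)
CUSTOMS_CUTOFF = 15      # assumed 3 p.m. customs processing window (invented)

def unit_cost(order_hour: int) -> float:
    """Dynamic pricing: the contracted rate erodes as the day goes on."""
    return CONTRACT_RATE * (1 + DRIFT_PER_HOUR * (order_hour - 9))

def delivered_next_day(order_hour: int) -> bool:
    """Orders placed after the customs window miss next-day delivery."""
    return order_hour < CUSTOMS_CUTOFF

def simulate(technicians: int, defer_to_end_of_day: bool):
    """Each technician places one parts order. Paid on call volume,
    the rational individual choice is to defer ordering until 4 p.m."""
    if defer_to_end_of_day:
        hours = [16] * technicians
    else:
        hours = [random.randint(9, 14) for _ in range(technicians)]
    avg_cost = sum(unit_cost(h) for h in hours) / technicians
    on_time = sum(delivered_next_day(h) for h in hours) / technicians
    return avg_cost, on_time

for label, deferred in [("order as needed", False), ("defer to end of day", True)]:
    avg_cost, otd = simulate(200, deferred)
    print(f"{label:>20}: avg unit cost ${avg_cost:6.2f}, next-day rate {otd:.0%}")
```

Under these invented parameters, the deferred-ordering policy pays roughly 24% more per unit and misses every next-day delivery window, even though no one in the model behaves irrationally. That is the shape of the DND outcome, not its magnitude.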
No single technician was sabotaging the system. Each was responding rationally to the incentive structure they operated within. The system was producing exactly the outcomes its design guaranteed — just not the outcomes anyone had intended. That is organizational entrenchment at its most precise: not resistance, not malice, but a misread operating system running exactly as designed.
The fix in both cases was not better technology. It was mapping the actual operating system of the organization before touching the technology at all.
Case 3: Virginia’s eVA Program (2008)
In March 2008 I documented why Virginia’s eVA program — cited as a top performer in the Pew Center’s Grading the States report — had succeeded where so many comparable initiatives had failed.
The Pew report noted that what made Virginia’s showing impressive was that the Commonwealth had avoided “formulas” and had instead focused on the “harder work of asking why goals and targets weren’t being met.” Based on that understanding, Virginia actively sought to address the underlying problems.
One specific decision illustrated the principle more clearly than any performance metric. Virginia had evaluated the introduction of digital signatures into its procurement process and shelved the initiative — because of the potential negative effects it would likely have had on the SME and HUB supply base. A technology capability, available and presumably cost-justified, was deliberately not deployed because the organizational assessment indicated it would harm the very supply base the system depended on.
That is Phase 0 discipline applied at a government level in 2007. It produced an A- rating from the Pew Center and a program that continued generating documented savings for years afterward.
The eVA story is not just a success case. It is a proof of method. Virginia succeeded because it assessed the real system first — the behaviors, incentives, stakeholder relationships, and unintended consequences — before layering technology onto it. The comparable programs that failed during the same period failed for the same reason: the technology was deployed into a system whose underlying structure had not been understood.
2026: Gartner Names the Pattern
Gabriela Vogel’s six stress fractures of organizational entrenchment — systems, processes, structures, incentives, routines, and the cultures that calcify them — map directly onto what these three cases documented between 2008 and 2010.
Mattel: incentive structures and supplier routines calcified into structural dependency. North Carolina: collaborative process never materialized, leaving technology to carry weight it was never designed to carry alone. Virginia: explicitly diagnosed the underlying system before deploying technology, and succeeded because of it.
The pattern is not new. The name is new. What the archive provides is dated evidence showing the pattern operating long before AI became the lens through which we now see it.
What is genuinely new in 2026 is the stakes. When the calcified system is a procurement process, the cost of entrenchment is a failed implementation, a wasted budget, and a program that gets quietly decommissioned. When the calcified system is an AI deployment embedded in decision pathways across an enterprise, the cost is compounded by speed. AI doesn’t wait for the entrenchment to surface. It amplifies it.
This is why the archive matters as more than a historical record. The 70–80% implementation failure rate that has persisted across seven technology eras in procurement — documented consistently since 2007 — is not a technology statistic. It is an organizational statistic. And the organizations deploying AI in 2026 are, in most cases, the same organizations whose underlying incentive structures, decision rights, and behavioral patterns were never assessed before the last wave of technology was deployed.
Gartner’s warning is correct and important. But the practitioners who most need to act on it are the same ones who received the same warning — in different language, with different examples — in 2008.
The question the archive poses is not whether organizational entrenchment is real. The Mattel, North Carolina, and Virginia cases established that with dated, publicly verifiable evidence nearly two decades ago.
The question is whether the warning will be heard differently now that AI has raised the cost of ignoring it.
What the Archive Represents
The Procurement Insights archive spans 2007 to the present — more than 3,300 published documents, produced without vendor sponsorship or referral fees. It is not a collection of opinions. It is a longitudinal record of assessments made at one point in time and subsequently validated or refuted by documented outcomes.
The North Carolina case is the clearest example: a 2005 assessment of a program’s design identified the collaborative process as the load-bearing element, and a 2010 post documented exactly what happened when that element failed to materialize.
That is a different kind of evidence than a Magic Quadrant, a Wave report, or a commissioned survey. It is the kind of evidence that can only be produced by an observer who is independent of the commercial relationships that determine what gets published and what stays private.
Hansen Strand Commonality™ is the methodology that connects these dots across time and domain. The Hansen Fit Score™ is the framework that applies the pattern diagnostically — before implementation, not after. And Phase 0 is the discipline that Virginia practiced instinctively in 2007 and that most organizations deploying AI in 2026 have still not made standard practice.
The strand was visible in 2008. It is visible now. The question has never been whether the pattern exists.
The question is whether you map it before you deploy — or discover it afterward.
Jon Hansen is the founder of Hansen Models™ and creator of the Hansen Method™, Hansen Fit Score™, and RAM 2025™ multimodel validation framework. He has operated Procurement Insights as an independent, vendor-neutral editorial platform since 2007. All assessments and content are produced without vendor sponsorship or referral fees. Hansen Fit Score™ assessments for Coupa, SAP Ariba, and Gartner are available through Hansen Models™. The Hansen Models™ website launches the week of March 9th, 2026.
-30-