The Shadows on the Wall: Why the ProcureTech Failure Rate Has Survived Every Technology Era
Posted on March 7, 2026
Published on Procurement Insights | Jon Hansen
Plato described prisoners who had spent their entire lives watching shadows on a cave wall. They never saw the objects casting the shadows — only the projections. Over time, they became expert shadow-readers. They built ranking systems for shadows. They rewarded whoever predicted the next shadow most accurately.
They were not wrong to do what they did. They were working with the only evidence available to them.
I have been thinking about that allegory a great deal lately, because after eighteen years of independently documenting this industry in the Procurement Insights archive, and nearly three decades of watching the same failure pattern repeat across every major technology era, I believe it describes something real about how the procurement technology industry evaluates itself.
The Statistic That Refuses to Move
Depending on the source and methodology, somewhere between 70% and 80% of procurement technology implementations fail to deliver the outcomes originally promised.
That number has been cited, debated, qualified, and occasionally disputed. But it has not moved in any meaningful direction.
Think about what that means structurally.
The industry has moved through ERP, eProcurement, spend analytics, supplier networks, cloud-based platforms, and now AI-driven procurement architecture. Each wave brought genuine capability advances. Each wave attracted serious investment, serious talent, and serious analytical attention.
The outcome statistic remained.
If technology capability were the primary driver of implementation success, that pattern would be difficult to explain. Platforms are measurably more capable today than they were in 2005. Implementation methodology has matured. Integration tooling has improved substantially.
And yet the number stays where it is.
What We Have Been Measuring
The procurement technology industry has built remarkably sophisticated frameworks for evaluating one category of things: vendor capability.
Platform functionality. Architecture. AI readiness scores. Roadmap depth. Ecosystem maturity. Integration completeness. Reference customer satisfaction.
These are real. They matter. Evaluating them is legitimate work.
But they are — to borrow Plato’s frame for a moment — the shadows.
The objects casting those shadows sit somewhere most evaluations never look: inside the organization doing the implementing.
Decision rights. Incentive structures. Governance models. Process maturity. Data quality and ownership. Behavioral patterns baked into institutional memory. The organization’s actual capacity to absorb and sustain operational change.
Those factors are harder to see. They do not fit neatly into a scoring rubric. They require a different kind of analysis — longitudinal, behavioral, structural — that vendor evaluation frameworks were simply not designed to produce.
And so the industry has continued measuring the shadows with increasing precision, while the objects casting them remain largely unexamined.
What the Archive Documents
Across the Procurement Insights archive — more than 3,300 published documents spanning eighteen years and every significant technology era — one pattern appears with a consistency that is difficult to dismiss as coincidence:
The failure was almost never inside the software.
It was inside the system receiving it.
Virginia’s eVA program. Large enterprise supply chain transformations. Public sector procurement reform initiatives. The implementation stories that ended badly share a structural signature: technology deployed into an organizational system whose underlying design was never fully mapped before go-live.
When implementations struggled, the industry’s instinct was to look at the visible evidence — adoption rates, change management execution, training coverage, user interface complaints. Those are the shadows. They are observable and measurable.
The deeper layer — whether the organization had clearly defined decision rights, whether incentive structures were aligned with the outcomes the technology was meant to produce, whether governance models could actually absorb the behavioral change the platform required — was rarely the subject of the formal evaluation.
And so the diagnosis was incomplete. And so the pattern repeated.
The Question the Industry Rarely Asks First
For three decades, the central question driving procurement technology evaluation has been some version of:
Which platform is most capable of delivering the outcomes we need?
That is a legitimate question. It is also the second question.
The first question — the one that determines whether the second question is even worth asking — is this:
Is this organization structurally ready to succeed with any platform at all?
Those are not variations on the same inquiry. They require different evidence, different methodologies, and different disciplines to answer. An organization can select the highest-rated platform in the market and still fail if the structural conditions for success were never assessed and addressed before implementation began.
This is not a novel observation. Practitioners who have been close to implementation failures have known it for years. What has been slower to develop is a formal, repeatable diagnostic methodology for answering that first question before the second one is acted on, and an industry culture willing to treat readiness assessment as a prerequisite rather than an afterthought.
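To make the distinction concrete, here is a minimal sketch of what a readiness-first gate might look like if it were written down as a procedure. The factor names, weights, and threshold are my own illustrative assumptions, not the Hansen Method, the Hansen Fit Score, or any published scoring model; the only point being made is the ordering, where the organizational diagnostic must pass before any platform comparison begins.

```python
# Illustrative sketch only: factor names, weights, and the passing
# threshold are assumptions chosen for clarity, not a published model.

# Structural readiness factors, each scored 0.0-1.0 by a diagnostic
# (interviews, document review, longitudinal observation), with weights
# reflecting assumed relative impact on implementation outcomes.
READINESS_FACTORS = {
    "decision_rights_clarity": 0.30,   # who can decide what, and is it written down?
    "incentive_alignment": 0.25,       # do rewards point at the promised outcomes?
    "governance_capacity": 0.20,       # can governance absorb the behavioral change?
    "data_quality_ownership": 0.15,    # is the data clean, and does someone own it?
    "change_absorption_history": 0.10, # how did the last major change actually land?
}

READINESS_THRESHOLD = 0.7  # assumed pass mark, for the sake of the example


def readiness_score(assessments: dict[str, float]) -> float:
    """Weighted average of the structural readiness assessments."""
    return sum(weight * assessments[factor]
               for factor, weight in READINESS_FACTORS.items())


def evaluate(assessments: dict[str, float], vendors: list[str]) -> str:
    # The first question: is the organization structurally ready at all?
    score = readiness_score(assessments)
    if score < READINESS_THRESHOLD:
        return (f"Readiness {score:.2f} is below {READINESS_THRESHOLD}: "
                "remediate structural gaps before evaluating any vendor.")
    # Only then the second question: which platform is most capable?
    return f"Readiness {score:.2f}: proceed to vendor evaluation of {vendors}."


if __name__ == "__main__":
    example = {
        "decision_rights_clarity": 0.4,
        "incentive_alignment": 0.5,
        "governance_capacity": 0.7,
        "data_quality_ownership": 0.8,
        "change_absorption_history": 0.6,
    }
    print(evaluate(example, ["Platform A", "Platform B"]))  # scores 0.57: gate closed
```

Nothing about those five factors is exotic. What is unusual in current practice is the position of the gate: before, not after, the vendor question.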
Why the Cave Persists
Plato’s allegory has one detail that tends to get overlooked in casual reference.
When the prisoner who escapes the cave returns to describe what he has seen — the real objects, the fire, the sunlight — the others do not celebrate him. They are not persuaded. Their entire understanding of reality was constructed around the shadows, and someone suggesting the shadows are not the thing is not a revelation. It is a disruption.
Industries behave the same way.
The analyst frameworks, the evaluation rubrics, the RFP structures, the vendor selection methodologies that dominate procurement technology decision-making today were all built around the shadows. They are deeply embedded in institutional practice. They generate credible-looking outputs. They have entire professional ecosystems built around interpreting them.
Suggesting that the primary determinant of implementation success sits upstream of all that infrastructure — in organizational readiness diagnostics that most current evaluation frameworks do not address — is not an argument that lands easily, regardless of the evidence supporting it.
That is not an accusation directed at any individual firm or analyst. It is a structural observation about how industries organize around measurable proxies and build durable practice around them, even when the proxies are incomplete.
What Changes When AI Enters the System
The arrival of AI-driven procurement platforms adds a layer of urgency to this discussion that did not exist five years ago.
AI accelerates processes. It does not resolve the structural conditions those processes operate within.
An organization with unclear decision rights and misaligned incentives that implements an AI procurement platform does not get a cleaner version of the same problems. It gets a faster, more automated version of them — with less human friction to slow the damage and more institutional momentum behind the trajectory.
The case for answering the readiness question before the vendor selection question has always been strong. In an AI-accelerated implementation environment, it becomes difficult to responsibly argue otherwise.
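To see why, it helps to run the arithmetic. Suppose some fraction of transactions is misrouted because decision rights are unclear, and a manual checkpoint used to intercept most of those before they did damage. Every number below is invented purely for illustration; only the direction of the result matters.

```python
# Invented numbers, for illustration only: the point is the direction
# of the arithmetic, not the specific values.

defect_rate = 0.05           # share of transactions misrouted by unclear decision rights

# Before: manual process, where human review catches most defects in time.
manual_volume = 100          # transactions per day
manual_catch_rate = 0.80     # share of defects intercepted by human friction
manual_damage = manual_volume * defect_rate * (1 - manual_catch_rate)

# After: AI-automated process, far higher throughput, far less review.
ai_volume = 10_000
ai_catch_rate = 0.10
ai_damage = ai_volume * defect_rate * (1 - ai_catch_rate)

print(f"Damaging transactions per day, manual: {manual_damage:.0f}")  # 1
print(f"Damaging transactions per day, AI:     {ai_damage:.0f}")      # 450
```

The structural defect rate never changes in that sketch. Only the throughput and the human friction do, and the daily damage scales accordingly.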
The Stubborn Statistic
The 70–80% failure rate has survived thirty years of technology advancement, methodology refinement, and market maturation because it was never primarily a technology problem.
It was a readiness problem. A structural problem. A problem that lives in the space between what organizations project they can absorb and what they can actually sustain.
The shadows are measurable and the measurement frameworks are sophisticated. That is not the issue.
The issue is that measuring shadows with increasing precision has never been the same thing as understanding what is casting them.
Postscript: A Live Example, Published This Week
This piece was essentially complete when a Gartner LinkedIn post that I could not ignore landed in my feed.
The post promotes an upcoming webinar titled “Heads of Enterprise Architecture Must Rewire EA for AI.” The accompanying diagram maps a progression from Traditional EA to AI-enabled EA across eight discrete steps — assessing AI potential, organizing for AI-augmented architecture, identifying high-value AI tools, auditing EA tools for AI readiness, and so on through to proving AI-enabled value.
It is a thoughtful framework. The work it describes is real and necessary.
But scan that eight-step pathway carefully and you will notice what is absent.
Not one step asks whether the organization is structurally ready to absorb the transformation being mapped. Decision rights, incentive alignment, governance capacity, behavioral absorptive capacity — the factors that the archive documents as the primary determinants of implementation outcomes — do not appear anywhere in the progression.
There is exactly one step that uses the word "readiness": "Audit EA tools for AI readiness."
The readiness question gets pointed at the tools. Not at the organization receiving them.
That is not a criticism of Gartner’s analysts, who are doing what their framework is designed to do. It is an illustration — published this week, in real time — of the structural dynamic this piece describes.
The shadows are being mapped with increasing precision and sophistication.
The objects casting them remain off the diagram.
Jon Hansen is the founder of Procurement Insights and the creator of the Hansen Method™ and Hansen Fit Score™ frameworks. The Procurement Insights archive spans 2007 to present, with more than 3,300 published documents covering every major procurement technology era.
-30-