Agentic AI Didn’t Kill the Magic Quadrant — It Made It Optional
Posted on February 9, 2026
Jon Hansen | Procurement Insights | February 2026
This is not an argument against analyst models. It’s an observation about what happens to them when the technology they evaluate starts evaluating the organizations that buy it.
THE SHORT VERSION FOR BUSY EXECUTIVES
For more than two decades, procurement and supply chain technology decisions have been shaped by equation-based models: Magic Quadrants, Forrester Waves, solution maps, vendor grids, and comparative scorecards. They were designed to answer a specific question: which technology is “best” based on feature completeness, market presence, and execution capability?
That question made sense in a pre-AI world — one where technology behaved deterministically, implementations followed linear plans, and governance could be assumed rather than proven.
Agentic AI quietly breaks those assumptions. And in doing so, it exposes why these models are no longer sufficient — not because they are wrong, but because they are incomplete. They remain useful for understanding market capability. They are no longer decisive for predicting outcomes. The documented 80% implementation failure rate in procurement technology initiatives didn’t emerge because organizations chose the wrong quadrant position. It emerged because no one measured whether the organization could absorb what it was buying.
Agentic AI makes that omission impossible to ignore.
THE MODELS WEREN’T BUILT FOR THIS MOMENT
Quadrants and Waves are optimized for capability comparison. They assume that organizations are structurally ready to absorb what they buy, that decision rights are already defined, that governance either already exists or will sort itself out during implementation, and that human oversight is implicit rather than engineered. In other words, they evaluate technology in isolation from the organizational systems required to govern it.
That worked — imperfectly — when technology executed predefined workflows. We documented the consequences of that imperfection across 18 years of Procurement Insights archives: implementations that met every functional requirement on the vendor scorecard and still failed, not because the technology was deficient but because the organization couldn’t govern what the technology demanded of it. The pattern repeated across ERP rollouts, source-to-pay deployments, and every generation of procurement technology since 2007. The failure was never in the quadrant position. The failure was in what the quadrant didn’t measure.
Agentic AI doesn’t repeat that pattern. It accelerates it.
WHAT AGENTIC AI FORCES US TO ASK
When you introduce agentic systems — systems that plan, act, adapt, and collaborate — the dominant risk is no longer whether the tool works. The dominant risk becomes organizational. Who is authorized to act when probability replaces certainty? How are decisions escalated when outcomes deviate from plans? What happens when two optimized agents disagree across organizational boundaries? Can the organization prove, months later, who decided, why, and with what authority?
These are not technology questions. They are governance questions. And no capability-comparison model — regardless of how rigorously constructed — was designed to answer them.
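To make the audit question concrete, here is a minimal, hypothetical sketch of the kind of decision record an agentic deployment would need to retain in order to answer, months later, who decided, why, and with what authority. It is not drawn from any vendor's product or from the Hansen Method; every class, field, and function name below is an illustrative assumption.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Illustrative only: these fields are assumptions about what an auditable
# agent decision record would need to capture, not any platform's schema.
@dataclass
class AgentDecisionRecord:
    decision_id: str
    agent_name: str                # which agent acted
    action_taken: str              # what it did (e.g., "expedited a replacement order")
    rationale: str                 # why, in terms the organization can audit later
    authority_source: str          # the policy or delegation that permitted the action
    confidence: float              # probability attached to the decision, not certainty
    escalated_to_human: bool       # was a person pulled in before the action executed?
    approved_by: Optional[str]     # who signed off, if anyone
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def can_answer_audit(record: AgentDecisionRecord) -> bool:
    """Months later: can we prove who decided, why, and with what authority?"""
    if not record.rationale or not record.authority_source:
        return False
    if record.escalated_to_human and record.approved_by is None:
        return False
    return True
```

The specific schema matters less than the test it implies: an organization that cannot populate fields like authority_source or approved_by for its agents has a governance gap that no quadrant position will reveal.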
This is where the history matters. In 1998, through government-funded SR&ED research for Canada’s Department of National Defence, we built the Relational Acquisition Model — a system that achieved 97.3% delivery accuracy precisely because it measured organizational decision architecture before deploying capability. The lesson from that work, validated through a $12 million company exit in 2001 and two decades of subsequent pattern documentation, is that readiness isn’t a phase you can skip. It’s the phase that determines whether every subsequent phase holds.
Agentic AI is now teaching that lesson to every organization simultaneously — whether they’re prepared for it or not.
THE QUIET INVERSION
Here’s the inversion most people haven’t noticed yet.
AI no longer depends on analyst models to validate its value. AI exposes whether the organization itself is viable.
Agentic AI doesn’t politely wait for governance to catch up. It surfaces governance — or the absence of it — immediately. This is why we’re seeing brilliant pilots fail at scale, “AI-ready” architectures collapse under real-world pressure, and organizations blame the model when the failure was structural. The technology didn’t break. The readiness wasn’t there.
We documented this exact dynamic in our recent multimodel assessment of Supply Chain Digital’s Top 100 Leaders’ Organizations. Of 100 leaders profiled — all recognized for strategic influence, operational scale, and technological innovation — only 6 led organizations whose public narratives signaled readiness for the governance demands that agentic AI imposes. The remaining 94 sit in bands where organizational readiness either hasn’t entered the conversation or structural barriers reduce the likelihood of engagement with governance assessment frameworks.
The Top 100 list measures capability. The assessment measured readiness. They produced fundamentally different rankings — because they measure fundamentally different things.
WHY THIS DOESN’T KILL QUADRANTS — IT DEMOTES THEM
Magic Quadrants, Forrester Waves, and solution maps still have value. But their role is changing. They are becoming second-order tools: useful for understanding market capability, helpful for narrowing functional options, insufficient for predicting outcomes. What they don’t tell you — and never have — is whether your organization can survive what the technology demands of it.
That’s not a design flaw. It’s a scope limitation. These models were built to compare vendors against each other, not to assess whether the buying organization is structurally prepared for what it’s purchasing. In a world where technology executed instructions, that limitation was tolerable. In a world where technology makes decisions, exercises judgment, and operates across organizational boundaries, that limitation becomes the gap where failure lives.
Agentic AI makes that gap measurable.
FROM “WHICH TOOL?” TO “ARE WE READY?”
The most important shift underway isn’t technological. It’s about what we treat as evidence.
Procurement and supply chain leaders are moving from asking “Which solution should we buy?” to asking “What would fail first if we deployed this tomorrow?” The first question can be answered by a quadrant. The second cannot. It requires readiness measurement — governance operability, decision authority clarity, escalation realism, and human oversight at the point of decision.
This is the terrain where Phase 0 lives. Not as a replacement for analyst models, but as the prerequisite that determines whether analyst models produce useful guidance or expensive misdirection. An organization that scores high on a capability grid but low on governance readiness isn’t well-positioned — it’s accelerating toward a failure it can’t yet see.
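As a purely illustrative sketch of how readiness could be scored separately from capability, consider the following. The four dimension names come from the paragraph above; the 0-5 scale, the threshold, and the function names are assumptions for illustration, not the Phase 0 methodology or any published scoring model.

```python
# Hypothetical readiness screen. The dimensions mirror the four named above;
# the scale, threshold, and wording are illustrative assumptions only.
READINESS_DIMENSIONS = [
    "governance_operability",       # do governance processes actually run day to day?
    "decision_authority_clarity",   # is it clear who may act, on what, within which limits?
    "escalation_realism",           # do escalation paths hold up under time pressure?
    "human_oversight_at_decision",  # is a person engineered into the point of decision?
]

def readiness_score(ratings: dict) -> float:
    """Average of 0-5 ratings across the four readiness dimensions."""
    return sum(ratings[d] for d in READINESS_DIMENSIONS) / len(READINESS_DIMENSIONS)

def deployment_signal(capability_rank: int, ratings: dict) -> str:
    """A top capability ranking paired with low readiness is the dangerous combination."""
    if readiness_score(ratings) < 2.5:
        if capability_rank <= 10:
            return "high risk: strong tool, organization not ready to govern it"
        return "high risk: organization not ready to govern agentic deployment"
    return "readiness supports proceeding with the capability being evaluated"

# Example: a leader on the capability grid, deployed into a weak governance structure.
example_ratings = {
    "governance_operability": 2,
    "decision_authority_clarity": 1,
    "escalation_realism": 2,
    "human_oversight_at_decision": 3,
}
print(deployment_signal(capability_rank=3, ratings=example_ratings))
# -> high risk: strong tool, organization not ready to govern it
```

The number itself matters less than the separation: capability rank and readiness score are independent measurements, and only the second predicts whether the first survives deployment.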
WHAT AGENTIC AI IS REALLY SAYING
Agentic AI isn’t arguing with analysts. It’s bypassing them.
By making outcomes contingent on governance rather than features, it renders capability-only models incomplete by definition. Not obsolete. Not wrong. Just no longer decisive.
The future of procurement technology selection won’t be led by who scores highest on a grid. It will be led by who can prove — under pressure — that their organization is ready for what the grid recommends.
Stop ranking capability. Start measuring readiness.
That’s not a slogan. It’s what agentic AI is already telling us — whether we’re listening or not.
-30-
Jon Hansen is the founder and CEO of Hansen Models (1001279896 Ontario Inc.) and creator of the Hansen Method. He has operated Procurement Insights since 2007, maintaining an independent archive of over 3,500 posts documenting procurement technology patterns across every major platform cycle. His foundational research began with government-funded SR&ED work in 1998, building systems that preceded current AI developments by decades.
Hansen Models | RAM 2025™ | 100% Independent