The Kearney $46 Million Gap — and the Structural Question Their Excellence Model Cannot Answer
Posted on May 16, 2026
Procurement Insights · May 16, 2026
Kearney just published the 2026 edition of its Assessment of Excellence in Procurement study — a benchmarking exercise the firm has run since 1992, now drawing on five years of longitudinal data across 729 companies. The headline finding is sharp and executive-ready: a ten percent annual performance gap between top-tier and mid-tier procurement organizations now compounds into a forty-six-million-dollar difference in realized value from the same external spend base. AI, in Kearney’s framing, is accelerating that divide rather than narrowing it.
Read carefully, the report is more important than its press release lets on. Not because it is wrong — it is largely correct on the diagnosis — but because it is the latest in a now-undeniable series of major firm publications that have all converged on the same structural observation from different analytical paths. McKinsey on organizational restructuring. Bain on systems, data, and governance readiness. KPMG on why returns vary so widely. BCG on managing AI as a coworker rather than a tool. Accenture on rebuilding the platform. IBM on operating models for autonomous systems. And now Kearney on the widening compounding gap between leaders and laggards.
Every major consulting firm publishing in 2026 has independently identified the same problem. None of them has yet named what they are collectively pointing at.
What the Kearney Study Gets Right
There are three observations in the Kearney report that deserve to be taken seriously.
The first is the compounding-performance observation itself. Mature procurement organizations are pulling further ahead. Weaker organizations are falling further behind. AI is widening rather than closing the spread. That finding is structurally significant because it contradicts the dominant marketing narrative around enterprise AI — the narrative that says AI democratizes capability and lifts the entire procurement function. Kearney’s longitudinal data says the opposite. AI is not equalizing performance. It is amplifying the operating conditions already present inside the enterprise.
The second observation is the framing of procurement excellence as enterprise coordination capability rather than transactional sourcing efficiency. Kearney’s four-pillar model — Team, Demand, Category, Supplier — moves beyond the traditional cost-reduction narrative toward something more structurally significant. The pillars are about how procurement coordinates with the rest of the enterprise, not just about how procurement extracts value from suppliers. That framing is more sophisticated than most procurement benchmarking work, and it is closer to a structural diagnosis than to a capability inventory.
The third observation is the readiness divergence the report documents implicitly. Organizations with mature capabilities are converting AI investments into compounding gains. Organizations without those capabilities are not. That is, in substance, the same readiness divide now being documented across MIT, Stanford HAI, Gartner, McKinsey, BCG, and the rest of the major consulting and academic literature.
The Kearney report is evidence that the readiness-as-bottleneck thesis has become the dominant analytical frame in 2026 procurement and enterprise transformation research. That is a significant moment, and the report deserves credit for documenting it.
What the Kearney Study Cannot Answer
The limitation is structural: the Kearney study measures procurement performance under load, but it does not diagnose the substrate conditions that determine whether that load can be sustained.
The four pillars sit entirely above the waterline. They evaluate how procurement behaves, what tools and analytics it uses, how well it engages the business and suppliers. They do not interrogate the unresolved inheritance from prior technology waves — the spreadsheet reality, the ERP customizations, the SaaS fragmentation — or the shadow workflows and undocumented operating logic that determine whether leader practices are sustainable under load over time.
In other words, Kearney is measuring the visible apex. The substrate underneath remains structurally invisible to the methodology.
This matters because it produces a specific kind of false confidence. An organization scoring well on the four pillars can still fail catastrophically when AI investment is layered on top of substrate that was never validated. The four-pillar score will register the procurement function as mature. The substrate will collect the debt anyway. And when the AI initiative stalls — as the overwhelming majority of them still do, as the MIT and McKinsey data on AI pilots and transformations continues to demonstrate — the post-mortem will conclude that the technology was wrong, or the change management was insufficient, or the vendor underperformed. The structural question — can our current operating environment actually support the load we are about to place on it? — will never be asked.
The graph above shows what the compounding actually looks like over time. Each prior technology wave — local databases, ERP customizations, SaaS sprawl — left operational inheritance that the next wave was expected to absorb but did not. Wave 4 lands on top of all three, and Kearney’s $46M per-organization gap is the present-day measurement of that accumulated load. From 2026 forward, the trajectory bifurcates. An organization that continues to ignore the substrate carries the load forward at the documented compounding rate — Kearney’s own ten percent annual figure projects a $67M to $75M gap per organization by 2030. An organization that recognizes the substrate and acts on it converges toward gap closure as the prior-wave inheritance is resolved. The structural decision happens at the fork point. The diagnostic question is what makes the difference.
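The projection arithmetic above can be sketched in a few lines. This assumes simple annual compounding of the $46M base at Kearney's reported ten percent rate; the exact compounding convention (four periods to reach 2030, or five compounding through it) is an assumption on my part, which is why the cited range brackets both. The `projected_gap` helper is illustrative, not anything defined in the Kearney report.

```python
# Sketch of the compounding-gap projection, assuming simple annual
# compounding of the 2026 base gap at the reported 10% rate.

def projected_gap(base_gap_m: float, annual_rate: float, years: int) -> float:
    """Compound the per-organization value gap (in $M) forward by `years`."""
    return base_gap_m * (1 + annual_rate) ** years

# Four compounding periods from 2026 lands at 2030 under one convention...
low = projected_gap(46.0, 0.10, 4)   # ~67.3
# ...five periods if you compound through the end of 2030.
high = projected_gap(46.0, 0.10, 5)  # ~74.1

print(f"${low:.1f}M to ${high:.1f}M")  # prints: $67.3M to $74.1M
```

Either convention lands inside the $67M to $75M range cited above, which is the point: at a ten percent compounding rate, the gap grows by roughly half again within four to five years.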
The Dual-Pyramid Diagnostic
Yesterday’s post introduced the side-by-side pyramid graphic that emerged from the Nico Bac and Jason Busch orchestration debate. The graphic shows the same five-layer technology stack twice. On the left, the stack as it is typically described — a broad foundation of inherited systems supporting a narrowing apex of current AI investment. On the right, the same stack as it actually behaves under load — current investment occupying an enormous surface area at the top, and the substrate beneath narrowed to a single point.
The Kearney study describes the left pyramid. The four pillars sit at the apex. The stable foundation is assumed. The road map to closing the gap is to move up the pillars — better team capabilities, better demand shaping, better category strategies, better supplier programs.
The right pyramid is the structural reality the Kearney methodology cannot see. Heavy Wave 4 investment — AI agents, orchestration platforms, governance frameworks, charters — now rests on a narrowing, unresolved substrate of legacy systems, shadow workflows, and undocumented operating logic from the previous three technology waves. That inversion is what produces the compounding performance gap Kearney is documenting. Top-tier organizations are not just better at the four pillars. They are operating on substrate conditions that can actually carry the load the pillars are placing on them.
This is the diagnostic distinction. Kearney’s excellence model identifies what separates leaders from everyone else. The pyramid diagnostic identifies why the gap compounds — and why most organizations cannot close it by improving the pillars alone.
Why the Convergence Across the Major Firms Matters
There is now sufficient evidence to make a structural claim with confidence. Every major consulting firm publishing significant 2026 research on AI, procurement, and enterprise transformation has independently arrived at the same diagnostic conclusion from a different analytical path. The vocabulary differs. The framing differs. The methodology differs. The underlying observation is identical: the variable that determines whether AI investment produces compounding returns or compounding failure is not the technology, not the strategy, not the vendor selection. It is the substrate condition the technology is being deployed into.
That convergence is what makes the Kearney study more important than its individual findings. The report is not just a benchmarking exercise. It is one node in a now-visible pattern of independent firms arriving at the substrate diagnosis without yet sharing a diagnostic framework for what they are observing.
The frameworks I have been documenting in this archive since 1998 — Implementation Physics™, the Compounding Technology Shadow Wave™, Hansen Strand Commonality™, and the Phase 0™ Diagnostic — provide that framework. The eighteen-year contemporaneous archive provides the longitudinal evidence. The dual-pyramid graphic provides the visual language. The question Kearney’s methodology cannot answer is the question Phase 0™ exists to answer.
What This Means for Procurement Leaders Reading the Kearney Report
For boards, CFOs, and CPOs, the right response to the Kearney study is not to reject it. The benchmarking work is valuable and the longitudinal data is real. The right response is to reposition it.
Treat the Kearney AEP as what it is — a Wave 4 capability benchmark that tells you how your visible procurement practices compare to the leader template. Use the four pillars to identify capability gaps that need to be addressed at the operational level. Use the forty-six-million-dollar gap as the executive language that converts substrate conversations into board conversations.
But pair the benchmark with a Phase 0™ Diagnostic. The benchmark tells you where your procurement function sits on the capability maturity curve. The diagnostic tells you whether your current operating environment can support the load you are about to place on it when you act on the benchmark’s recommendations. Without the diagnostic, the benchmark identifies improvements your substrate cannot sustain. With the diagnostic, the benchmark becomes actionable in a structurally defensible way.
Kearney shows the performance gap. The pyramids show why the gap compounds. Phase 0™ shows how to close it.
The Bottom Line
The most important finding in the Kearney 2026 AEP study is not the one in the press release. It is the implicit observation that AI does not normalize organizational performance; it magnifies preexisting organizational conditions. That observation is exactly the opposite of the AI marketing narrative the industry has been operating under for the last three years. And it aligns precisely with what this archive has been documenting for eighteen years: AI does not break broken processes. It perfects them.
The Kearney report is evidence for the substrate thesis, not evidence against it. The convergence across the major consulting firms is now sufficient that the conversation no longer needs to be defended. It needs to be named. The work of naming the underlying pattern that every major firm is independently circling — that is the work the next chapter of this archive is going to do.
The reports describe the gap. The pyramids name the cause. Phase 0™ is the diagnostic that runs before the next major commitment is made.
The Compounding Technology Shadow Wave™ trilogy executive summaries are available at procureinsights.com. The Phase 0™ Diagnostic — for organizations preparing to commit further AI or transformation investment — is at hansenprocurement.com/where-does-your-organization-sit-right-now/.
Source: Kearney 2026 Assessment of Excellence in Procurement study.
This post was developed through the ARA™ RAM 2025™ multimodel validation framework. Five independent models reviewed the Kearney study and the dual-pyramid diagnostic prior to publication, including the model operating at the publication-layer synthesis function. All five converged on the same structural diagnosis from independent analytical paths. That internal convergence mirrors the external convergence across the major consulting firms documented in the body of this post — the same substrate diagnosis emerging reliably whether the analytical work is done by Kearney, McKinsey, Bain, KPMG, BCG, Accenture, IBM, or by five independent models reviewing those firms in parallel. The convergence is the evidence.
Hansen Models™ · Implementation Physics™ · Compounding Technology Shadow Wave™ · Phase 0™ · Hansen Fit Score™ · Hansen Strand Commonality™
-30-