Teaser: Most readiness assessments stop at alignment. That is exactly where the failure begins.
By Jon W. Hansen | Procurement Insights | March 2026
Five LinkedIn posts appeared in my feed this morning. Read in sequence, they describe exactly how the industry is walking into the same wall it has hit in every technology era for the past thirty years — only this time at a speed and scale that makes the impact significantly worse.
The first was Andreas Horn, Head of AIOps at IBM, clarifying that 95% of what LinkedIn calls “AI Agents” is not actually agentic AI at all — it is LLM workflows or RAG implementations masquerading as agents. The market cannot agree on what it is deploying, let alone whether the organization is ready to receive it.
The second was Femke Plantinga of Weaviate, explaining why most RAG systems fail on complex queries: naive RAG follows a fixed path, and if the initial retrieval misses the context, the entire system collapses. The technical implementation fails not because the technology is wrong but because the underlying reasoning structure was not designed for the conditions it encountered.
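The failure mode Plantinga describes can be made concrete in a few lines. This is a hypothetical sketch, not any specific library's API: `retrieve`, `generate`, and `grade` are illustrative stand-ins. The point is structural: naive RAG commits to a single retrieval pass, so a miss poisons everything downstream, while an agentic loop checks whether the retrieved context is actually usable before answering.

```python
def naive_rag(query, retrieve, generate):
    docs = retrieve(query)        # one fixed retrieval pass, no recovery
    return generate(query, docs)  # the answer is built on whatever came back

def agentic_rag(query, retrieve, generate, grade, max_rounds=3):
    """Re-plan when retrieval misses, instead of collapsing."""
    for _ in range(max_rounds):
        docs = retrieve(query)
        if grade(query, docs):           # did we actually get relevant context?
            return generate(query, docs)
        query = f"rephrased: {query}"    # placeholder for real query rewriting
    return "insufficient context"        # fail loudly rather than hallucinate
```

The difference is not the model. It is whether the reasoning structure was designed to detect and recover from the conditions it encounters.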
The third was Asmaa Gad of Supply Chain AI Pro, mapping the AI toolkit for supply chain and procurement professionals — visibility tools, automation platforms, risk systems, intelligence layers. Pick the ones that solve your biggest headaches today. Not a word about whether the processes those tools are being deployed into are structurally capable of producing the outcomes the tools are designed to enable.
The fourth was Prashant Rathi, Principal Architect at McKinsey, listing the five pillars of AI infrastructure that determine production success: compute, models, data, applications, distribution. His conclusion: “AI is not a model problem. It is a foundation problem.” His five pillars cover every layer of the technology stack. Process structural integrity — whether the organizational processes the AI will operate within are behaviorally aligned with what the AI is expected to produce — does not appear in any of them.
The fifth was Darlene Newman, AI Strategy Advisor, responding to the McKinsey AI platform breach. McKinsey had publicly announced 30,000 employees using AI and 25,000 agents running across the firm. Their internal AI platform was then breached in two hours via a decade-old SQL injection — no credentials, no insider access — giving full read/write access to millions of internal consultant conversations, sitting undetected for two years.
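For readers unfamiliar with why a "decade-old SQL injection" is so damning: the vulnerability class is trivial to prevent and has been documented for decades. The sketch below is illustrative only, assuming nothing about McKinsey's actual code; it shows the generic flaw (string concatenation lets input rewrite the query) against the standard fix (parameterized queries bind input as data, never as SQL).

```python
import sqlite3

def find_user_unsafe(conn, name):
    # The decade-old flaw class: user input is spliced into the SQL text,
    # so a crafted value can rewrite the query itself.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(conn, name):
    # Parameterized query: the driver binds the value as data,
    # and it is never parsed as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()
```

A payload like `' OR '1'='1` turns the unsafe query into one that matches every row; the safe version treats the same payload as a literal string and matches nothing.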
Her assessment: “You can’t govern what you deployed before you truly understood it.”
McKinsey’s governance framework was not the problem. The process structural integrity of the deployment environment was — and no governance review was designed to find it.
Five posts. Five different authors. Five different vantage points. All of them circling the same gap without naming it.
The gap is not the technology. The gap is not the governance. The gap is not the model selection, the data quality, the infrastructure stack, or the tool choice.
The gap is process structural integrity. And it is the one variable that every framework in all five posts assumes away.
Something has happened to the word “readiness” in enterprise technology and procurement.
It has been domesticated.
Over the past decade, “readiness” has drifted from its original meaning — is this organization structurally capable of absorbing what is about to be deployed into it? — to a softer, safer, and ultimately insufficient substitute: are we governing our current system well enough to proceed?
Those are not the same question. And the gap between them is where 75–85% of implementations disappear.
What the Market Means by Readiness
When most organizations conduct a readiness assessment today, they are asking a set of questions that stays entirely inside the existing operating model:
Are stakeholders aligned? Are decision rights clarified? Is change management in place? Are governance controls strengthened?
All of that is genuinely important work. None of it tests the one thing that determines whether the initiative holds or reverts.
It assumes the underlying process is already viable.
That assumption is almost never tested. And it is almost always wrong.
What Readiness Actually Means
Readiness is not about preparing people to follow a process.
It is about determining whether the process itself will hold under real-world conditions.
That distinction — between preparing people and verifying the process — is the gap the industry has been papering over for thirty years. And it is the gap that produces the same failure pattern across ERP, eSourcing, P2P, SRM, CLM, and now AI: the governance framework is in place, the stakeholders are aligned, the change management plan is approved, and then real-world conditions arrive and the system behaves in a way nobody anticipated.
Because nobody tested whether the process would actually hold.
In the late 1990s, I was engaged to address a procurement performance failure at the Department of National Defence. Delivery performance had collapsed to 51%. The standard readiness assessment would have reviewed stakeholder alignment, governance controls, and change management protocols — and found them largely intact.
I asked one question instead.
What time of day do orders come in?
The answer exposed a behavioral misalignment that no governance framework had ever thought to test. Service technicians were batching orders at 4pm — driven by their own incentive structure, not by the process design. The process said one thing. The actual behavior said something entirely different. And the gap between them was costing the organization its delivery performance, its pricing integrity, and its supplier relationships.
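The diagnostic behind that one question is simple enough to express in code. This is a minimal sketch, assuming only that order timestamps are available; the function names and the 50% threshold are illustrative choices, not part of any formal methodology. If one hour of the day carries most of the volume, behavior, not process design, is driving the flow.

```python
from collections import Counter

def order_hour_profile(timestamps):
    """Share of total orders arriving in each hour of the day."""
    hours = Counter(ts.hour for ts in timestamps)
    total = sum(hours.values())
    return {h: round(n / total, 2) for h, n in sorted(hours.items())}

def batching_suspected(profile, threshold=0.5):
    # If a single hour carries more than half the volume, the designed
    # process and the actual behavior have diverged.
    return max(profile.values()) >= threshold
```

A governance review would never run this query. A process structural integrity check starts with it.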
The governance framework was not the problem. The process structural integrity was.
The Distinction the Industry Needs
Governance architecture defines how things are supposed to work. It documents decision rights, escalation paths, accountability structures, and approval processes. It describes the intended system.
Process structural integrity reveals how things actually work under pressure — whether the human agents, AI agents, and external stakeholders whose behavior determines outcomes are actually operating in alignment with the designed process, or whether competing incentives, informal authority structures, and habitual workarounds have already compromised process integrity before the technology arrives.
Most of the market is asking: “Are we ready to implement this system?”
The question that determines outcomes is: “Will the system we are about to implement survive contact with reality?”
That shift moves the conversation from compliance to capability, from alignment to durability, from design to execution under stress.
“Don’t confuse governance with process integrity.” — Jon Hansen, Hansen Models™
Why Well-Governed Transformations Still Fail
This explains something that has puzzled the industry for three decades: why do transformations with strong governance, thorough change management, and senior executive sponsorship still fail at the same rate as those without?
Because governance was layered onto a process that could not sustain the behavior required of it.
The governance describes the intended system. The process structural integrity determines whether the actual system — the one shaped by behavioral reality, not documentation — can produce the outcomes the governance is designed to protect.
When they diverge, the governance reports green and the outcomes deteriorate. Exactly as they did at the DND. Exactly as they do in the 75–85% of implementations that the archive has been documenting across seven consecutive technology eras.
The readiness assessment that stops at alignment never sees the divergence. Phase 0™ is designed to find it — before the implementation begins, not after the reversion confirms it.
The AI Extension
This has always mattered. In the AI era it is critical.
An autonomous system deployed into a process whose structural integrity has never been verified will not correct the misalignment it finds. It will accelerate it.
The behavioral conditions that determine whether a procurement process functions as designed — the timing of orders, the incentive structures of the agents who execute it, the informal authority patterns that override formal governance — will be inherited by the AI system deployed to automate or augment that process. The AI will learn from what exists. Reinforce it. Scale it.
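The inheritance mechanism can be sketched directly. This is a deliberately naive, hypothetical example: a toy "approval model" trained on historical logs. If the logs contain a rubber-stamp pattern (say, everything batched at 4pm got waved through), the model learns the workaround as policy and reproduces it at machine speed.

```python
from collections import defaultdict

def train_approval_model(history):
    """Learn per-hour approval rates from (hour, approved) log pairs.
    The 'model' inherits whatever behavior, good or bad, the logs contain."""
    stats = defaultdict(lambda: [0, 0])  # hour -> [approved, total]
    for hour, approved in history:
        stats[hour][0] += int(approved)
        stats[hour][1] += 1
    return {h: a / t for h, (a, t) in stats.items()}

def auto_approve(model, hour, cutoff=0.9):
    # If history says this hour was almost always approved, the agent
    # rubber-stamps it too, scaling the workaround instead of fixing it.
    return model.get(hour, 0.0) >= cutoff
```

Nothing in this pipeline asks whether the inherited behavior was ever correct. That is the prior question no training loop answers.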
Most AI readiness frameworks ask whether the data is clean, whether the governance is in place, whether the stakeholders are prepared. None of them ask the prior question:
Is the process the AI is being deployed into structurally capable of producing the outcomes the AI is expected to deliver?
That is the process structural integrity question. And it is the one that sits between AI ambition and AI outcome — in every organization, in every sector, in every technology era the archive has documented.
Redefining Readiness
This is not a rejection of the readiness work the market is doing. Stakeholder alignment, decision rights clarity, change management, governance controls — all of it matters. All of it is necessary.
None of it is sufficient.
Readiness — properly defined — is the verified answer to a single question: will the system we are about to deploy into survive contact with reality?
Not the designed system. The actual system. The one that operates through human agents who have their own incentive structures, through AI agents that will inherit whatever behavioral conditions they find, and through external stakeholders whose own process integrity determines whether outcomes materialize at the organizational boundary.
Phase 0™ diagnoses whether that actual system is capable of sustaining what is about to be asked of it — empirically rather than conceptually, before the pressure arrives, not after it exposes the gap.
Most frameworks describe readiness. Phase 0™ measures whether it actually exists.
Great technology will never overcome a lack of process integrity, and a lack of process integrity inevitably produces poor governance.
That is not a warning about technology. It is a statement about sequence.
Get the sequence right — process integrity first, governance second, technology third — and the readiness conversation stops being about whether the organization can survive the implementation.
It becomes about how much value the implementation can produce.
For the full framework on what the Hansen Fit Score™ diagnoses across practitioners, providers, and the C-suite — and why process structural integrity is the layer every capability assessment misses — the previous post in this series is here:
📖 The Hansen Fit Score™ — What It Does, Who It’s For, and Why It Matters Now
For the documented evidence that the failure rate has not moved in 20 years — including the 2020 inflection point when the technology excuse finally expired — the archive post is here:
📖 20 Years of Quadrants, Waves, and Maps — Same 75–80% Failure Rate
Your Readiness Check
Identify: Name the last major technology initiative your organization implemented or is currently implementing.
Check: Did the readiness assessment conducted before that initiative test whether the underlying process would hold under real-world conditions — or did it test whether stakeholders were aligned, decision rights were clarified, and governance controls were in place?
Decide: If the readiness assessment stopped at alignment, the process structural integrity was never verified. The governance described the intended system. Whether the actual system can sustain the behavior required of it remains untested.
Act: Ask your team the question that governance never asks: “What time do our orders come in?” Whatever your equivalent of that question is — the one that reveals the gap between how the process is designed to work and how people actually behave within it — ask it. The answer will tell you whether your readiness assessment found the right layer.
Three Questions Your AI Readiness Assessment Forgot to Ask
Most AI readiness assessments ask the right questions about the wrong layer. Before your next AI initiative — or before you scale the one already running — ask these three instead:
1. Is the process the AI is entering behaviorally aligned — or just documentarily compliant? Not “do we have a process?” but “do people actually operate within it, or around it?” The AI will inherit whichever one is true. It will not correct the misalignment. It will scale it.
2. Does a named individual hold pre-authorized decision authority to act on what the AI surfaces — within the window the signal requires? Visibility without decision authority is not readiness. It is awareness. If the AI flags a risk or an opportunity and the governance cycle outlasts the window to act, the AI produced a cost, not a value.
3. Has the process been stress-tested against real-world conditions — not against its own documentation? What time do your orders come in? Whatever your equivalent of that question is — the one that reveals the gap between designed behavior and actual behavior — ask it before the AI is deployed. The answer will tell you whether your process can sustain what the AI is about to ask of it.
If you cannot answer all three immediately, without reviewing documentation — Phase 0™ is the next conversation. Book a 30-minute readiness conversation: calendly.com/jon-toq/30min
Jon W. Hansen is the founder of Hansen Models™ and Procurement Insights, an independent procurement technology research and advisory platform whose living archive — now spanning 18 years, 3,300+ published documents, and still recording — is the evidentiary foundation no analyst firm has the independence to replicate. The Hansen Fit Score™ (HFS™), Phase 0™ Organizational Readiness Diagnostic, and RAM 2025™ Multimodel Validation Framework are proprietary frameworks developed and maintained with zero vendor sponsorships and zero referral revenue.
© 2026 Jon W. Hansen | Procurement Insights | hansenprocurement.com | hpt@hansenprocurement.com
Ready to run the diagnostic? Book a 30-minute readiness conversation: calendly.com/jon-toq/30min
Hansen Fit Score™ Annual Subscription — Tier 1: INSIGHT: payhip.com/b/qm5K6
-30-
AI Doesn’t Break Broken Processes. It Perfects Them. | Posted March 25, 2026