Max Henry’s prediction validates what I’ve been documenting for 26 years—and building since 1998.
The 2008 Question That Predicted Today’s Crisis
Seventeen years ago, I posed a question to the procurement community:
“Does your enterprise have a ‘Digital Nervous System’?”
I was referencing Bill Gates’ 1999 vision from Business @ The Speed of Thought—real-time collaborative intelligence infrastructure that would enable organizations to adapt with the speed and agility of a living organism.
The answer then was no.
In 2025, with enterprises rushing to deploy AI agents, the answer is still no.
And that’s why Max Henry’s recent prediction will prove accurate: 90% of AI agent providers in logistics will disappear by 2026.
Not because the technology is bad. Because organizations are deploying it without the infrastructure to govern it.
What I Saw in 2008 (That’s Playing Out Today)
In that 2008 post, I explained why organizations couldn’t achieve the “Digital Nervous System” Gates envisioned. The problem wasn’t technology limitations—it was architectural mismatch.
I described two competing modeling approaches:
Equation-Based Modeling (What Most Vendors Built):
- Attempts to confine diverse stakeholders into single, definable “static” processes
- Works for structured, quantifiable problems (like finance)
- Fails for dynamic, collaborative challenges (like procurement and supply chain)
- Produces “near real-time” at best—not actual real-time collaboration
Agent-Based Modeling (What I’d Been Building Since 1998):
- Understands unique operating attributes of diverse stakeholders first
- Links disparate attributes through advanced algorithms
- Produces reliable, real-world “collaborative” outcomes
- Achieves actual real-time synchronization, not approximation
I also wrote this:
“Web 2.0 represents the natural evolution of the agent-based model. Development efforts are well underway in defining a viable Web 4.0 model.”
That was 2008. I was predicting Web 4.0—intelligent, autonomous, collaborative systems.
That’s what agentic AI represents today.
I also identified why organizations were stuck:
“Because many vendors such as Oracle, SAP and Ariba have made a substantial investment in their equation-based models… organizations that have made an equally substantial investment in their current ERP platforms are ‘stuck’ in terms of working within what is quickly becoming an antiquated framework.”
Seventeen years later, the same pattern repeats.
Only now it’s not Oracle and SAP—it’s the vendors wrapping public LLMs and calling them “AI agents.”
[Read my full 2008 post: https://procureinsights.com/2008/04/05/do-you-practice-business-the-speed-of-thought-does-your-enterprise-have-a-digital-nervous-system-if-not-why/]
Max Henry’s 2025 Prediction: The Pattern Repeats
Max Henry recently published an analysis that mirrors what I’ve been documenting for 26 years. His central thesis:
More than 90% of AI agent providers in logistics will disappear by 2026.
His reasoning cuts through the hype with clarity:
Most vendors are selling rebranded workflow automation as “AI agents.” They’re building trigger-based sequences, not genuine agentic intelligence. They don’t own the underlying technology—just wrappers around public LLMs like OpenAI or Anthropic. When foundation model providers raise prices or change capabilities, these vendors’ entire business models collapse.
One commenter on his article captured it perfectly: “99% of solutions are closed AI wrappers on workflows dressed up as agents.”
Another noted that some vendors are literally using n8n (a workflow automation tool) paired with public LLMs, charging enterprise prices for off-the-shelf components they don’t even own.
Max Henry’s conclusion: “AI in logistics is valuable only when built into the core workflow with strategy, scale, and resilience. Everything else is a cash grab agent circus.”
He’s right. But this isn’t new.
The Failure Pattern: 26 Years of Consistent Data
I’ve been tracking implementation failures across procurement and supply chain technology since 1998. The numbers are remarkably consistent:
- Procurement technology implementations: 80% failure rate (documented 2007-2025)
- AI agent logistics providers: 90% predicted failure by 2026 (Max Henry, November 2025)
- Digital transformation initiatives: 70-71% failure rate (Gartner, 2024-2025)
The root cause is always identical: Technology deployed before organizational readiness is assessed.
Organizations rush to adopt—driven by FOMO, vendor hype, and promises of frictionless automation. Consulting firms and technology vendors oversell capabilities. Implementations begin before stakeholders even agree on what “autonomous” or “intelligent” means in their context.
Then reality arrives. “Exceptions” aren’t rare edge cases—they’re hundreds of repeatable scenarios that reveal readiness gaps. The AI throws 40% of tasks back to humans. Finance doesn’t recognize the value Procurement claims. The C-suite questions the ROI.
The technology gets blamed. Practitioners get blamed. Consultants produce reports about “change management failures.”
But the real problem was never acknowledged: No one assessed whether the organization was ready before deployment.
The Exception Handling Myth (And What It Really Reveals)
Max Henry’s article includes one of the most insightful observations I’ve seen about current AI agent deployments:
“AI vendors sell ‘exception handling’ as if exceptions are rare edge cases. In logistics, exceptions are hundreds of repeatable scenarios. You don’t need AI to handle repeatable scenarios. You need proper workflow design.”
This connects directly to what I’ve been calling the “invisible scoreboard” problem.
Exceptions aren’t AI failures. They’re readiness gaps made visible.
When an AI agent throws a task back to a human 40% of the time, that’s not evidence of edge cases or technological limitations. It’s evidence that:
- Processes weren’t documented before automation
- Stakeholders weren’t aligned on decision boundaries
- Governance policies weren’t established
- Compliance requirements weren’t mapped
- The organization wasn’t ready to deploy autonomous agents
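To make the point concrete, the "40% thrown back to humans" figure is something an organization can measure before blaming the technology. Here is a minimal sketch of computing an exception rate from a task log; the record fields are hypothetical, standing in for whatever your agent platform actually logs:

```python
from dataclasses import dataclass

@dataclass
class TaskRecord:
    task_id: str
    escalated_to_human: bool  # True if the agent threw the task back

def exception_rate(records: list[TaskRecord]) -> float:
    """Fraction of tasks the agent escalated rather than completed."""
    if not records:
        return 0.0
    escalated = sum(1 for r in records if r.escalated_to_human)
    return escalated / len(records)

# A log where 2 of 5 tasks bounced back to humans: a 40% exception rate
log = [
    TaskRecord("t1", False),
    TaskRecord("t2", True),
    TaskRecord("t3", False),
    TaskRecord("t4", True),
    TaskRecord("t5", False),
]
print(f"{exception_rate(log):.0%}")  # prints "40%"
```

If that number sits anywhere near 40% in a pilot, the readiness gaps above are already visible in the data, long before the C-suite starts questioning the ROI.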
The vendors selling these solutions understand this reality. A ProcureTech VP once admitted to me:
“We all take on customers who we know have little chance for success… But if we don’t bring them on, a competitor will.”
It’s not incompetence. It’s the business model.
The 10% Who Will Survive: What They’ll Have in Common
Max Henry identified what the survivors will share. They’ll build:
- Kernel-level applications integrated into core workflows (not fragile wrappers around email threads)
- Governance infrastructure with audit trails, compliance frameworks, and traceability
- Strategic deployment based on understanding actual problems being solved
I’d add a fourth characteristic that Max Henry implied but didn’t name explicitly:
4. Readiness assessment BEFORE deployment
The 10% who survive won’t just build better technology. They’ll deploy it differently. They’ll assess organizational preparedness first:
- Are stakeholders aligned on what “autonomous” means?
- Are processes documented well enough to automate?
- Are governance policies established?
- Are compliance requirements mapped?
- Is the infrastructure ready to handle exceptions intelligently?
- Can the organization actually USE what the vendor is selling?
This is what I formalized as the Hansen Fit Score (HFS): a readiness assessment methodology that measures organizational preparedness across seven dimensions before technology deployment.
HFS prevents the 80-90% failure rate by identifying readiness gaps BEFORE deployment, not after.
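As a rough illustration of the "score readiness, then gate deployment" idea, here is a minimal sketch. The post does not enumerate the seven HFS dimensions, so the dimension names and the pass threshold below are placeholders, not the actual methodology:

```python
# Placeholder dimension names -- illustrative only, not the actual
# HFS dimensions, which are not enumerated in this post.
DIMENSIONS = [
    "process_documentation",
    "stakeholder_alignment",
    "governance_policies",
    "compliance_mapping",
    "exception_infrastructure",
    "data_quality",
    "adoption_capacity",
]

DEPLOYMENT_THRESHOLD = 70.0  # assumed pass mark, not from the methodology

def readiness_score(ratings: dict[str, float]) -> float:
    """Average each dimension's 0-100 rating into a single score."""
    missing = [d for d in DIMENSIONS if d not in ratings]
    if missing:
        raise ValueError(f"unrated dimensions: {missing}")
    return sum(ratings[d] for d in DIMENSIONS) / len(DIMENSIONS)

def ready_to_deploy(ratings: dict[str, float]) -> bool:
    """Phase 0 gate: no deployment until the score clears the threshold."""
    return readiness_score(ratings) >= DEPLOYMENT_THRESHOLD
```

The design point is not the arithmetic; it is that the gate runs before a contract is signed, so a weak governance or compliance rating blocks the project instead of surfacing later as "exceptions."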
The Digital Nervous System Gap: What’s Still Missing
Max Henry’s article exposes a critical infrastructure gap: most AI agent vendors don’t own the technology they’re selling.
This is the same problem I identified in 2008 when I explained why organizations couldn’t achieve Gates’ “Digital Nervous System” vision. Then, organizations were locked into Oracle, SAP, and Ariba’s equation-based architectures. Today, they’re being locked into OpenAI and Anthropic wrapper-based agents.
The lock-in pattern repeats because the architectural approach hasn’t changed.
This is why I’ve been writing about the need for what I call an “AI Operating System”—infrastructure that sits ABOVE the model layer, providing governance, orchestration, and human validation regardless of which foundation models you’re using.
Think of it like Windows for personal computing. Windows didn’t replace the hardware—it provided an operating system that made the hardware accessible and useful. When one hardware component failed or became obsolete, you replaced it. The operating system remained stable.
An AI Operating System provides the same function for generative AI:
- Orchestrates multiple models (no vendor lock-in to single provider’s pricing)
- Provides governance infrastructure (audit trails, bias detection, compliance)
- Integrates human validation (prevents autonomous drift and bias amplification)
- Enables continuous learning (system improves from validated outcomes)
- Remains stable when foundation models change or fail
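The orchestration idea above can be sketched in a few lines. This is a hypothetical illustration, not the actual architecture: the class names and stub providers are mine, and the "validator" stands in for whatever human or policy checkpoint an organization puts in place:

```python
from typing import Callable, Protocol

class ModelProvider(Protocol):
    """Any foundation model, hosted or local, behind one interface."""
    name: str
    def complete(self, prompt: str) -> str: ...

class AIOperatingSystem:
    """Hypothetical governance layer sitting above interchangeable models."""

    def __init__(self, providers: list, validator: Callable[[str, str], bool]):
        self.providers = providers   # ordered by preference; swappable
        self.validator = validator   # human/policy validation checkpoint
        self.audit_log: list[tuple[str, str, str]] = []  # traceability

    def run(self, prompt: str) -> str:
        for provider in self.providers:  # fallback avoids single-vendor lock-in
            try:
                output = provider.complete(prompt)
            except Exception as err:
                self.audit_log.append((provider.name, "failed", str(err)))
                continue                 # a model failed; the layer stays stable
            verdict = "approved" if self.validator(prompt, output) else "rejected"
            self.audit_log.append((provider.name, verdict, output))
            if verdict == "approved":
                return output
        raise RuntimeError("no provider produced a validated result")

# Stub providers standing in for real model integrations.
class FlakyModel:
    name = "model-a"
    def complete(self, prompt: str) -> str:
        raise RuntimeError("provider outage")

class BackupModel:
    name = "model-b"
    def complete(self, prompt: str) -> str:
        return "draft summary of supplier risk"

aos = AIOperatingSystem([FlakyModel(), BackupModel()], validator=lambda p, o: True)
print(aos.run("Summarize supplier risk"))  # model-a fails; model-b answers
```

Note what survives the outage: the audit trail, the validation checkpoint, and the calling code. Only the model changed, which is the Windows-and-hardware point in miniature.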
I first wrote about this need in 2008 as the “Digital Nervous System,” and I’ve continued developing the concept through multiple subsequent posts.
The need has been clear since 2008. The market is only now recognizing it.
Why the Crash Is Predictable (And Has Happened Before)
Max Henry referenced a commenter’s list of previous “instant revolutions”:
- Dot-com bubble (1995-2000)
- Y2K scare
- Big Data (2010s)
- Blockchain hype
I’d add to that list:
- OCR rebranded as “AI” (early 2010s)
- Robotic Process Automation overselling (mid-2010s)
- Every procurement technology wave I’ve documented since 2007
The pattern repeats because the incentives don’t change:
Vendors profit from hype cycles regardless of implementation success. Consulting firms make money on deployments, not outcomes. When projects fail, they blame practitioners for “not being ready”—even though readiness assessment was never part of the sales process.
As I documented in November 2024: “Is Gartner Blaming Practitioners for Following… Gartner’s Advice?”
The 90% crash Max Henry predicts will follow the same predictable script:
- Vendors oversell autonomous capabilities
- Organizations deploy without readiness assessment
- “Exceptions” reveal readiness gaps at scale
- Promised ROI doesn’t materialize
- Vendors blame “implementation challenges” and “change management”
- Market consolidates catastrophically
- 90% disappear
This is predictable. This is preventable.
What Organizations Should Do NOW
Max Henry offered clear advice: “Do your diligence.”
I’d make it more specific. Before deploying any AI agent solution, organizations should ask three categories of questions:
Readiness Questions (Ask YOURSELF):
- Have we documented the processes we’re trying to automate?
- Are stakeholders aligned on what “autonomous” means in our context?
- Have we established governance policies for autonomous decisions?
- Do we understand compliance requirements for AI-generated actions?
- Can we measure “exception rate” and define what’s acceptable?
- Are we organizationally ready to USE what we’re buying?
Vendor Questions (Ask THEM):
- Do you own the underlying models, or wrap public LLMs?
- What happens to our implementation if OpenAI raises prices 10x?
- Can you provide like-for-like comparisons to workflow automation?
- What’s your actual exception rate in production deployments (not demos)?
- How do you handle compliance and audit requirements?
- Can you demonstrate kernel-level integration, not just email wrappers?
Infrastructure Questions (Ask BOTH):
- Do you provide governance infrastructure (audit trails, bias detection)?
- Can we switch foundation models without rebuilding everything?
- Do you integrate human validation checkpoints, or is it fully autonomous?
- How do you prevent bias creep across autonomous workflows?
- Is this an AI Operating System, or just another vendor-locked tool?
The Path Forward: Readiness First, Technology Second
The AI agent crash is coming. Max Henry’s 90% failure prediction aligns perfectly with 26 years of data showing the same pattern across every technology wave.
But the 10% who survive won’t just have better technology. They’ll have fundamentally different deployment methodology:
- Readiness assessment frameworks (HFS or equivalent) deployed BEFORE technology
- Governance infrastructure (AI Operating System architecture) providing stability
- Human validation integration (collaborative intelligence, not blind automation)
- Model independence (not locked into single vendor’s pricing or capabilities)
- Compliance by design (audit trails and traceability from day one)
Most importantly, they’ll understand what I wrote in 2008 and what Max Henry is saying in 2025:
Real-time collaborative intelligence requires infrastructure, not just technology.
The Question No One Is Asking
Max Henry’s article reminded me of the question I posed in November 2024:
“If someone sells you a boat with a hole in it and the boat sinks, whose fault is it?”
Ninety percent of AI agent vendors are selling boats with holes in them. The holes are called “readiness gaps” and “infrastructure deficits.” The vendors know the holes exist. They sell the boats anyway.
When 90% of these boats sink in 2026, who will receive the blame?
If history is any guide: the practitioners who bought the boats will be blamed, not the vendors who sold them knowing they’d sink.
My 27-Year Journey: From RAM to HFS to AI Operating Systems
This isn’t theoretical for me. I’ve been building toward this solution for 27 years:
1998: RAM (Relational Acquisition Model)
- Developed an agent-based procurement system with SR&ED funding from the Canadian government
- Achieved 97.3% delivery accuracy (up from 51%)
- Reduced administrative overhead from 23 FTEs to 3
- Achieved 23% cost savings
- Proved agent-based modeling works in practice
2008: Digital Nervous System Question
- Identified the infrastructure gap preventing real-time collaborative intelligence
- Predicted Web 4.0 evolution (intelligent, autonomous systems)
- Explained why equation-based architectures fail for dynamic challenges
- Diagnosed the problem 17 years before “agentic AI” became a buzzword
2007-2025: Pattern Documentation
- Tracked 180+ implementations across 26 years
- Documented a consistent 80% failure rate
- Identified root cause: technology before readiness
- Built institutional memory the industry refuses to acknowledge
2024-2025: AI Operating System Architecture
- Developed multi-model orchestration frameworks
- Built governance infrastructure for autonomous agents
- Integrated human validation as a core component
- Created the Digital Nervous System I predicted in 2008
2025: Hansen Fit Score (HFS)
- Formalized readiness assessment across seven dimensions
- Quantifies organizational preparedness before deployment
- Prevents the 80-90% failure rate through Phase 0 assessment
- Operationalized the “readiness-first” principle
Most recently, I’ve been developing the AI Operating System itself, the capability I first began building toward with RAM in 1998.
Conclusion: The 2008 Vision Meets 2025 Reality
Seventeen years ago, I asked if organizations had built the “Digital Nervous System” that Bill Gates envisioned.
The answer was no—because the underlying architecture (equation-based modeling) couldn’t deliver what was needed (real-time collaborative intelligence).
Today, Max Henry asks essentially the same question about AI agents: Do organizations have the infrastructure to govern autonomous intelligence?
The answer is still no—because vendors are building wrappers around foundation models rather than kernel-level operating systems.
The architectural gap I identified in 2008 remains unfilled in 2025.
That’s why 90% will fail by 2026.
But it’s also why the 10% who understand this will survive—and why readiness assessment combined with AI Operating System infrastructure represents the path forward.
The crash is predictable. The solution exists. The question is whether organizations will learn from 26 years of failure patterns, or repeat them one more time.
In 2008, I predicted we’d need an AI Operating System for the Web 4.0 era.
In 2025, agentic AI has arrived—and we still don’t have it.
The 10% who survive will be those who build it before deployment, not after failure.
In 2008, I Predicted Agentic AI and the Need for an Operating System to Break ProcureTech’s Failure Cycle. Here’s Why 90% Will Still Fail by 2026
Posted on November 8, 2025