The 86/14 Problem: Why ProcureTech AI Is Designed to Replace You, Not Empower You
By Jon W. Hansen | Procurement Insights
We analyzed the AI marketing messaging of 12 ProcureTech providers featured in the analyst Quadrants, Waves, and Solution Maps.
The result: 86% engine. 14% steering wheel.
Every vendor leads with automation. No vendor leads with accountability.
But this isn’t just a marketing problem. It’s a design problem.
The Pattern
Across the industry, ProcureTech AI is being positioned the same way:
- Automation percentages
- Speed metrics
- “AI handles the heavy lifting”
- “Meet your new AI teammate”
What’s missing?
- Decision rights — Who decides what AI can and cannot act on?
- Escalation paths — Who governs the handoff when AI flags an exception?
- Verification protocols — Who checks before AI outputs scale?
- Accountability structures — Who’s responsible when AI gets it wrong?
The graphic above makes the gap visible. The engine columns are green. The steering wheel columns are red.
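To make the gap concrete, here is a minimal, purely hypothetical sketch in Python of what it could look like to encode those four missing elements (decision rights, escalation paths, verification protocols, and accountability) as explicit, checkable policy. The names, roles, and actions below are illustrative assumptions, not drawn from any vendor's product or API.

# Hypothetical sketch only: names, roles, and actions are illustrative,
# not taken from any vendor's actual product.
from dataclasses import dataclass, field

@dataclass
class GovernancePolicy:
    # Decision rights: what the AI may act on alone vs. must defer to a human
    autonomous_actions: set = field(default_factory=lambda: {"draft_po", "rank_suppliers"})
    reserved_actions: set = field(default_factory=lambda: {"award_contract", "approve_spend"})
    # Escalation path: who governs the handoff when the AI flags an exception
    escalation_owner: str = "category_manager"
    # Verification protocol: outputs are checked by a human before they scale
    require_human_review: bool = True
    # Accountability structure: the named role responsible when the AI gets it wrong
    accountable_role: str = "CPO"

def can_act_autonomously(policy: GovernancePolicy, action: str) -> bool:
    # The steering wheel: the system declines to act where decision rights are reserved
    return action in policy.autonomous_actions and action not in policy.reserved_actions

policy = GovernancePolicy()
print(can_act_autonomously(policy, "rank_suppliers"))  # True: the AI can propose
print(can_act_autonomously(policy, "award_contract"))  # False: a human decides

The point is not this particular structure. The point is that the messaging we reviewed does not answer these four questions at all.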
The Deeper Problem
Here’s what just became clear to me: Most ProcureTech AI is being designed to eliminate uncertainty rather than to support human judgment.
What’s being sold as “guardrails” increasingly looks like prison bars — rigid constraints that make systems safe to automate, not frameworks that empower humans to collaborate with AI on decisions.
The goal isn’t human-in-the-loop.
The goal is human-out-of-the-loop.
Vendors aren’t building systems that surface tradeoffs for human weighting. They’re building systems that remove the need for human weighting altogether.
That’s not governance. That’s replacement theater dressed up as efficiency.
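As a purely illustrative contrast (again a hypothetical sketch in Python, not any vendor's code), compare a system that quietly optimizes the supplier choice away with one that surfaces the tradeoffs and leaves the weighting to the buyer:

# Hypothetical data and functions, for illustration only.
suppliers = [
    {"name": "A", "price": 100, "risk": "high",   "lead_time_days": 10},
    {"name": "B", "price": 120, "risk": "low",    "lead_time_days": 25},
    {"name": "C", "price": 110, "risk": "medium", "lead_time_days": 15},
]

def human_out_of_the_loop(options):
    # Optimized for certainty: picks the cheapest option and moves on; no judgment invited
    return min(options, key=lambda s: s["price"])

def human_in_the_loop(options):
    # Surfaces the tradeoffs and hands the weighting back to the human
    print("Tradeoffs requiring your judgment (price vs. risk vs. lead time):")
    for s in options:
        print(f"  {s['name']}: price={s['price']}, risk={s['risk']}, lead time={s['lead_time_days']}d")
    return None  # the decision, and the accountability, stay with the buyer

print(human_out_of_the_loop(suppliers))  # silently returns supplier A
human_in_the_loop(suppliers)             # presents the choice instead of making it

The first version is an engine. The second is a steering wheel.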
Certainty vs. Collaboration
The Deloitte scandal proved what happens when you optimize for certainty: $2 million in refunds, fabricated citations, invented quotes. The AI produced exactly what it was designed to produce — fluent, confident, wrong.
No one verified. No one cross-checked. No one asked: “Should we trust this before we deliver it?”
Because the system was designed not to need that question.
The Market Opportunity
The 86/14 split isn’t just a critique. It’s an opportunity.
The first vendor to credibly answer “Who’s accountable when your AI fails?” will differentiate immediately.
Not by having better AI — everyone claims that.
Not by automating more — that’s table stakes.
But by demonstrating that their AI is governed, verified, and designed for human collaboration.
That’s what CPOs are actually buying.
That’s what “enterprise-ready” actually means.
And right now, according to our analysis, no vendor leads with it.
The Bottom Line
Stop building for certainty. Start building for collaboration.
The automation percentage doesn’t matter if no one trusts it.
The guardrails don’t matter if they’re designed to exclude humans, not empower them.
The AI doesn’t matter if it can’t answer the governance question — the RIGHT governance question:
“Who’s accountable when this fails?”
Editor’s Note: The graphic above shows the structural gap in how ProcureTech AI is currently marketed. Subscribers to Procurement Insights have access to the full Hansen Governance Solution Map™, including detailed analysis of each solution provider, scoring methodology, Phase 0™ readiness implications, and the questions boards and CPOs should be asking before approving AI-driven initiatives.
ASK US ABOUT OUR SUBSCRIPTION SERVICE
-30-