The Reality Check
If you’ve been tracking procurement technology consolidation, you know my prediction from January 2025: 75% of logos on solution maps would be gone by year-end.
I was aggressive in that timeline, and the pushback was warranted.
Here’s where we actually stand (October 1, 2025):
In January 2025, I tracked 487 distinct procurement technology vendors across major solution maps (Spend Matters, Gartner, independent analyst coverage, and procurement tech directories). As of today, we’ve documented 94 exits—19.3% through acquisition/absorption, operational shutdown, or hibernation (no active sales/marketing for 6+ months).
The revised forecast: 75% remains the correct number, just the wrong date. At our current velocity, we’re tracking toward 30-35% attrition by December 31, 2025, with 75% consolidation expected to reach completion by the end of 2026. The original timeline was aggressive, but not directionally wrong—rapid exits in the first half of 2025 validated the thesis while proving the pace estimate needed adjustment. The rate of structural change remains historic; it’s simply occurring over 24 months rather than 12.
The underlying thesis hasn’t changed: the procurement technology market is massively oversupplied relative to the addressable market of organizations actually ready to deploy these solutions. The correction is happening—just over a longer timeline than I initially projected.
**Note:** Exit = acquired/absorbed; ceased operations; or “hibernating” (no active sales/marketing for ≥6 months plus objective signals such as staff departures, job cuts, or site/app inactivity).
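For readers who want to replicate the tracking, here is a minimal sketch of the exit classification logic implied by the note above. The field names and data structure are illustrative assumptions rather than the actual tracking schema; only the three exit categories, the six-month hibernation threshold, and the objective-signal requirement come from the definition itself.

```python
from dataclasses import dataclass
from enum import Enum


class VendorStatus(Enum):
    ACTIVE = "active"
    ACQUIRED = "acquired/absorbed"
    CEASED = "ceased operations"
    HIBERNATING = "hibernating"


@dataclass
class VendorSignals:
    """Observable signals for one vendor (illustrative fields, not the real schema)."""
    acquired: bool                      # acquisition/absorption announced
    ceased_operations: bool             # confirmed shutdown
    months_without_sales_marketing: int
    staff_departures: bool              # objective hibernation signals
    job_cuts: bool
    site_or_app_inactive: bool


def classify(v: VendorSignals) -> VendorStatus:
    """Apply the exit definition: acquired, ceased ops, or hibernating
    (>= 6 months with no sales/marketing plus at least one objective signal)."""
    if v.acquired:
        return VendorStatus.ACQUIRED
    if v.ceased_operations:
        return VendorStatus.CEASED
    objective_signal = v.staff_departures or v.job_cuts or v.site_or_app_inactive
    if v.months_without_sales_marketing >= 6 and objective_signal:
        return VendorStatus.HIBERNATING
    return VendorStatus.ACTIVE


# Attrition math from the January 2025 baseline: 94 exits of 487 tracked vendors.
print(f"{94 / 487:.1%}")  # 19.3%
```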
McKinsey and Accenture are scrambling to catch up to where practitioners already are—realizing that “AI-powered procurement” isn’t a strategy, it’s table stakes. The firms surviving aren’t those with the most features. They’re the ones who understand technology amplifies outcomes; it doesn’t create them.
Which brings me to DPW Amsterdam (October 7-9, 2025). With 120+ exhibitors, you’ll hear identical buzzwords at 90% of booths: agentic AI, autonomous sourcing, AI-powered optimization. Most can’t answer the one question that matters:
“How do you assess whether my organization is ready for your solution, or whether it will just automate our existing misalignment faster?”
Here’s who can answer that question—and why they’re worth your limited conference time.
The Hansen Fit Score Framework: Six Dimensions That Predict Success
Before I tell you who to meet, here’s how I assess providers. The Hansen Fit Score evaluates six dimensions that determine implementation success:
The Six Dimensions
1. Metaprise Alignment
Platform/process/people/policy fit to your enterprise operating model and data/semantic architecture. Does the solution integrate with how your organization actually works, or does it require rebuilding your operational foundation?
2. Agent-Based Adaptability
Adaptive decision-making, feedback loops, scenario learning. Does the provider lead with problem definition (methodology-driven) or technology deployment (equation-based)? This is the single biggest predictor of implementation success.
3. Strand Commonality
Interoperability, taxonomy/ontology coherence, integration density across systems/stakeholders/suppliers. Can the solution work with your existing technology estate, or does it require a rip-and-replace?
4. Practitioner Context Fit
Industry, size, maturity, and regulatory context. Is the solution designed for organizations like yours, or are you trying to force-fit a solution built for different constraints?
5. Implementation Probability
Pilot-to-scale readiness, change capacity, time-to-value risk. Does the provider assess your readiness, or do they just sell to anyone who’ll buy?
6. ROI & Outcome Realization
KPI lift durability, black-swan resilience, and benefit retention over time. Can the provider demonstrate sustained outcomes (not just deployment success), and do those outcomes persist when market conditions change?
How Scoring Works
Each dimension is scored 0-10:
- 0-3: Absent/immature capability
- 4-6: Functional but brittle (works in controlled conditions, fails under stress)
- 7-8: Robust and consistent across contexts
- 9-10: Exemplary, proven in complex environments
Overall HFS scores:
- 7.5-10.0: Strong fit, high implementation success probability
- 6.0-7.4: Context-dependent fit (works for specific organizational maturity levels)
- Below 6.0: High implementation risk regardless of technology capability
Important: The weighted formula and related algorithms that combine these six dimensions are proprietary—because publishing them would allow providers to game their positioning rather than genuinely build capability. What matters: these six dimensions predict implementation success more reliably than feature lists or vendor market share.
Quarterly reassessments: HFS scores reflect Q3 2025 data (July-September). Scores update quarterly to capture M&A activity, product evolution, and emerging client outcome data.
Uncertainty bands: Where evidence is marketing-sourced or case studies are limited, scores include ±0.2-0.5 uncertainty ranges. Transparent methodology matters more than false precision.
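To make the mechanics concrete, here is a minimal sketch of how a dimension-scored assessment like this can be aggregated. The weights below are equal-weight placeholders only, since the actual HFS weighting is proprietary; the six dimension names, the 0-10 scale, and the 7.5/6.0 fit thresholds are the ones defined above.

```python
from dataclasses import dataclass

# Equal-weight placeholders only -- the real HFS weighting is proprietary.
ILLUSTRATIVE_WEIGHTS = {
    "metaprise_alignment": 1.0,
    "agent_based_adaptability": 1.0,
    "strand_commonality": 1.0,
    "practitioner_context_fit": 1.0,
    "implementation_probability": 1.0,
    "roi_outcome_realization": 1.0,
}


@dataclass
class HFSAssessment:
    """Six dimension scores, each on the 0-10 scale described above."""
    metaprise_alignment: float
    agent_based_adaptability: float
    strand_commonality: float
    practitioner_context_fit: float
    implementation_probability: float
    roi_outcome_realization: float
    uncertainty: float = 0.0  # +/- band when evidence is marketing-sourced

    def overall(self) -> float:
        """Weighted average of the six dimensions (placeholder weights)."""
        total_weight = sum(ILLUSTRATIVE_WEIGHTS.values())
        weighted = sum(
            getattr(self, name) * w for name, w in ILLUSTRATIVE_WEIGHTS.items()
        )
        return round(weighted / total_weight, 2)

    def fit_band(self) -> str:
        """Map the overall score to the fit bands used in this article."""
        score = self.overall()
        if score >= 7.5:
            return "Strong fit"
        if score >= 6.0:
            return "Context-dependent fit"
        return "High implementation risk"


# Example: the Zycus (Merlin AI) dimension scores listed later in this article.
zycus = HFSAssessment(8.2, 6.8, 7.8, 7.5, 7.0, 8.1, uncertainty=0.2)
print(zycus.overall(), zycus.fit_band())  # 7.57 "Strong fit" with placeholder weights
```

Because the placeholder weights are not the proprietary ones, the example output (7.57 for the Zycus scores) will not match the published 7.63 exactly; the point is the structure, not the number.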
Providers I’m Advising (And Why)
ConvergentIS
Q3 2025 HFS: Assessment in progress (partnership ongoing)
Why they’re different: SAP partner specializing in back-end AI operating systems—the coordination infrastructure enabling front-end tools to work together. They understand integration at the ontological level (do your systems share compatible frameworks?), not just technical integration (do APIs connect?).
Key strength: Exceptional Metaprise Alignment and Strand Commonality scores. They assess whether organizational functions operate from compatible ontological frameworks before attempting systems integration.
Who should meet them: Organizations at 7.5+ HFS implementing multiple procurement technologies that need to coordinate (not just share data). If you’re integrating CLM, S2P, and spend analytics, ConvergentIS ensures they speak the same language.
What to ask: “How do you assess whether our Finance, Legal, and Procurement functions operate from compatible ontological frameworks before attempting systems integration?”
Good answer: Discusses ontological readiness assessment, identifies framework misalignments, and creates strand commonality before deploying integration technology.
Red flag answer: Talks about API connections, data mapping, and middleware layers—purely technical integration without addressing conceptual alignment.
Best for: Enterprise organizations (7.5+ HFS) with multiple procurement systems needing coordination infrastructure, not just point-to-point connections.
Frequently Requested Assessments: What The Scores Reveal
Ivalua
- Metaprise Alignment: 7.8
- Agent-Based Adaptability: 5.5
- Strand Commonality: 7.2
- Practitioner Context Fit: 6.8
- Implementation Probability: 6.2
- ROI & Outcome Realization: 7.0
- Overall HFS: 6.67 (±0.3 uncertainty band)
What this means: Comprehensive S2P platform with strong technical capability. The 5.5 Agent-Based Adaptability score indicates they tend to lead with platform deployment rather than problem definition—not a criticism, just a match consideration.
Key insight: Ivalua excels when the problem is well-defined and a robust execution infrastructure is required. They struggle when clients are still in the problem discovery phase.
Case evidence:
- Time-to-value: 9-14 months for organizations with 7.0+ HFS
- Time-to-value: 18-24+ months for organizations below 6.5 HFS
- Adoption success: 78% for pre-defined requirements, 43% for exploratory deployments
Who should meet them: Organizations at 7.0+ HFS who have already identified their procurement problems and need a platform to execute known solutions. If you’re still figuring out what to solve, Ivalua isn’t your first stop.
What to ask: “What readiness assessment do you conduct before implementation?”
Good answer: Describes capability maturity evaluation, process documentation requirements, organizational alignment verification.
Red flag answer: “We have a comprehensive change management plan” or “Our platform is intuitive enough that readiness isn’t a concern.”
Best for: Large enterprises (7.0+ HFS) with documented procurement processes, clear requirements, and proven ability to manage complex system implementations.
Zycus (Merlin AI)
- Metaprise Alignment: 8.2
- Agent-Based Adaptability: 6.8
- Strand Commonality: 7.8
- Practitioner Context Fit: 7.5
- Implementation Probability: 7.0
- ROI & Outcome Realization: 8.1
- Overall HFS: 7.63 (±0.2 uncertainty band)
What this means: Merlin represents genuinely sophisticated agentic AI for procurement—one of the strongest technology platforms at DPW. The 6.8 Agent-Based Adaptability score indicates solid methodology alignment (above the 6.0 threshold for problem-definition focus).
Key distinction: True agentic AI (autonomous decision-making with defined parameters and escalation protocols) versus automation with AI labeling. Merlin is the former.
Case evidence:
- Sourcing cycle time reduction: 40-60% in the first 12 months
- Supplier discovery improvement: 3-5x more qualified options identified
- User adoption: 82% sustained usage after 6 months (high for advanced AI)
Who should meet them: Organizations at 7.5+ HFS looking for advanced AI capability and autonomous sourcing sophistication. Merlin works when you have mature sourcing processes that AI can optimize.
What to ask: “Can you walk through a client scenario where Merlin’s agentic capabilities identified opportunities your team hadn’t considered?”
Good answer: Describes specific examples where AI reasoned beyond programmed rules to surface non-obvious supplier options or cost reduction strategies.
Red flag answer: Generic descriptions of “AI-powered recommendations” or “machine learning optimization” without concrete autonomy examples.
Best for: Large enterprises (7.5+ HFS) with sophisticated sourcing organizations ready to delegate tactical decisions to AI while maintaining strategic oversight.
Tealbook (The Cautionary Tale)
- Metaprise Alignment: 7.2
- Agent-Based Adaptability: 2.8
- Strand Commonality: 7.5
- Practitioner Context Fit: 8.0
- Implementation Probability: 5.2
- ROI & Outcome Realization: 6.8
- Overall HFS: 6.50 (±0.4 uncertainty band)
What this means: Exceptional supplier intelligence technology (strong technical scores) undermined by critically low Agent-Based Adaptability. The 2.8 score means they lead almost entirely with technology deployment, with minimal readiness assessment or problem-definition focus.
The lesson: You can have world-class technology capability and still produce mediocre outcomes if you don’t match solutions to client readiness. This is why I publish these scores—to demonstrate that technology capability ≠ implementation success.
Case evidence:
- Successful deployments: 89% occur with clients at 8.0+ HFS
- Struggling deployments: 71% occur with clients below 7.5 HFS
- The pattern: Technology works brilliantly when organizational capability exists; fails to deliver value when capability building is required first
Who should meet them: Organizations at 8.0+ HFS with sophisticated supplier intelligence processes needing better tools. If you’re below 7.5 HFS, Tealbook will struggle to deliver value despite strong technology.
What to ask: “What percentage of your implementations require significant client capability building before the technology delivers value?”
Honest answer should be: “Most implementations with clients below 7.5 HFS require 6-12 months of capability development before our technology produces meaningful outcomes.”
Deflecting answer: “Our platform is intuitive enough that most clients see value immediately,” or “Change management handles any adoption challenges.”
Best for: Enterprise organizations (8.0+ HFS) with mature supplier intelligence capabilities needing advanced data aggregation and market intelligence tools.
The Providers To Approach With Caution (And Why)
I won’t name specific companies, but here are the red flags that send me walking past booths:
Red Flag #1: “AI-powered” as primary differentiation
If AI is your main selling point rather than the problem it solves, you’re selling technology theater. Real agentic AI coordinates multiple agents with conflicting objectives toward shared outcomes (like Zycus Merlin, or the algorithms I built for DND in 1998 that took SLA performance from 51% to 97.3% in 90 days). Fake agentic AI just automates existing processes faster.
The test: Ask them to explain the difference between agentic AI (autonomous decision-making within defined parameters) and automation with AI labels. 80% can’t articulate the distinction.
Red Flag #2: “Change management” as value proposition
Change management assumes the solution is right and people need to adapt. Agent-based methodology assumes people’s current behavior reflects structural problems technology alone won’t fix.
The difference: Change management = “train users to adopt the system.” Capability building = “ensure users can solve problems the system is meant to amplify.”
Red Flag #3: No case studies with measurable, sustained outcomes
“Deployed to 47 countries” isn’t an outcome—it’s an activity. “Reduced MRO costs 23% annually over seven years” is an outcome (DND, 1998-2005). “Improved sourcing cycle time 40%” is an outcome if it persists beyond initial deployment.
What to look for: Multi-year case data showing outcomes sustained through market volatility, not just first-year “success stories.”
Red Flag #4: Can’t articulate readiness assessment approach
If they can’t tell you how they determine whether you’re ready for their solution, they’re selling to anyone who’ll buy—which means high failure rates. Providers serious about implementation success disqualify prospects who aren’t ready.
The question: “What percentage of prospects do you turn away because they’re not ready for your solution?”
Good answer: “20-30%—we’d rather lose a sale than damage our implementation success rate.”
Red flag answer: “We can work with any organization” or “Our platform scales to any maturity level.”
Industry Commentary: What’s Actually Happening
The Consolidation Accelerates
Recent developments since my January 2025 baseline:
- Q1 2025: Coupa acquired Cirtuo (category management gap-fill), TPG/Corpay acquired AvidXchange ($2.2B)
- Q2 2025: Multiple AP automation consolidation, procurement services market restructuring
- Q3 2025: 19.3% logo attrition year-to-date, pace accelerating into Q4
McKinsey and Accenture are doubling down on procurement technology consulting—but leading with frameworks developed for different eras. Practitioners increasingly ask: “Who will still be here in 18 months when we need support?”
The Agentic AI Hype Cycle Peaks
Everyone claims “agentic AI” now. Here’s the filter:
Real agentic AI coordinates multiple agents (internal teams, external suppliers, systems) with conflicting objectives toward shared outcomes. It reasons beyond programmed rules to identify non-obvious solutions.
Fake agentic AI automates existing workflows with better pattern recognition. Valuable, but not autonomous decision-making.
91% of procurement technology vendors now claim GenAI capability (Spend Matters Fall 2025 data). Less than 15% demonstrate true agentic autonomy in client implementations.
What Practitioners Actually Want
In the past 90 days, the most common question I get: “How do we know if we’re ready for [Solution X]?”
Not “which solution is best?” Not “what’s the ROI?” What practitioners want is a readiness assessment, which most vendors don’t provide because it might disqualify the sale.
This is why the Hansen Fit Score exists. Technology capability matters, but Agent-Based Adaptability, Implementation Probability, and organizational readiness determine whether implementations succeed or become expensive, failed experiments.
The 85% Problem Nobody’s Solving
My analysis suggests 12-18% of organizations operate at 7.0+ HFS (sophisticated enough to successfully deploy advanced procurement technology). That means 82-88% of the market isn’t ready for the solutions most vendors are selling.
This explains:
- Why 200+ vendors chase the same 15% of prospects
- Why implementation success rates remain 20-30% industry-wide
- Why my consolidation prediction remains on track (timeline adjusted, direction validated)
- Why mid-market ($500M-$5B revenue) remains chronically underserved
The vendors surviving aren’t those with the best technology. They’re those who assess readiness, build capability when needed, and deploy solutions matched to client maturity.
How to Use Your DPW Time Strategically
Before the conference:
- Assess your organization’s HFS honestly (or contact me—I provide these assessments to practitioners)
- Identify your top 3-5 problems (not solutions you want, problems you have)
- Research which exhibitors focus on those problem domains (not just which have relevant features)
During the conference:
- Ask every vendor: “How do you assess whether we’re ready for your solution?”
- Skip vendors who can’t answer or deflect to “we’ll handle that in implementation”
- Prioritize vendors who match your HFS level—don’t waste time on 8.5+ solutions if you’re a 6.0 organization
- Attend practitioner sessions over vendor pitches—hallway conversations with peers solving similar problems are worth more than expo floor demos
After the conference:
- Compare what vendors promised to what your problems actually require
- Assess whether you need capability building (Focal Point) before technology deployment
- Remember: Technology amplifies whatever organizational state you’re in—aligned or misaligned. If your Finance, Legal, and Procurement functions operate from incompatible ontologies (different frameworks for understanding “value,” “risk,” “success”), deploying technology just automates the conflict faster.
The Real Value of DPW Isn’t the Expo Floor
It’s the hallway conversations with practitioners solving similar problems. It’s discovering that your “unique challenge” is a common pattern with known solutions. It’s learning which vendors overpromise and which deliver.
I’d be spending most of my time in those hallways, not on the expo floor. Because 40+ years of technology and procurement transformation work has taught me: the hardest problems are never technology problems.
They’re ontological alignment problems (do our functions share compatible frameworks for understanding value?). They’re agent coordination problems (why do our teams behave this way, and what constraints create that behavior?). They’re organizational readiness problems (do we have the capability to use this technology successfully?).
Technology just makes the outcomes visible faster—good or bad.
Enjoy Amsterdam.
Jon W. Hansen
Procurement Insights
Creator, Hansen Fit Score Methodology
Want to assess your organization’s HFS before DPW? Contact me. I provide these evaluations for practitioners at no cost for this conference—because knowing whether you’re ready for a solution is more valuable than buying the wrong one.
Track the consolidation: Visit procureinsights.com for the quarterly-updated tracker of procurement technology market exits, covering M&A announcements, shutdowns, and market movements.