I’ve been having valuable conversations this week with Michael Lamoureux, Bob Ferrari, and Joël Collin-Demers about why procurement transformations fail.
The consensus: Platform architecture thinking (Suites vs. Platforms) is advancing. This is excellent for the industry.
What’s missing: Asking the right Question #1 before deployment.
Because if you get Q1 wrong, everything else is fruit of the poisonous tree.
THE WRONG QUESTION #1 (Process-Focused)
Most consultants, analysts, and technology providers ask:
“What’s your ordering process?”
This seems logical. Map the process as designed:
Requisition → Approval → PO → Supplier → Delivery
Then choose the right platform architecture, deploy best-of-breed modules, orchestrate beautifully across systems.
And you still fail 70-95% of the time.
Why?
Because the process map doesn’t reveal the behavioral drivers and cross-system dependencies that determine actual outcomes.
You’ve automated the DESIGNED process, not the ACTUAL system.
THE RIGHT QUESTION #1 (Behavioral-Focused)
“What time of day do orders come in?”
This sounds like a strange question. What does timing have to do with platform architecture?
Everything.
CASE STUDY: DEPARTMENT OF NATIONAL DEFENCE (Late 1990s)
The Stated Problem:
- Contract required: 90% next-day delivery
- Actual performance: 51% next-day delivery
- Request: “Automate our MRO procurement system”
Standard Approach (Wrong Q1):
A traditional consultant would have asked: “What’s your ordering process?”
They would have mapped:
- Service technician identifies part need
- Submits requisition
- Procurement reviews/approves
- PO sent to supplier
- Supplier ships part
- Part arrives (or doesn’t)
Then they would have said: “You need a platform to orchestrate this process across systems.”
They would have deployed technology.
Result: Still 51% delivery.
Why? Because the process map was a lie.
WHAT “WHAT TIME DO ORDERS COME IN?” REVEALED
Answer: Most orders came in at 4:00 PM.
That single data point exposed the entire ecosystem of failure:
Agent 1: Service Department Technicians
- Their incentive: Maximize service calls completed per day
- Policy said: Order parts immediately after each service call
- Reality: They sandbagged all orders until end of shift (4 PM)
- Why: Ordering took time; batching at day’s end let them maximize their call count
- Their success metric: Calls completed ✅
- Their blind spot: Call close rate (horrific, because parts weren’t arriving to complete repairs)
Agent 2: Dynamic Flux Pricing Window
- 9:00 AM price: $100 for a computer component
- 4:00 PM price: $1,000 for the same component
- Impact: Late ordering = consistently paying premium prices
- Why it mattered: IT infrastructure parts exhibited “dynamic flux” characteristics—prices changed throughout the day based on real-time supply/demand
Agent 3: Small/Medium Suppliers
- Reality: Most suppliers were small US companies
- Challenge: Low customs sophistication
- Impact: Parts frequently held up at Canadian border
- Why: They didn’t know how to properly format customs clearance documentation for priority processing
Agent 4: Customs Clearance Process
- Rule: Computer parts could be cleared on priority basis
- Requirement: Properly formatted documentation
- Reality: Suppliers didn’t know the format
- Impact: Delays compounded the 4 PM ordering problem
Agent 5: Courier Dispatch
- Process: Manual notification to UPS
- Impact: Additional delays in the chain
- Opportunity: Could be automated if integrated
The entire ecosystem of failure was invisible to a process map.
No platform architecture—suite or orchestrated—would have fixed this without understanding these behavioral drivers.
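Operationalizing that first question is cheap. As a minimal sketch (invented field names, not the actual DND order log), bucketing purchase-order timestamps by hour of day makes the 4 PM spike impossible to miss:

```python
# Hypothetical illustration: surfacing the timing signal from a raw order log.
# Field names and values are invented; any PO system's submission timestamps work.
from collections import Counter
from datetime import datetime

order_log = [
    {"po": "PO-1041", "submitted": "1998-03-02T16:05:00"},
    {"po": "PO-1042", "submitted": "1998-03-02T16:12:00"},
    {"po": "PO-1043", "submitted": "1998-03-02T09:30:00"},
    # ... thousands more rows in practice
]

# Bucket submissions by hour of day.
by_hour = Counter(
    datetime.fromisoformat(row["submitted"]).hour for row in order_log
)

# A spike at hour 16 (4 PM) is the behavioral signal a process map never shows.
for hour, n in sorted(by_hour.items()):
    print(f"{hour:02d}:00  {'#' * n}  ({n} orders)")
```

No process map contains this. The order log does, if you think to ask.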
FRUIT OF THE POISONOUS TREE
In legal doctrine, if initial evidence is obtained illegally, everything derived from it is inadmissible. The foundation is corrupted, so the entire case collapses.
In procurement transformation:
Wrong Q1 (“What’s your process?”)
↓
Wrong process map (shows designed flow, not actual behavior)
↓
Wrong platform configuration (automates the map, not reality)
↓
Wrong deployment (technology works as designed but fails in practice)
↓
70-95% failure rate
The foundation was poisoned from the start.
THE RIGHT APPROACH: ADDRESSING THE ECOSYSTEM
Once we understood the behavioral drivers, the solution became clear:
For Service Technicians (Agent 1):
Created a system so easy to use that ordering after each call became EASIER than sandbagging:
- One-click ordering
- No complex forms
- Instant confirmation
- They gradually learned: Order immediately → Parts arrive on time → Better call close rates → Better performance reviews
For Pricing Windows (Agent 2):
Built self-learning algorithms (yes, in the late 1990s) that:
- Weighted supplier performance on delivery reliability, not just price
- Captured dynamic flux pricing advantages (order at 9 AM, not 4 PM)
- Result: 23% cost reduction over 7 years
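A minimal sketch of what reliability-weighted scoring looks like (invented weights and numbers, not the actual late-1990s algorithm):

```python
# Sketch: reliability-weighted supplier selection. Weights and figures are
# illustrative assumptions, not the production system's parameters.

def supplier_score(on_time_rate: float, quoted_price: float,
                   best_price: float, w_reliability: float = 0.7) -> float:
    """Higher is better. Reliability dominates price, so a cheap but
    unreliable supplier cannot win the order on price alone."""
    price_component = best_price / quoted_price  # 1.0 for the cheapest quote
    return w_reliability * on_time_rate + (1.0 - w_reliability) * price_component

# Quotes pulled at the 9 AM pricing window, not at 4 PM.
cheap_but_late   = supplier_score(on_time_rate=0.51, quoted_price=100.0, best_price=100.0)
pricier_reliable = supplier_score(on_time_rate=0.97, quoted_price=120.0, best_price=100.0)
print(cheap_but_late, pricier_reliable)  # ~0.66 vs ~0.93: reliability wins
```

The 51%-reliable cheapest quote loses to a 97%-reliable supplier quoting 20% more. That is exactly the trade the 90% next-day contract demanded.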
For Suppliers (Agent 3):
Made it ridiculously easy:
- Single PDF contained: Purchase Order + Pre-printed UPS waybill + Pre-formatted customs clearance documentation
- They didn’t need customs expertise
- They didn’t need to call a courier
- Print three forms, pack the box, done
For Customs (Agent 4):
- Pre-formatted documentation for priority clearance
- No supplier training required
- System generated compliant forms automatically
For Courier (Agent 5):
- Integrated directly with UPS systems
- Waybill number generated automatically
- UPS dispatched automatically when PO issued
- No manual coordination required
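To make the mechanics concrete, here is a minimal sketch of the single-event idea behind Agents 3 through 5: one PO issuance generates the waybill, the customs form, and the courier dispatch. All names are invented for illustration; the real system predates modern courier APIs, and none of the actual integration is shown.

```python
# Hypothetical sketch: one PO event drives the whole fulfillment package.
from dataclasses import dataclass
from itertools import count

_waybill_seq = count(1)  # stand-in for a real waybill-numbering integration

@dataclass
class FulfillmentPackage:
    purchase_order: str   # the PO document reference
    ups_waybill: str      # pre-generated waybill number
    customs_form: str     # pre-formatted priority-clearance documentation

def issue_po(part_no: str, supplier_id: str) -> FulfillmentPackage:
    po = f"PO-{part_no}-{supplier_id}"
    # Agent 5: waybill number generated automatically, no phone call to the courier.
    waybill = f"1Z-{next(_waybill_seq):08d}"
    # Agent 4: customs documentation rendered in the priority-clearance format,
    # so the supplier needs zero customs expertise.
    customs = f"Priority-clearance form for {po} via waybill {waybill}"
    # Agent 3 receives all three in a single PDF: print, pack the box, done.
    return FulfillmentPackage(po, waybill, customs)

print(issue_po("RAM-64MB", "SUP-017"))
```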
THE RESULTS
3 months: 51% → 97.3% next-day delivery
7 years: 23% sustained cost reduction (dynamic flux pricing captured)
18 months: Supplier base organically rationalized from 23 companies to 3 (not forced consolidation—natural market efficiency when barriers removed)
Service department: Call close rates improved dramatically as parts began arriving consistently
Procurement department: Met contract requirements, avoided losing the contract
Watch the video walkthrough of the DND case
WHAT IF WE’D ASKED THE WRONG Q1?
Scenario: Deploy platform architecture without asking “what time do orders come in?”
Result:
- ✅ Platform deployed successfully
- ✅ Best-of-breed modules integrated beautifully
- ✅ APIs working flawlessly
- ✅ Process orchestrated across systems
- ❌ Still 51% next-day delivery
Why?
- Service techs still sandbagging (their incentives unchanged)
- Still ordering at 4 PM (behavior unaddressed)
- Still paying $1,000 instead of $100 (pricing window missed)
- Suppliers still confused about customs (no documentation help)
- Parts still delayed at border (ecosystem unaddressed)
We would have automated failure at 51%.
That’s fruit of the poisonous tree.
THE FOUNDATION (1998): METAPRISE MODEL
This wasn’t an accident. I documented this architectural shift in the Metaprise Model (1998)—moving from centralized, rigid ERP systems to decentralized, adaptive network architectures.
The core insight then is the same as today:
Organizations succeed when they orchestrate across ecosystems, not when they force everything through monolithic ownership.
But architecture alone doesn’t determine success.
The DND case proves it:
- Right architecture (orchestrated network) ✅
- Right question (behavioral drivers) ✅
- Right outcome (97.3% delivery) ✅
vs.
- Right architecture (orchestrated platform) ✅
- Wrong question (process mapping) ❌
- Wrong outcome (51% delivery continues) ❌
THE MISSING LAYER IN PLATFORM DISCUSSIONS
Layer 1 (Platform Architecture): What technology can do
Layer 2 (Behavioral Readiness): Whether your organization can execute
Most platform discussions focus only on Layer 1.
They answer:
- Which architecture to choose (Suite vs. Platform)
- Why platforms are more flexible
- How orchestration works
- When to deploy what modules
They don’t answer:
- What factors OUTSIDE your procurement process influence outcomes?
- What behavioral drivers determine actual vs. designed performance?
- What cross-system dependencies exist?
- How do agent incentives align (or conflict)?
- What readiness gaps must close before deployment?
That’s where the Hansen Fit Score comes in.
THE DIAGNOSTIC QUESTIONS (After Getting Q1 Right)
Question #1 must come first: “What factors OUTSIDE your process influence outcomes?”
Not: “What’s your process?”
But: “What time do orders come in?” (or equivalent behavioral insight question)
Only then can you ask:
2. Process Maturity: Can you document current-state end-to-end with exceptions?
(If you can’t map reality, you can’t improve it)
3. Data Quality & Ownership: Who owns master data? What’s the error rate?
(If spend data is only 70% categorized, ML/AI amplifies garbage; see the sketch after this list)
4. Stakeholder Alignment: Do Finance, Operations, IT agree on success?
(DND: Service dept success ≠ Procurement success)
5. Change Capacity: Team bandwidth? Who’s accountable for adoption?
(If already at 120% capacity, new system = failure)
6. Governance: Who decides when system conflicts with business logic?
(Without clear escalation, customization becomes a virus)
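As a concrete illustration of question 3, a minimal data-quality gate (hypothetical record layout; the 95% bar is illustrative, not a Hansen Fit Score constant):

```python
# Minimal data-quality gate for diagnostic question 3. If too much spend is
# uncategorized, ML/AI trained on it amplifies the garbage rather than fixing it.

def categorization_rate(spend_records: list[dict]) -> float:
    categorized = sum(1 for r in spend_records if r.get("category"))
    return categorized / len(spend_records) if spend_records else 0.0

spend = [
    {"amount": 1200.0, "category": "IT Hardware"},
    {"amount": 450.0,  "category": None},          # uncategorized
    {"amount": 90.0,   "category": "MRO Parts"},
]

rate = categorization_rate(spend)
READINESS_THRESHOLD = 0.95  # illustrative bar, an assumption for this sketch
status = "ready" if rate >= READINESS_THRESHOLD else "not ready for ML/AI"
print(f"Categorized: {rate:.0%} -> {status}")
```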
The Hansen Fit Score measures these across 23 dimensions.
But dimension #1—understanding behavioral drivers before mapping processes—is the foundation.
Get that wrong, and the other 22 dimensions are built on poisoned fruit.
THE PATTERN ACROSS 27 YEARS
Since 1998, I’ve tracked the same pattern across five technology waves:
Traditional AI (1998) → Deployed rule-based systems without assessing process maturity
RPA (2007) → Automated broken processes, created automated chaos
ML/Predictive (2010) → Deployed without data literacy or quality assessment
Gen-AI (2023) → ChatGPT frenzy, deployed before governance
Platform Architecture (2025) → Choosing platforms without behavioral readiness
Same lesson every time: Technology capability ≠ Organizational readiness
Same failure mode: Wrong Q1 → Everything downstream is corrupted
REAL-WORLD EXAMPLES: GETTING Q1 RIGHT VS. WRONG
✅ Virginia (2013): Asked the Right Questions
Context:
- Another department wanted to deploy PeopleSoft ERP procurement module
- Leadership paused: “Are we ready for this?”
Action:
- Brought in Forrester to assess behavioral readiness
- Forrester asked behavioral/organizational questions (not just process)
- Assessment: “You’re not ready. Don’t deploy.”
Decision:
- Virginia killed the project
Result:
- Saved millions
- Avoided catastrophic failure
- Waited until organizational readiness improved
Why it worked: Asked Q1 correctly BEFORE deployment
🔴 Daedong (October 2025): Asked the Wrong Questions (or Skipped Q1)
Context:
- Deployed procurement technology
- Understood what the platform could do (Layer 1) ✅
What was missing:
- Didn’t assess organizational readiness (Layer 2) ❌
- Didn’t ask “what factors outside process influence outcomes?” ❌
- Didn’t map behavioral drivers ❌
- Deployed based on capability, not readiness ❌
Result:
- $11.4 billion lawsuit
- Failed implementation
- October 2025 headlines
Why it failed: Wrong Q1 (or no Q1) → Fruit of poisonous tree
THE COMPLETE PICTURE: LAYER 1 + LAYER 2
Platform architecture thinking is necessary (Layer 1).
Thought leaders like Joël Collin-Demers, Kelly Barner, Michael Lamoureux, Tim Cummins, and others are each advancing this conversation in valuable ways.
Joël’s framework is excellent:
- Suites own everything
- Platforms orchestrate everything
- Architecture matters
This is essential thinking.
But platform architecture becomes sufficient only when combined with behavioral readiness assessment (Layer 2).
The DND case proves it:
- Platform orchestration alone = would have failed at 51%
- Platform orchestration + behavioral readiness = 97.3% success
WHAT THE HANSEN FIT SCORE MEASURES
23 dimensions of organizational readiness, including:
- Behavioral Drivers (Q1: What factors outside process influence outcomes?)
- Process Maturity (Can you document reality, not just design?)
- Data Quality & Literacy (Who owns it? What’s the error rate?)
- Stakeholder Alignment (Do departments define success the same way?)
- Change Management Capacity (Bandwidth and accountability?)
- Cultural Readiness (Resistance patterns? Change fatigue?)
- Leadership Alignment (Do executives agree on objectives?)
- Technical Capability Gaps (Can your team execute?)
- Governance Structure (Who decides when conflicts arise?)
- Cross-System Dependencies (What other agents influence outcomes?)
…and 13 more dimensions
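The scoring methodology itself is not reproduced here, but a minimal sketch captures the structural point: dimension #1 gates the rest, so strong scores elsewhere cannot rescue a poisoned foundation. Every weight and value below is invented.

```python
# Illustrative sketch only: not the actual Hansen Fit Score methodology.
# It encodes the article's claim that dimension #1 gates everything else.

DIMENSIONS = {  # dimension -> (score 0-10, weight); all values are assumptions
    "behavioral_drivers":    (3, 0.15),
    "process_maturity":      (8, 0.10),
    "data_quality":          (7, 0.10),
    "stakeholder_alignment": (6, 0.10),
    # ... the remaining dimensions of the 23 would follow
}

def fit_score(dimensions: dict) -> float:
    behavioral, _ = dimensions["behavioral_drivers"]
    if behavioral < 5:
        # Poisoned foundation: strong scores elsewhere cannot compensate.
        return 0.0
    total_weight = sum(w for _, w in dimensions.values())
    return sum(s * w for s, w in dimensions.values()) / total_weight

print(fit_score(DIMENSIONS))  # 0.0: everything downstream is corrupted
```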
THE SEQUENCE THAT WORKS
1. Assess readiness FIRST (Hansen Fit Score, starting with Q1)
2. Choose architecture (Suite vs. Platform based on context AND readiness)
3. Match deployment to readiness level (Iterative, not all-at-once)
4. Reassess continuously (Organizations evolve, readiness changes)
Result: 80% success rate vs. industry 20-30%
WHY THIS MATTERS NOW
The market is finally recognizing what the Metaprise Model predicted in 1998: orchestrated platforms outperform monolithic suites.
That’s progress.
But we can’t repeat the mistakes of previous technology waves—deploying based on capability without assessing readiness.
Layer 1 without Layer 2 still fails 70-95% of the time.
Both required. Neither is sufficient alone.
THE INDUSTRY OPPORTUNITY
There’s room for everyone in this conversation:
Platform architecture thought leaders (Joël, Kelly, Michael, Tim, others):
Advancing Layer 1 (technology capability)
Behavioral readiness practitioners (October Diaries, Hansen Fit Score):
Advancing Layer 2 (organizational capability)
Together:
Complete solution for sustainable transformation
As I told Kelly Barner: there’s enough room for everyone to share in their individual success and in the industry’s collective success.
The more voices advancing BOTH layers—starting with the right Question #1—the better the outcomes for everyone.
THE CALL TO ACTION
Before your next platform deployment:
Ask Question #1: “What factors OUTSIDE our procurement process influence outcomes?”
Not: “What’s our process?”
Look for behavioral drivers:
- What time do orders come in? (timing patterns)
- Why that time? (incentive structures)
- Who are the other agents? (cross-system dependencies)
- What are their success metrics? (alignment or conflict?)
- What hidden strands connect seemingly disparate issues? (strand commonality)
Then ask the follow-up questions (process maturity, data quality, stakeholder alignment, etc.)
But get Q1 right first.
Because if you get Q1 wrong, everything else is fruit of the poisonous tree.
What’s your experience?
Have you seen implementations fail because Question #1 was wrong?
Have you seen the “what time do orders come in?” equivalent in your organization—the behavioral insight that process mapping missed?
Share your story in the comments. Let’s learn from each other.
Join the conversation on LinkedIn: Michael Lamoureux’s discussion about what questions buyers should ask
More on the Evolution Trap and behavioral readiness assessment:
The AI Evolution Trap: Why Capability Stacks Force Organizations Into Predictable Frames