Getting the architecture right doesn’t matter if you’re developing the wrong humans.
Sol Rashidi posted something important this week about why AI projects stall between proof-of-concept and scaled success.
Her Executive AI Compass framework addresses the critical architectural question: Should this be “AI-led, human-supported” or “human-led, AI-supported”?
It’s the right question. Organizations need strategic clarity on WHERE to place AI in their operations.
But three recent case studies—Deloitte’s partial refund of a $440,000 contract, Accenture’s reskill-or-exit mandate, and KPMG’s 22 Skills for 2030—reveal a parallel crisis that determines whether those architectures will ever work.
The crisis: We’re developing the wrong humans to operate them.
The Three Case Studies
Deloitte: When AI Architecture Meets Operational Incompetence
In October 2025, Deloitte refunded part of a $440,000 government contract after delivering a report riddled with fake citations, fabricated legal quotes, and basic errors.
They had smart people. They had AI technology. They had a lucrative contract.
What they didn’t have was the operational fluency to ask: “Where’s the evidence for this conclusion?”
The pattern: AI was used as an answer machine instead of a diagnostic tool. Outputs were taken at face value instead of being verified by skilled professionals.
Sol’s framework would have correctly positioned this as “human-led, AI-supported”—humans should verify government reports, AI should assist research.
But the humans operating within that architecture lacked the verification discipline to make it work.
[Full Deloitte analysis here: https://bit.ly/3IOnFTC]
Accenture: The “Reskill-or-Exit” Reality Check
In September 2025, reports surfaced that Accenture would “exit” workers who can’t be retrained for AI collaboration as the company pushes to triple its GenAI revenue.
This isn’t theoretical. This is the market forcing a hard pivot.
The question: Reskill for WHAT exactly?
- Tool skills? (How to use AI platforms)
- Prompt engineering? (How to write better queries)
- Cross-functional business acumen? (How to speak the C-suite language)
What Accenture’s urgency reveals: Companies are betting everything on AI adoption without clarity on what human capabilities make that adoption successful.
They’re optimizing for speed without defining the destination.
[Full Accenture analysis here: https://bit.ly/3ZBJlD9]
KPMG: The 2030 Skills Framework That Repeats 2007 Mistakes
In September 2025, KPMG’s Tanya Ward published a framework outlining 22 procurement skills for 2030—emphasizing AI collaboration, ESG integration, and cross-functional business skills.
The problem? It’s the exact “assimilation vs. expansion” mistake I warned about in 2007.
From my 2007 article:
“There is a significant difference between expansion and assimilation. Expansion acknowledges that purchasing can have a broader role while still recognizing its unique operating framework. Assimilation tends to view procurement as an adjunct of a core practice.”
Ward’s framework positions procurement professionals as generalists who master technology tools rather than developing indigenous procurement capabilities.
Traditional procurement expertise—vendor behavioral pattern recognition, implementation failure prediction, procurement-specific root cause analysis—gets relegated to the “out of focus” zone.
The 2007 warning coming true: “When professionals prioritize generalist skills and become mere users of technology, they risk being the ones ‘exited’ by the AI they are meant to be ‘collaborating’ with.”
Eighteen years later, that theoretical risk has become Accenture’s operational reality.
[Full KPMG analysis here: https://bit.ly/3NW8xJm]
The Connection: Architecture Without Operators
Here’s how these three cases connect to Sol’s Executive AI Compass:
Sol’s framework tells organizations:
- This task should be “AI-led, human-supported”
- That task should be “human-led, AI-supported”
Current skills development tells professionals:
- Learn to use AI tools (command-driven engagement)
- Master cross-functional business skills (generalist approach)
- Technology will handle complexity (automation thinking)
What’s missing: The indigenous expertise and conversational fluency needed to actually verify, validate, and intervene within those architectures.
Sol is solving for WHERE AI should fit.
The industry hasn’t solved for WHO can make it work.
The Pattern: Assimilation Thinking Applied to AI
The same equation-based thinking that’s produced 70-80% failure rates in procurement technology for 25 years is now being applied to AI adoption:
- Deploy technology first
- Assume capability
- Skip readiness assessment
- Train on tools, not thinking
- Wonder why it fails
Deloitte’s failure wasn’t about the AI model. It was about humans trained to accept outputs rather than verify them.
Accenture’s urgency isn’t about AI features. It’s about humans who can’t distinguish between using AI and collaborating with AI.
KPMG’s framework isn’t about future readiness. It’s about positioning procurement professionals as generalists, just as AI makes generalist skills most replaceable.
The Solution: Indigenous Expertise + Conversational Fluency
Sol’s Executive AI Compass is correct: organizations need architectural clarity on AI placement.
But those architectures require humans with two capabilities current training ignores:
1. Indigenous Domain Expertise
Deep, native capabilities that AI can’t easily replicate:
- Pattern recognition from experience
- Implementation failure prediction
- Root cause analysis specific to the domain
- Vendor behavioral understanding
- Risk identification through expertise (not monitoring through tools)
The Deloitte lesson: Without domain expertise, humans can’t verify AI outputs. They don’t know what “right” looks like.
2. Conversational AI Fluency
Bilateral engagement that creates verification loops:
- Questioning outputs rather than accepting them
- Asking for evidence and sources
- Challenging assumptions
- Refining through dialogue
- Learning from corrections
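The verification loop described above can be sketched in code. This is a minimal illustration, not a prescribed implementation: `ask_model` is a hypothetical stand-in for whatever AI interface an organization actually uses, and the canned responses exist only to make the example self-contained.

```python
# A minimal sketch of a bilateral verification loop. ask_model() is a
# hypothetical placeholder for a real AI interface; the canned answers
# below simply make the example runnable.

def ask_model(prompt: str) -> str:
    # Placeholder: in practice this would call your AI platform of choice.
    canned = {
        "Summarize the report.": "The report concludes X. (source: section 3)",
        "Where is the evidence for that conclusion?": "I cannot locate a source.",
    }
    return canned.get(prompt, "No answer.")

def verified_answer(question: str) -> tuple[str, bool]:
    """Ask, then challenge: accept an answer only if the model can point to evidence."""
    answer = ask_model(question)
    evidence = ask_model("Where is the evidence for that conclusion?")
    # Bilateral engagement: an answer without traceable evidence is flagged,
    # not accepted at face value.
    supported = "cannot locate" not in evidence.lower()
    return answer, supported

answer, supported = verified_answer("Summarize the report.")
print(supported)  # the canned follow-up fails verification, so this prints False
```

The point of the sketch is the second question, not the first: command-driven usage stops after `answer`; conversational fluency adds the evidence challenge before anything propagates downstream.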
The Accenture lesson: Tool skills aren’t enough. Thinking skills—grounded in expertise—determine who becomes indispensable vs. exitable.
The “Amen Hallucination” as Proof Point
This week, I published a case study about a fabricated word—“Amen”—that one of my AI models inserted into a colleague’s comment. It fit the tone perfectly. It wasn’t there.
I caught it by asking: “Where did you read ‘Amen’?”
That single question—that conversational loop—caught the hallucination in seconds.
Command-driven usage (ask → receive → accept) would have let it propagate into my response, my strategy, my client communication.
This is what’s missing from current skills frameworks: The discipline to question, verify, and validate—grounded in expertise deep enough to recognize when something’s wrong.
[Read the full case study: https://bit.ly/4hmle7q]
Why This Matters for Sol’s Framework
Sol’s Executive AI Compass helps organizations decide:
- Where AI should lead
- Where humans should intervene
- How to align both for scalable, responsible growth
But “human intervention” assumes humans with:
- Expertise to recognize when intervention is needed
- Fluency to verify AI outputs conversationally
- Discipline to question rather than accept
Current skills development produces humans trained to:
- Use tools (not verify outputs)
- Speak business language (not exercise domain expertise)
- Collaborate with AI (without defining what that means operationally)
Result: Perfectly architected “AI-led, human-supported” systems operated by humans trained to be generalists who accept outputs at face value.
That’s Deloitte’s mistake at scale.
The Complementary Frameworks
Sol Rashidi’s Executive AI Compass and conversational AI fluency aren’t competing methodologies—they’re complementary layers:
Strategic Layer (Sol’s Compass):
- Architectural decisions
- AI placement clarity
- Role definition
- Scalable growth planning
Operational Layer (Conversational AI Fluency):
- Execution methodology
- Verification discipline
- Bilateral learning loops
- Indigenous expertise application
Sol shows you WHERE AI should fit.
Conversational AI fluency shows you HOW to work within that architecture effectively.
Organizations need both.
The Urgent Question
Sol’s framework forces the right question: “Should this be AI-led or human-led?”
But there’s a prerequisite question current skills frameworks ignore:
“Do we have humans capable of operating within either architecture?”
Because if your humans are trained as:
- Tool users rather than expert practitioners
- Generalists rather than domain specialists
- Output acceptors rather than verification agents
Then it doesn’t matter how correctly you architect the AI placement.
The humans will fail at their assigned role—whether that’s leading, supporting, verifying, or intervening.
The Path Forward
For organizations implementing Sol’s Executive AI Compass:
1. Architect correctly (Sol’s framework)
- Map use cases by AI placement
- Define clear human-AI roles
- Align for responsible growth
2. Develop correctly (Conversational AI fluency)
- Train indigenous expertise first
- Build verification discipline
- Practice bilateral engagement
- Measure by outcomes, not tool usage
3. Validate continuously
- Runtime evidence that skills change outcomes
- Error detection and correction rates
- Time-to-intervention metrics
- Override frequency and accuracy
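The continuous-validation metrics listed above can be computed from an intervention log. The sketch below is illustrative only: the record fields and log structure are assumptions about what such a log might contain, not a reference to any specific tool.

```python
# A sketch of runtime validation metrics for human-AI operations, using
# hypothetical intervention-log records (field names are illustrative).

intervention_log = [
    # each record: was an error detected, was it corrected, minutes until a
    # human intervened, and was the human override correct?
    {"detected": True,  "corrected": True,  "minutes_to_intervene": 4,  "override_correct": True},
    {"detected": True,  "corrected": False, "minutes_to_intervene": 12, "override_correct": False},
    {"detected": False, "corrected": False, "minutes_to_intervene": None, "override_correct": None},
    {"detected": True,  "corrected": True,  "minutes_to_intervene": 2,  "override_correct": True},
]

def validation_metrics(log):
    """Turn raw intervention records into the outcome measures, not tool-usage counts."""
    detected = [r for r in log if r["detected"]]
    times = [r["minutes_to_intervene"] for r in detected]
    overrides = [r for r in detected if r["override_correct"] is not None]
    return {
        "detection_rate": len(detected) / len(log),
        "correction_rate": sum(r["corrected"] for r in detected) / len(detected),
        "avg_time_to_intervention": sum(times) / len(times),
        "override_accuracy": sum(r["override_correct"] for r in overrides) / len(overrides),
    }

metrics = validation_metrics(intervention_log)
print(metrics["detection_rate"])  # 0.75
```

Whatever the exact schema, the design choice matters: these measure whether human skills changed outcomes (errors caught, corrected, and overridden accurately), not whether tools were used.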
The Bottom Line
Sol Rashidi’s Executive AI Compass is exactly what organizations need for architectural clarity.
But three case studies—Deloitte’s contract failure, Accenture’s urgency, KPMG’s framework—reveal the missing layer:
Architecture decisions don’t matter if you’re developing the wrong humans to operate them.
Current skills development is producing:
- Generalists in an age requiring specialists
- Tool users in architectures requiring expert practitioners
- Output acceptors in systems requiring verification agents
The fix isn’t to reject cross-functional skills or AI collaboration.
The fix is to anchor them to indigenous expertise and conversational fluency—the capabilities that make humans indispensable within Sol’s architectures rather than replaceable by them.
Sol’s framework shows WHERE AI fits.
We need parallel frameworks for developing WHO makes it work.
Because in AI implementation, how you develop the humans matters just as much as how you architect the technology.
Jon Hansen, Creator, Hansen Method | RAM 2025 Author, The October Diaries
📘 The October Diaries – Available Here
Sol Rashidi’s Executive AI Compass Exposes The Missing Layer: Why AI Architecture Fails Without Indigenous Expertise
Posted on October 24, 2025