Getting It Right Rather Than Being Right: How Hackett Group’s 2025 Research Validates 27 Years of Readiness-First Evidence

Posted on November 19, 2025



Introduction: A Question of Focus

With a focus on getting it right rather than being right, I’ve spent 27 years challenging the “readiness-first” thesis through government-funded scientific research, systematic tracking of practitioner outcomes, and continuous pattern documentation across technology waves—from e-procurement platforms to artificial intelligence.

The thesis has held consistently: Organizations that assess readiness before technology deployment achieve 85-95% success rates. Organizations that skip Phase 0 assessment experience 60-80% failure rates.

In October 2025, The Hackett Group published research that independently validates what the evidence has shown for nearly three decades. Their paper, “From Risk to Readiness: The Executive Playbook for Scaling AI in Procurement,” represents an important milestone—not because it proves I was right, but because it demonstrates the pattern is undeniable.

When a major consulting firm arrives at the same conclusions through independent observation, it’s time to ask: Why did it take 27 years? And what can we do differently now?


The Hackett Group’s 2025 Findings: Validation of the Readiness Gap

The Statistics That Tell The Story

The Hackett Group’s research reveals what practitioners have experienced but the industry has been reluctant to acknowledge:

  • Only 53% of procurement teams have adopted AI—and only 4% on a large-scale basis
  • Only 2% of AI adopters report results exceeding expectations
  • More than 50% believe AI has fallen short of promises
  • AI readiness rated lowest in maturity among all 2025 procurement improvement initiatives by a significant margin
  • 64% of executives expect AI to be transformational, but few are prepared

Their Core Finding

“Successful procurement organizations don’t simply adopt AI. They reimagine specific processes and work steps with how AI can assist, augment, and potentially autonomously move forward on actions and decisions traditionally left to knowledge workers.”

Translation: Technology capability ≠ transformation success. Organizational readiness = the determining variable.

Hackett’s Six Pillars of AI Readiness

The research identifies six critical dimensions:

  1. Strategy & Leadership: Readiness of procurement leadership to take risks, prioritize ideas, and engage resources to ideate on opportunities for new performance advantage
  2. Governance & Ethics: Readiness of the organization’s actions and decision-making to shape how it governs individual solutions and responsible/ethical use of AI
  3. Organization, Culture & Talent: Readiness of the procurement function’s ability to enable its talent to evolve and work with and alongside AI to deliver new performance advantage
  4. Technology Enablement: Readiness of the organization’s enabling technology ecosystem, practices, and talent to support integration of AI across procurement processes
  5. Data Management & Architecture: Readiness of the organization’s ability to manage and utilize quality data and mitigate data risks in procurement’s AI solutions
  6. AI Enablement: Readiness of resources, actions, and decisions to establish and continuously manage scalable, reliable, and cost-effective AI operations

Why This Matters

This is significant. A major consulting firm with global reach is now telling the industry what the evidence has shown for decades: technology capability is necessary but not sufficient for transformation success.

The question organizations must answer before deployment isn’t “Which AI platform should we select?” but rather “Are we organizationally ready to leverage AI effectively?”


The 27-Year Foundation: From RAM 1998 to Hansen Fit Score 2025

Where The Evidence Began

In 1998, I received government funding through Canada’s Scientific Research & Experimental Development (SR&ED) program to build the Relational Acquisition Model (RAM) for Canada’s Department of National Defence.

The Challenge: 51% next-day delivery accuracy on critical parts was causing technicians to delay service calls, impacting military readiness and customer satisfaction.

The Approach: Instead of deploying technology first, I asked a seemingly simple question: “What time of day do orders come in?”

That question revealed seven invisible strands that collectively impacted outcomes:

  1. Service department behavior (sandbagging orders to hit daily targets)
  2. Dynamic pricing flux ($900 at 9 AM → $1,000 at 4 PM)
  3. Supplier geographic location (US-based = customs requirements)
  4. Customs clearance timing
  5. Courier coordination (UPS integration needs)
  6. Call close rates (impacted by part arrival times)
  7. Overall system performance

The Result: 51% → 97.3% next-day delivery accuracy in 3 months.

But here’s what mattered more than the technology: The success came from understanding organizational behavior patterns, supplier dynamics, and process interdependencies BEFORE building the system that would respect those realities.

The RAM platform enabled the results. The organizational readiness work delivered them.

Pattern Consistency Across 27 Years

From 1998 to 2025, I’ve tracked transformation outcomes across technology waves:

2007 – Virginia eVA (E-Procurement): As I documented in my September 2007 article “Yes, Virginia”:

“They avoided the trap of eVA becoming a software project… and thereby shifted the emphasis from an exercise in cost justification to one of process understanding and refinement. While the Ariba application has done the job it was required to do, eVA’s effectiveness has little to do with the technology and more to do with the methodology the Virginia brain trust employed.”

2007 – Mendocino/DUET Failure: Microsoft and SAP’s joint project to hide SAP complexity behind familiar Office interfaces failed despite massive investment because:

  • Familiar interface ≠ organizational readiness
  • Ease of use ≠ behavioral alignment
  • Technology layer ≠ transformation architecture

2010s – Procurement Platform Disappointments: Catalog-based systems, self-service BI platforms, digital transformation initiatives—consistent 60-80% failure rates when organizations deployed capability without readiness assessment.

2023 – Retailer Vendor Rationalization: A client with “perfect strategy-culture alignment” (according to its consultants) was paying a 23% premium over market within two years of aggressive supplier base compression. Why? The organizational readiness assessment was skipped.

2025 – o9 Solutions Case Studies: Recent analysis of AI planning platform case studies revealed the same pattern: Success attributed to “AI magic” actually came from distribution restructuring (EchoStar), 3,000-location standardization (Zamp), and global system integration (New Balance). The platform enabled. The organizational readiness work delivered.

The Universal Pattern

What changed: Technology buzzword (e-procurement → ERP → catalogs → BI → AI)

What didn’t change: Organizations skipping readiness assessment experienced 60-80% failure rates, while those conducting Phase 0 evaluation achieved 85-95% success rates.

The pattern has been consistent, documented, and reproducible for 27 years.


The Convergence: How Hackett’s Framework Aligns With Hansen Fit Score

Mapping The Dimensions

When I reviewed Hackett’s six pillars, the alignment was immediate: we’re measuring the same thing with slightly different labels.

Independent Validation

This convergence is significant because:

  1. Not derivative: Hackett didn’t adopt my framework—they arrived at the same conclusions through independent client observation
  2. Pattern undeniability: When separate researchers analyzing the same problem reach the same conclusion, the pattern is real
  3. Market readiness: Major consulting firms validating readiness-first thinking suggests the industry may finally be ready to adopt it

What This Tells Us

The 60-80% failure rate is so persistent and observable that any serious researcher analyzing transformation patterns will identify organizational readiness as the missing variable.

I got there in 1998 through government-funded research.
The Hackett Group got there in 2025 through client observations.

The question isn’t who was first. The question is: Why did it take the industry 27 years to recognize what the evidence consistently showed?


How Hackett’s Four Readiness Levels Map to Hansen Fit Score Bands

Hackett’s framework identifies four organizational maturity levels. These align directly with Hansen Fit Score probability bands—but with a critical difference: HFS provides go/no-go thresholds, not just descriptive categories.

The Critical Difference

Hackett describes where organizations are on the maturity curve with qualitative characteristics.

Hansen Fit Score quantifies the probability of success at each level and explicitly recommends when NOT to proceed.

This distinction matters enormously:

  • Hackett’s implication: Everyone should move along the curve toward “Pioneering” by following the three essential steps (Assess, Explore, Understand impact on people)
  • Hansen Fit Score’s explicit guidance: “If you’re at HFS 47/100 (‘Adopting’ level), deploying AI now creates 60-70% failure probability. You have eight critical gaps. Here’s the 4-6 month remediation roadmap. Deploy after reaching HFS 70+, not before.”

This isn’t pessimism—it’s probability-based decision support validated by 27 years of pattern documentation and now independently confirmed by Hackett’s own findings: only 2% of AI adopters exceed expectations because most organizations deploy at “Initiating” or “Adopting” levels when they should be remediating first.

The readiness levels converge. The go/no-go thresholds differentiate.


The 27-Year Difference: Framework vs. Methodology

While Hackett’s acknowledgment of readiness-first principles is exciting and validates decades of pattern documentation, there are meaningful differences between a 2025 framework and a methodology refined over 27 years.

Hackett Group provides the “what”: Six dimensions to assess
Hansen Fit Score provides the “how”: Quantitative measurement methodology with predictive validation

Let me be specific about the differences:

1. Foundation & Validation

Hackett Group (2025):

  • Based on current client observations and industry surveys
  • Published October 2025
  • Reflects contemporary AI adoption challenges
  • Case studies: McCormick, Micron (Innovation Award winners)

Hansen Fit Score (1998-2025):

  • Government-funded SR&ED research (1998)
  • RAM system: 51% → 97.3% delivery accuracy in 3 months
  • Virginia eVA (2007): “Effectiveness has little to do with technology”
  • 27-year longitudinal validation across technology waves
  • Pattern consistency from e-procurement → ERP → BI → AI
  • Pharmaceutical client: $55K engagement prevented multi-million failure
  • Retailer example: 23% premium despite “perfect alignment”

The Difference: Current observations vs. longitudinal scientific validation


2. Methodology: Qualitative vs. Quantitative

Hackett Group (2025):

  • Six pillars (Strategy & Leadership, Governance & Ethics, Organization/Culture/Talent, Technology Enablement, Data Management, AI Enablement)
  • Four readiness levels (Initiating → Adopting → Innovating → Pioneering)
  • Qualitative assessment: “Organizations at this level are open to learning…” / “These organizations use AI for incremental gains…”
  • General categorization: Helps identify where you fall on maturity spectrum

Hansen Fit Score (1998-2025):

  • Five dimensions (Behavioral Alignment, Process Maturity, Data Intelligence, Technology Architecture, Execution Capacity)
  • 23 measurable characteristics across dimensions
  • Quantitative scoring methodology: Numerical assessment of each characteristic
  • Hansen Fit Score output: 0-100 scale readiness score
  • Predictive success probability based on score ranges

The Difference: Framework for discussion vs. measurement tool for prediction
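
To make the contrast concrete, below is a minimal sketch of how a dimension-based readiness score of this kind can be computed. The five dimension names and the 0-100 roll-up follow the Hansen Fit Score description above; the individual characteristic ratings, their grouping into 23 characteristics, and the unweighted averaging are illustrative assumptions, since the actual instrument and its weights are not published in this post.

```python
# Illustrative sketch of a dimension-based readiness score.
# Dimension names follow the Hansen Fit Score description above;
# the ratings, groupings, and unweighted averaging are hypothetical.
from statistics import mean

# 23 characteristics rated 1.0-5.0, grouped into five dimensions
# (values chosen to reproduce the dimension scores in the scenario below).
ratings = {
    "Behavioral Alignment":    [2.0, 2.5, 1.8, 2.1, 2.1],
    "Process Maturity":        [2.2, 2.0, 2.4, 2.2],
    "Data Intelligence":       [2.5, 2.1, 2.3, 2.3, 2.3],
    "Technology Architecture": [2.6, 2.2, 2.4, 2.4],
    "Execution Capacity":      [1.9, 2.0, 1.8, 1.9, 1.9],
}

def dimension_score(characteristics: list[float]) -> float:
    """Average a dimension's 1-5 ratings onto a 0-100 scale."""
    return mean(characteristics) / 5.0 * 100

def overall_score(ratings: dict[str, list[float]]) -> float:
    """Unweighted mean of dimension scores. The worked example later in
    this post reports 47/100 overall, which implies dimension weights
    the post does not publish."""
    return mean(dimension_score(c) for c in ratings.values())

for name, characteristics in ratings.items():
    print(f"{name}: {dimension_score(characteristics):.0f}/100")
print(f"Overall readiness score: {overall_score(ratings):.0f}/100")
```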


3. Assessment Approach

Hackett Group (2025):

“A comprehensive readiness assessment should cover each of the six pillars described… This will provide specific recommendations for actions you can take to prepare your procurement team to achieve new levels of performance advantage.”

Approach:

  • Evaluate each pillar
  • Identify readiness level
  • Receive general recommendations
  • Three essential steps: Assess, Explore, Understand impact on people

Hansen Fit Score (1998-2025):

  • Collaborative working sessions (not vendor-driven RFP process)
  • Root cause analysis across 23 characteristics
  • Strand commonality identification: Revealing invisible connections between seemingly disparate factors
  • Quantified baseline: “Here’s your current readiness score: 47/100”
  • Dimensional breakdown: Scores for each of five dimensions
  • Gap analysis: “Here are the 8 specific gaps that will cause failure”
  • Prioritized roadmap: “Here’s the 4-6 month sequence to close gaps before deployment”
  • Go/no-go recommendation: “You’re ready” vs. “Deploy now = 70% failure probability”

The Difference: General guidance vs. specific measurement + remediation roadmap


4. Predictive Capability

Hackett Group (2025):

  • Identifies four readiness levels
  • Describes characteristics of each level
  • Notes “few organizations” reach pioneering level
  • Suggests correlation between readiness and value
  • No quantified success prediction

Hansen Fit Score (1998-2025):

  • Quantified success probability based on readiness score:
    • HFS 0-40 (Without Phase 0): 20-40% success rate
    • HFS 41-70 (Partial readiness): 50-60% success rate
    • HFS 71-100 (Hansen Fit Score validated): 85-95% success rate
  • 27-year track record validating predictions
  • Pattern recognition: Same readiness gaps = same failure modes across technology waves
  • Predictive accuracy: Organizations scoring below 60 consistently fail; above 75 consistently succeed

The Difference: Descriptive categories vs. predictive probabilities with historical validation
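
A sketch of the predictive step follows, mapping a score to the probability bands quoted above. The post cites slightly different cut-offs in places, so the exact thresholds here are illustrative rather than the methodology’s published calibration.

```python
# Map a 0-100 readiness score to the probability bands quoted above.
# Thresholds mirror this section; they are illustrative, not the
# methodology's published calibration.

def success_band(hfs: float) -> tuple[str, str]:
    """Return (band label, predicted success rate) for a readiness score."""
    if hfs <= 40:
        return ("Without Phase 0", "20-40% success rate")
    if hfs <= 70:
        return ("Partial readiness", "50-60% success rate")
    return ("Hansen Fit Score validated", "85-95% success rate")

band, rate = success_band(47)
print(f"HFS 47/100 -> {band}: {rate}")
```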


5. Universality of Application

Hackett Group (2025):

  • AI-specific (focus on Gen-AI adoption in procurement)
  • Six pillars designed for AI readiness assessment
  • Procurement-centric application
  • Addresses current 2025 challenges

Hansen Fit Score (1998-2025):

  • Technology-agnostic (applies to any transformation initiative)
  • Successfully applied to:
    • E-procurement platforms (2000s)
    • ERP implementations (2000s-2010s)
    • Catalog-based systems (2010s)
    • Self-service BI platforms (2010s)
    • Digital transformation (2010s-2020s)
    • AI/Gen-AI deployments (2020s-present)
  • Pattern consistency: Same five dimensions apply regardless of technology
  • Universal validity: What changed was the technology label, not the readiness requirements

The Difference: AI-specific framework vs. universal transformation methodology


6. Measurement Precision

Hackett Group (2025): “Understanding your readiness to reimagine work can help you determine how to overcome stalled efforts, reset expectations and set a path toward performance improvement leveraging AI.”

Deliverable:

  • Framework understanding
  • Readiness level identification
  • Directional recommendations

Hansen Fit Score (1998-2025): “Here’s your readiness assessment with numerical precision across five dimensions and 23 characteristics. Your overall Hansen Fit Score is 58/100, indicating 40-50% success probability. Here are the eight critical gaps, prioritized remediation sequence, and estimated timeline to achieve 85%+ success probability.”

Deliverable:

  • Numerical readiness score (0-100 scale)
  • Dimension-by-dimension scores (5 dimensions)
  • Characteristic-level assessment (23 measurements)
  • Quantified gap analysis (what’s missing, severity scoring)
  • Prioritized remediation roadmap (sequence and timeline)
  • Success probability prediction (based on current vs. target score)
  • Go/no-go recommendation (deploy now vs. remediate first)

The Difference: Conceptual assessment vs. quantitative measurement with actionable roadmap

Hackett’s Most Revealing Statistic

The paper’s most telling data point appears on page 8:

“Fewer than 10% of procurement teams report transformative gains of 25% or greater in productivity, quality effectiveness, customer/employee experience and/or operating cost reduction today.”

Translation: 90% are achieving incremental results or failing entirely.

This statistic independently validates what Hansen Fit Score has documented for 27 years:

Organizations deploying without readiness assessment:

  • Achieve 10-40% of projected benefits
  • Experience 60-80% failure rates
  • Remain stuck in “incremental gains” regardless of technology sophistication

Organizations conducting Phase 0 assessment:

  • Achieve 85-95% of projected benefits
  • Report transformative gains (25%+ improvements)
  • Successfully scale from pilot to enterprise deployment

Hackett’s “fewer than 10%” achieving transformation validates the iceberg model:

  • 20% visible = Incremental technology gains (what most organizations achieve)
  • 80% hidden = Transformative potential (requires organizational readiness work to unlock)

The failure pattern isn’t new. What’s new is a major consulting firm quantifying it publicly.


7. Track Record & Validation

Hackett Group (2025):

  • Framework published October 2025
  • Based on 2025 Key Issues Study (64% expect AI transformation)
  • Success stories: McCormick (agricultural AI), Micron (40% ROI year one)
  • Industry statistics: 53% adoption, 2% exceed expectations

Hansen Fit Score (1998-2025):

  • 1998: RAM for Canada’s DND (51% → 97.3% delivery accuracy)
  • 2007: Virginia eVA validation (methodology over technology)
  • 2007: Predicted Mendocino/DUET failure (proved correct)
  • 2010s: Documented procurement platform disappointments
  • 2023: Retailer case (23% premium validates readiness gap cost)
  • 2025: Pharmaceutical client ($55K prevented $5M+ failure)
  • 27-year archive: Procurement Insights blog (2007-2025) with continuous pattern documentation

The Difference: Current industry snapshot vs. 27 years of longitudinal validation


The Critical Distinction: Diagnosis vs. Prediction

What Hackett’s Framework Provides

“You need to assess readiness across six dimensions before deploying AI. Here’s what those dimensions are, why they matter, and general characteristics of organizations at different maturity levels.”

Value:

  • ✅ Raises industry awareness of readiness importance
  • ✅ Educates market on critical dimensions
  • ✅ Validates readiness-first concept from authoritative source
  • ✅ Provides common language for discussing organizational maturity

Limitation:

  • Framework for understanding, not measurement tool
  • Qualitative assessment, not quantitative scoring
  • No predictive capability for success probability
  • General recommendations, not specific remediation roadmap

What Hansen Fit Score Provides

“Here’s your current readiness score: 47/100. Based on 27 years of pattern validation, you have a 30-40% success probability if you deploy now. Here are the 8 specific gaps causing this score, ranked by severity and interdependency. Here’s the 4-6 month roadmap to close those gaps and achieve 85-95% success probability. Then—and only then—should you select and deploy technology.”

Value:

  • ✅ Quantitative measurement across 23 characteristics
  • ✅ Numerical readiness score (comparable across organizations)
  • ✅ Predictive success probability (validated by 27-year track record)
  • ✅ Specific gap identification (not general categories)
  • ✅ Prioritized remediation sequence (actionable roadmap)
  • ✅ Go/no-go decision support (prevent predictable failures)

Application:

  • Phase 0 assessment before technology selection
  • Readiness validation before deployment
  • Success probability prediction
  • Investment decision support (proceed vs. remediate first)



Why The Industry Took 27 Years: Structural Barriers

The question isn’t “Why didn’t they listen to me?” The question is: “What structural barriers prevented evidence-based readiness assessment from becoming standard practice?”

1. Incentive Misalignment (Big Consulting)

Consulting firms get paid for:

  • Implementation hours (complexity = revenue)
  • Technology deployment (vendor partnerships)
  • Remediation work (fixing failed projects)

They don’t get paid for:

  • Preventing failures (reduces billable scope)
  • Saying “you’re not ready” (kills deals)
  • Phase 0 assessments (small engagement that eliminates larger ones)

Result: Tool-first thinking perpetuated despite evidence

2. Vendor Business Models

Technology vendors need:

  • Quarterly sales targets (pressure to close deals)
  • Implementation revenue (get product deployed)
  • Upgrade cycles (next version fixes current problems)

They resist:

  • Rigorous readiness screening (reduces qualified pipeline)
  • “Not ready” verdicts (kills near-term revenue)
  • Honest capability assessment (might reveal product-organization misfit)

Result: Capability marketing dominates readiness discussion

3. Executive Pressure

CEOs and Boards want:

  • Fast transformation (competitive urgency)
  • Visible action (“we’re deploying AI” = progress signal)
  • Benchmark comparisons (“competitor has this, we need it”)

They resist:

  • “Slow down for assessment” (perceived as analysis paralysis)
  • “We’re not ready yet” (uncomfortable admission)
  • Multi-year readiness roadmap (too patient for quarterly pressure)

Result: Deploy-first thinking despite failure statistics

4. Practitioner Isolation

Individual procurement leaders:

  • Read transformation research
  • Recognize failure patterns
  • Want readiness assessment

But they’re overruled by:

  • Executives believing vendor promises
  • Consultants with implementation bias
  • Technology selection committees (tool-first mandate)

Result: Knowledge exists but lacks authority to enforce

The Outcome

These structural barriers created a system where:

  • ✅ Evidence was available (RAM 1998, documented patterns)
  • ✅ Methodology was proven (97.3% delivery accuracy)
  • ✅ Practitioners recognized the pattern (blog readership)
  • ❌ Industry adoption stalled (misaligned incentives)
  • ❌ Failure rates persisted (60-80% across decades)
  • ❌ Organizations repeated mistakes (same pattern, different technology)

Until a major consulting firm validated the approach.


The Path Forward: Complementary, Not Competitive

How Hackett and Hansen Fit Together

Hackett Group’s Framework:

  • Raises awareness (industry education)
  • Provides common language (six pillars)
  • Validates readiness-first concept (authoritative voice)
  • Identifies what to assess (dimensional framework)

Hansen Fit Score Methodology:

  • Quantifies readiness (numerical scoring)
  • Predicts success probability (27-year validation)
  • Provides measurement tools (23 characteristics)
  • Delivers actionable roadmaps (gap remediation)

Together, they create a complete solution:

  1. Hackett convinces executives readiness matters (credibility + reach)
  2. Hansen Fit Score measures actual readiness (quantitative assessment)
  3. Organizations make informed decisions (deploy vs. remediate first)
  4. Success rates improve from 20-40% to 85-95% (validated outcome)

The Opportunity

For the first time in 27 years, we have:

  • ✅ Major consulting firm validation (Hackett)
  • ✅ Proven measurement methodology (Hansen Fit Score)
  • ✅ Quantified success rates (85-95% vs. 20-40%)
  • ✅ Industry readiness (AI urgency creates openness)

The question: Will we make Phase 0 readiness assessment standard practice, or will we spend another 27 years repeating the same failures with the next technology wave?


Three Steps Organizations Should Take Now

1. Assess (Before Technology Selection)

Not: “Which AI platform should we select?”

But: “Are we organizationally ready to leverage AI effectively?”

How Hansen Fit Score Operationalizes Hackett’s Framework

Hackett provides the diagnostic language. Their six pillars give executives a common vocabulary for discussing readiness across the organization.

Hansen Fit Score provides the operational measurement. Here’s how the frameworks connect in practice:

The operational difference:

Hackett asks: “Have you assessed your readiness across these six pillars?”

Hansen Fit Score answers: “Your Strategy & Leadership pillar scores 2.1/5.0. Here are the five specific gaps: (1) Executive sponsorship insufficient for multi-year commitment, (2) Risk tolerance below threshold for AI uncertainty, (3) Resource allocation not committed beyond pilot phase, (4) Strategic clarity missing on AI’s role in competitive advantage, (5) Cross-functional alignment fragmented. Remediation priority: High. Timeline: 2-3 months. Target score: 3.8/5.0 minimum for 70% success probability.”

Hackett identifies dimensions. Hansen Fit Score measures them, quantifies gaps, prioritizes remediation, and predicts success probability.
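
As a sketch of what that operational output can look like, here is a hypothetical gap record and a simple remediation-ordering rule. The field values mirror the Strategy & Leadership example above, but the data model and the ordering rule are assumptions of mine, not the methodology’s actual implementation.

```python
# Hypothetical gap record mirroring the Strategy & Leadership example above.
# The data model and ordering rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Gap:
    pillar: str                          # Hackett pillar or HFS dimension
    description: str
    current_score: float                 # observed 1-5 rating
    target_score: float                  # minimum rating for target probability
    severity: str                        # "Critical", "High", "Medium-High"
    remediation_months: tuple[int, int]  # estimated (min, max) months to close

gaps = [
    Gap("Strategy & Leadership", "Executive sponsorship insufficient",
        2.1, 3.8, "Critical", (2, 3)),
    Gap("Strategy & Leadership", "Cross-functional alignment fragmented",
        2.1, 3.8, "High", (2, 3)),
]

# Remediate the most severe, fastest-to-close gaps first.
rank = {"Critical": 0, "High": 1, "Medium-High": 2}
for g in sorted(gaps, key=lambda g: (rank[g.severity], g.remediation_months)):
    print(f"[{g.severity}] {g.description}: {g.current_score} -> "
          f"{g.target_score} in {g.remediation_months[0]}-{g.remediation_months[1]} months")
```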


Hansen Fit Score Assessment:

  • Quantitative measurement across five dimensions
  • 23-characteristic evaluation
  • Numerical readiness score (0-100)
  • Success probability prediction
  • Gap identification and prioritization

Timeline: 2-4 weeks for comprehensive Phase 0 assessment

Investment: A fraction of implementation cost; prevents millions in failure costs

2. Remediate (Close Readiness Gaps)

Based on assessment findings:

  • Behavioral Alignment gaps: Change management, stakeholder engagement, adoption capacity building
  • Process Maturity gaps: Documentation, standardization, governance frameworks
  • Data Intelligence gaps: Quality improvement, governance establishment, accessibility enhancement
  • Technology Architecture gaps: Integration planning, infrastructure readiness, system compatibility
  • Execution Capacity gaps: Resource allocation, project management capability, sustained support

Timeline: 4-8 months typical for readiness remediation

Benefit: Moves success probability from 30-40% to 85-95%

3. Deploy (With Validated Readiness)

Only after achieving readiness threshold:

  • Hansen Fit Score 70+ (minimum)
  • Critical gaps closed (prioritized sequence)
  • Organizational alignment confirmed (behavioral readiness)
  • Infrastructure prepared (technical readiness)
  • Success probability validated (85-95% range)

Then—and only then:

  • Select appropriate technology (fit to readiness level)
  • Deploy with confidence (validated capability)
  • Monitor against predictions (continuous validation)
  • Achieve projected benefits (85-95% success rate)

Applying This In Practice: A Real Scenario

Scenario: A mid-market organization wants to deploy an AI-powered planning platform (similar to o9 Solutions, SAP IBP, or other enterprise AI).

The Hackett Approach

Assessment Process:

  • Evaluate organization across six pillars
  • Identify current readiness level: “Adopting”
  • Characteristics observed:
    • Using embedded AI in Microsoft/SAP for incremental gains
    • Lacking ideation skills to explore AI’s full potential
    • Missing executive alignment on AI strategy
    • No center of excellence established

Recommendations:

  • Improve maturity across all six pillars
  • Establish center of excellence to drive innovation
  • Upskill talent in technical, data, and business analysis
  • Promote continuous learning culture
  • Scale enabling technologies

Timeline: Unspecified (implied ongoing improvement)

Go/no-go decision: Implied “yes, proceed while improving” – no explicit threshold prevents deployment


The Hansen Fit Score Approach

Quantitative Assessment Results:

Overall Hansen Fit Score: 47/100

Dimensional Breakdown:

  • Behavioral Alignment: 42/100 (2.1/5.0 average)
  • Process Maturity: 44/100 (2.2/5.0 average)
  • Data Intelligence: 46/100 (2.3/5.0 average)
  • Technology Architecture: 48/100 (2.4/5.0 average)
  • Execution Capacity: 38/100 (1.9/5.0 average)

Readiness Level Translation:

  • Hackett category: “Adopting”
  • HFS probability band: 30-40% success rate
  • Risk assessment: HIGH FAILURE PROBABILITY

Eight Critical Gaps Identified (Prioritized by Severity):

  1. Executive sponsorship insufficient (Behavioral Alignment: 2.1/5)
    • Current: Single champion without C-suite mandate
    • Required: Multi-year executive commitment with allocated budget
    • Gap severity: Critical
    • Remediation: 2-3 months
  2. Change management capacity lacking (Behavioral Alignment: 1.8/5)
    • Current: No dedicated change resources
    • Required: Change management team with adoption tracking
    • Gap severity: Critical
    • Remediation: 3-4 months
  3. Data quality below AI-ready threshold (Data Intelligence: 2.3/5)
    • Current: 40% data accuracy, inconsistent governance
    • Required: 85%+ accuracy, documented governance frameworks
    • Gap severity: High
    • Remediation: 4-6 months
  4. Process documentation incomplete (Process Maturity: 2.0/5)
    • Current: Tribal knowledge, inconsistent workflows
    • Required: Documented, standardized processes
    • Gap severity: High
    • Remediation: 3-5 months
  5. Integration architecture unclear (Technology Architecture: 2.4/5)
    • Current: Point-to-point connections, technical debt
    • Required: Strategic integration roadmap, API framework
    • Gap severity: Medium-High
    • Remediation: 2-4 months
  6. Resource allocation not committed (Execution Capacity: 1.9/5)
    • Current: “We’ll figure it out during implementation”
    • Required: Dedicated resources identified and allocated
    • Gap severity: Critical
    • Remediation: 1-2 months
  7. Governance frameworks missing (Process Maturity: 2.2/5)
    • Current: Ad-hoc decision-making
    • Required: AI governance, ethical guidelines, decision rights
    • Gap severity: Medium-High
    • Remediation: 2-3 months
  8. Stakeholder alignment fragmented (Behavioral Alignment: 2.0/5)
    • Current: IT wants AI, Finance skeptical, Operations unaware
    • Required: Cross-functional alignment with shared goals
    • Gap severity: High
    • Remediation: 2-3 months

Hansen Fit Score Recommendation:

DO NOT PROCEED with AI platform deployment.

Rationale: Current HFS 47/100 indicates 30-40% success probability. Deploying now creates:

  • 60-70% probability of implementation failure
  • High risk of cost overrun (3-5x initial budget typical)
  • Low probability of achieving projected ROI
  • Organizational credibility damage if project fails
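
The go/no-go gate the scenario applies can be expressed as a simple function. The 70-point threshold and the failure-probability language come from this post; the function shape itself is a sketch.

```python
# Sketch of the go/no-go gate applied in this scenario. The 70-point
# threshold comes from the post; the function itself is illustrative.

def deployment_recommendation(hfs: float, critical_gaps_open: int) -> str:
    if hfs >= 70 and critical_gaps_open == 0:
        return "PROCEED: readiness validated"
    return (f"DO NOT PROCEED: HFS {hfs:.0f}/100 with "
            f"{critical_gaps_open} critical gap(s) open; remediate first")

print(deployment_recommendation(47, 3))   # the scenario above: three critical gaps
print(deployment_recommendation(73, 0))   # after remediation to target HFS
```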

Remediation Roadmap (4-6 months):

Phase 1 (Months 1-2): Critical Foundations

  • Secure executive sponsorship and budget commitment
  • Establish dedicated change management resources
  • Identify and allocate implementation team resources
  • Begin stakeholder alignment process

Phase 2 (Months 2-4): Process & Data

  • Document and standardize core processes
  • Implement data quality improvement program
  • Establish governance frameworks
  • Develop integration architecture plan

Phase 3 (Months 4-6): Readiness Validation

  • Complete stakeholder alignment
  • Finalize data governance and quality metrics
  • Validate change management readiness
  • Re-assess Hansen Fit Score

Target HFS: 72-75/100

  • Success probability: 70-80%
  • Readiness level: “Innovating”
  • Recommendation: PROCEED with deployment

Then deploy with confidence, not hope.


Both approaches identify the problem: the organization isn’t ready.

Only Hansen Fit Score quantifies:

  • How unready (HFS 47/100)
  • Why specifically (8 gaps with severity scores)
  • What to fix first (prioritized by impact)
  • How long it takes (4-6 months with milestones)
  • When to proceed (after HFS 70+, not before)
  • What probability of success (30-40% → 70-80%)

This is the difference between diagnostic framework and predictive methodology.


What Makes 2025 Different: The Convergence Moment

For 27 years, readiness-first assessment existed as practitioner knowledge without mainstream validation. RAM 1998 proved it worked (51% → 97.3% delivery accuracy). Virginia eVA demonstrated it in 2007 (methodology over technology). Hansen Fit Score quantified it continuously (2015-2025). But the consulting industry continued deploying technology first, asking readiness questions later—if at all.

The pattern was documented. The evidence was compelling. The methodology was available.

Yet structural barriers prevented adoption: consulting incentives favored implementation hours over prevention, vendor pressures demanded quarterly sales, executive impatience resisted “slow down for assessment” recommendations.

The Three Factors That Changed Everything

In October 2025, three factors converged for the first time:

1. Major Consulting Firm Validation

The Hackett Group—a respected, mainstream advisory firm—published research explicitly validating organizational readiness as the critical success factor. Their six-pillar framework independently arrives at the same dimensions Hansen Fit Score has measured since 1998.

Significance: This isn’t a lone practitioner saying “readiness matters.” This is an industry authority with global reach telling clients: “AI is failing (2% exceed expectations) because readiness is missing.”

2. Undeniable Failure Statistics

Hackett’s data removes any remaining doubt:

  • Only 2% of AI adopters exceed expectations
  • More than 50% believe AI has fallen short
  • Fewer than 10% achieve transformative gains (25%+)
  • 90% stuck in incremental results or failure
  • AI readiness rated lowest maturity among ALL 2025 initiatives

Significance: The 60-80% failure rate isn’t anecdotal anymore. It’s quantified, published, and attributed directly to readiness gaps by a major consulting firm.

3. Quantified Methodology Availability

Hansen Fit Score provides what Hackett’s framework doesn’t: measurement precision, predictive capability, and 27-year validation.

Significance: Organizations can no longer claim:

  • ❌ “Readiness assessment is unproven” (27-year track record + Hackett validation)
  • ❌ “Readiness can’t be measured quantitatively” (Hansen Fit Score does it with 23 characteristics)
  • ❌ “We don’t know what readiness threshold ensures success” (HFS 70+ = 70-80% probability, validated longitudinally)
  • ❌ “Readiness assessment isn’t necessary” (2% success rate without it, 85-95% with it)

Why This Moment Matters

Before October 2025:

  • Readiness-first: Practitioner knowledge, limited reach
  • Tool-first: Industry standard, backed by consulting firms and vendors
  • Failure rates: 60-80%, accepted as “normal”
  • Adoption barrier: “Everyone else deploys first, asks questions later”

After October 2025:

  • Readiness-first: Validated by major consulting firm + 27-year methodology
  • Tool-first: Exposed as cause of 2% success rate
  • Failure rates: 60-80%, now publicly attributed to readiness gaps
  • Adoption opportunity: Evidence, validation, and methodology now exist simultaneously

For 27 years, the industry could ignore readiness assessment because:

  • “No one else does it” (herd mentality)
  • “Consultants don’t recommend it” (incentive misalignment)
  • “We need to move fast” (executive pressure)

After Hackett’s October 2025 validation, none of these excuses remain valid.

The evidence is published. The methodology is available. The failure cost is quantified.

The only remaining variable is organizational will.

The Choice

The evidence, the validation, and the methodology now exist simultaneously for the first time in 27 years.

Organizations can either:

Continue the pattern:

  • Deploy technology hoping for transformation
  • Skip readiness assessment to “move fast”
  • Accept 60-80% failure rates as inevitable
  • Repeat the cycle with the next technology wave

Or adopt the evidence:

  • Assess readiness before technology selection
  • Remediate gaps proactively (4-6 months)
  • Achieve 85-95% success rates (validated)
  • Break the 27-year failure cycle permanently

This is the inflection point.

Not because the methodology is new (it’s 27 years old).

Not because the evidence is stronger (it’s been consistent since 1998).

Because a major consulting firm finally validated what the evidence has shown all along—and quantified the cost of ignoring it.

This is one of those “shots heard round the world” moments for organizational transformation.

For nearly three decades, readiness-first assessment was practitioner knowledge—documented, proven, but lacking mainstream endorsement. Now, The Hackett Group’s October 2025 research provides that endorsement, quantifies the failure cost of ignoring it, and removes every excuse organizations had for deploying technology without readiness validation.

The question is no longer “Does readiness assessment work?”

The question is: “Why would any organization skip it?”


The Question For The Industry

Will We Learn From 27 Years of Evidence?

Current Reality (2025):

  • 60-80% implementation failure rate persists
  • Only 2% of AI adopters exceed expectations (Hackett)
  • More than 50% believe AI has fallen short
  • $3.7 trillion wasted annually on failed technology
  • Industry keeps repeating the same pattern

Alternate Reality (If Readiness Assessment Becomes Standard):

  • Organizations assess before deploying
  • Readiness gaps identified and closed proactively
  • Success rates improve from 20-40% to 85-95%
  • Trillions saved in prevented failures
  • AI delivers promised transformation

The difference: Making Phase 0 organizational readiness assessment as standard as business case development.

The Evidence Is No Longer Debatable

1998: RAM achieves 97.3% delivery accuracy through readiness-first approach

2007: Virginia eVA succeeds because “effectiveness has little to do with technology”

2007: Mendocino/DUET fails because familiar interface ≠ organizational readiness

2010s: Procurement platforms disappoint at 60-80% failure rate

2023: Retailer pays 23% premium despite “perfect alignment”

2025: Hackett Group validates six dimensions of readiness

2025: o9 Solutions’ own case studies reveal success came from organizational work, not AI sophistication

The pattern has been consistent, documented, and independently validated.

The Choice

Option A: Continue deploying capability first, hoping for transformation, accepting 60-80% failure rates as “normal”

Option B: Adopt evidence-based readiness assessment, remediate gaps proactively, achieve 85-95% success rates

The methodology exists. The validation is complete. The only question is organizational will.


Conclusion: Getting It Right, Together

The exciting development isn’t “I was right 27 years ago.”

The exciting development is that major consulting firms are now validating what the evidence has consistently shown, creating an opportunity to finally make readiness assessment standard practice.

What I’ve Learned Over 27 Years

Technology has evolved dramatically:

  • 1998: Client-server e-procurement
  • 2007: SaaS platforms and Office integration
  • 2015: Cloud-based procurement suites
  • 2025: AI-powered planning and Gen-AI agents

But organizational readiness requirements haven’t changed:

  • Behavioral alignment still determines adoption
  • Process maturity still determines sustainability
  • Data intelligence still determines insight quality
  • Technology architecture still determines integration
  • Execution capacity still determines scale

The five dimensions of Hansen Fit Score apply regardless of technology buzzword.

The 27-Year Lead

The difference between Hackett’s 2025 framework and Hansen Fit Score isn’t superiority—it’s refinement through continuous validation.

What I learned from:

  • RAM 1998 (government-funded research)
  • Virginia eVA 2007 (methodology over technology)
  • Mendocino failure 2007 (interface ≠ readiness)
  • Procurement platform struggles (2010s pattern consistency)
  • Retailer premium 2023 (alignment ≠ readiness)
  • o9 Solutions 2025 (attribution problem)

…is that measurement matters.

Knowing you need “behavioral alignment” is valuable.

Measuring whether you have sufficient behavioral alignment to achieve 85% success probability is predictive.

The Path Forward

Hackett Group has:

  • Platform and reach (global consulting presence)
  • Industry credibility (authoritative validation)
  • Executive access (C-suite relationships)
  • Framework clarity (six well-articulated pillars)

Hansen Fit Score has:

  • Measurement methodology (quantitative assessment)
  • 27-year validation (longitudinal track record)
  • Predictive capability (success probability)
  • Proven results (85-95% success rate)

Together, we could finally:

  • Make Phase 0 assessment standard practice
  • Prevent the next 27 years of repeated failures
  • Achieve the 85-95% success rates the evidence promises
  • Transform “AI disappointment” into “AI delivery”

The Final Question

Will 2025 be the year the industry finally adopts what the evidence has shown for 27 years?

Or will we wait another decade—until the next technology wave arrives—to learn what we already know?

Technology doesn’t fail. Organizations without readiness do.

The evidence is no longer debatable.
The methodology is proven and available.
The validation is complete.

The question is: When will organizational readiness assessment become as standard as business case development?

Perhaps the answer—finally—is now.


About Hansen Fit Score

The Hansen Fit Score methodology evolved from RAM 1998 (government-funded research for Canada’s Department of National Defence) through 27 years of transformation pattern documentation. It provides quantitative organizational readiness assessment across five dimensions:

  1. Behavioral Alignment – Stakeholder readiness, adoption capacity, change management capability
  2. Process Maturity – Documentation, standardization, governance frameworks
  3. Data Intelligence – Quality, accessibility, governance, analytical capability
  4. Technology Architecture – Integration readiness, infrastructure capability, system compatibility
  5. Execution Capacity – Project management, resource allocation, sustained support

Organizations scoring 70+ on Hansen Fit Score achieve 85-95% transformation success rates.

Organizations scoring below 60 experience 60-80% failure rates.

The 27-year track record validates the predictive capability across technology waves from e-procurement to AI.


Resources

Hansen Fit Score Assessment: Quantitative organizational readiness measurement before technology deployment

RAM 2025 Methodology: Multi-model AI collaboration architecture for transformation validation

October Diaries: Conversational AI fluency methodology for practitioner-AI partnership

Procurement Insights Archive (2007-2025): 18 years documenting transformation patterns, implementation failures, and readiness-first validation

Hackett Group Research: “From Risk to Readiness: The Executive Playbook for Scaling AI in Procurement” (October 2025)



“With a focus on getting it right rather than being right, I’ve challenged the readiness-first thesis for 27 years. The Hackett Group’s 2025 validation suggests the industry is finally ready to adopt what the evidence has consistently shown: Technology capability is necessary but not sufficient. Organizational readiness determines transformation success.”

— Jon Hansen, November 2025


Posted in: Commentary