Why a Dollar Invested in Phase 0™ Outperforms a Dollar Invested Anywhere Else in Your AI Strategy

Posted on April 7, 2026

In 2025, global enterprises invested $684 billion in AI initiatives. By year-end, more than $547 billion of that investment had failed to deliver intended business value.

That is not a technology problem. The technology worked.

It is a pre-commitment problem — and the data now makes that distinction impossible to ignore.

The Failure Cost the Industry Is Not Measuring

The consulting market is under real pressure in 2026. The Big Four are posting modest single-digit growth after years of double-digit pandemic-era expansion. McKinsey has cut from 45,000 to 40,000 staff and is planning 4,000 more cuts. The reason is structural: clients are no longer willing to pay for advisory work that cannot demonstrate a measurable connection between the engagement and the outcome.

That pressure is exactly right — and the data shows where the measurement gap actually lives.

According to a 2026 synthesis of RAND Corporation, MIT Sloan, McKinsey, Deloitte, and Gartner data tracking 2,400+ enterprise AI initiatives:

  • Abandoned AI projects cost an average of $4.2M per initiative
  • Completed-but-failed projects cost $6.8M while delivering only $1.9M in value — an ROI of -72%
  • Large enterprises lost an average of $7.2M per failed initiative in 2025
  • 42% of organizations abandoned at least one AI initiative in 2025, up from 17% in 2024
  • 95% of GenAI pilots failed to scale (MIT, 2025)

The total addressable waste in 2025 alone: $547 billion.
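The headline numbers above are internally consistent, and the arithmetic is easy to verify. A quick sketch in Python, using only the figures quoted in this post:

```python
# Figures quoted in the post (2025, USD billions)
total_invested_b = 684   # total enterprise AI investment
failed_value_b = 547     # investment that failed to deliver intended value

# Share of total spend that failed to deliver value
failure_share = failed_value_b / total_invested_b
print(f"Failed share of spend: {failure_share:.0%}")  # prints "Failed share of spend: 80%"

# Completed-but-failed projects: $6.8M cost, $1.9M delivered value
cost_m, value_m = 6.8, 1.9
roi = (value_m - cost_m) / cost_m
print(f"ROI on completed-but-failed projects: {roi:.0%}")  # prints "ROI on completed-but-failed projects: -72%"
```

The ~80% failed share of spend matches the 80%+ failure rate cited later in this post, and the -72% ROI matches the completed-but-failed figure above.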

What Actually Determines Whether a Project Succeeds

This is where the data becomes strategically important — because the failure driver is documented, not speculative.

The same 2026 research synthesis reveals the success rate differential when pre-commitment conditions are in place:

Condition                                | Success Rate WITH | Success Rate WITHOUT
Clear pre-approval metrics               | 54%               | 12%
Formal data readiness assessment         | 47%               | 14%
Sustained C-suite sponsorship            | 68%               | 11%
Treating AI as transformation, not IT    | 61%               | 18%

Every one of those conditions is a Phase 0™ diagnostic variable. Every one is assessed before the commitment is made — not during implementation, not after go-live.
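One way to read those differentials is in expected-value terms. A minimal sketch, assuming (purely for illustration; these multiples are not figures from the post) a $5M initiative that returns 3x its cost on success and is written off entirely on failure:

```python
def expected_return(p_success, investment_m, payoff_multiple=3.0):
    """Expected net return ($M) for a single initiative.

    Hypothetical model: the project returns payoff_multiple x cost on
    success and the full investment is written off on failure.
    """
    win = investment_m * (payoff_multiple - 1)   # net gain on success
    loss = -investment_m                          # write-off on failure
    return p_success * win + (1 - p_success) * loss

# Formal data readiness assessment: 47% success with, 14% without
with_assessment = expected_return(0.47, 5.0)
without_assessment = expected_return(0.14, 5.0)
print(f"With assessment:    {with_assessment:+.2f} $M")   # prints "+2.05 $M"
print(f"Without assessment: {without_assessment:+.2f} $M") # prints "-2.90 $M"
```

Under these illustrative assumptions, the same $5M commitment flips from a negative to a positive expected return purely on the strength of the pre-commitment condition, which is the structural point of the table.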

The 27-year Procurement Insights archive established this pattern independently in 1998, before any of these studies existed. The DND proof case moved delivery performance from 51% to 97.3% in 90 days — not by deploying better technology, but by understanding the real operating system the technology had to function within before a single line of code was written.

What Gartner, McKinsey, and the Big Four Are Selling — and What They Are Not

The incumbent advisory ecosystem measures vendor capability. It does not measure whether your organization can successfully absorb that capability. Gartner’s Magic Quadrant tells you what the platform can do. McKinsey’s transformation framework tells you how to run the implementation. Neither tells you whether the pre-commitment conditions for success exist before you sign the contract.

That structural gap has persisted across seven technology cycles. It is why the failure rate has held at 55-80% since the late 1990s — not because the technology has failed to improve, but because the diagnostic that precedes the technology decision has never been built into the advisory model.

Phase 0™ is that diagnostic. It operates in the one window where outcomes are still changeable: before the commitment is made.

The ROI Comparison

The numbers are not theoretical. They are drawn from documented outcomes.

Industry average without Phase 0™:

  • Average AI initiative investment: $1M–$7M+
  • Failure rate: 80%+
  • Average cost of a failed initiative: $4.2M–$7.2M
  • ROI on failed projects: -72%

With Phase 0™ pre-commitment conditions in place:

  • Projects with formal readiness assessment: 47% success rate (versus 14% without)
  • Projects with clear pre-approval metrics: 54% success rate (versus 12% without)
  • DND documented outcome: 97.3% delivery accuracy (from 51% baseline, sustained 7 years)
  • Virginia eVA documented outcome: $338M in savings over 24 years

The Phase 0™ diagnostic investment:

  • 30-minute readiness conversation: complimentary
  • Full Phase 0™ diagnostic: available at hpt@hansenprocurement.com

The question every C-Suite leader approving an AI initiative in 2026 should be asking is not which platform to select. It is whether the pre-commitment conditions for success have been assessed before the selection is made.

A $4.2M failed initiative is not a technology problem. It is a readiness problem that Phase 0™ was designed to prevent.

Which Sectors Are Most Exposed — and Where Phase 0™ Delivers the Greatest Return

The failure data does not distribute evenly across industries. It concentrates where organizational conditions are most complex and least mapped before deployment begins.

The 2026 sector breakdown from Pertama Partners, synthesizing RAND, MIT Sloan, McKinsey, Deloitte, and Gartner data across 2,400+ enterprise AI initiatives, is unambiguous:

  • Financial Services: 82.1% failure rate — average failed project cost $11.3M. Regulatory complexity, bias risk, and decision ownership gaps across AI-driven compliance and trading systems are the primary drivers. Every one of those is a pre-commitment diagnostic variable.
  • Healthcare and Life Sciences: 78.9% failure rate — clinical validation resistance, EHR integration complexity, and physician behavioral adoption patterns that autonomous systems cannot map before deployment begins.
  • Manufacturing and Supply Chain: 76.4% failure rate — OT/IT integration gaps and IoT data quality issues create hidden variables that the 1998 DND proof case identified and resolved. The archive documented this problem 27 years before it became an enterprise AI crisis.
  • Public Sector and Defence: ~75% failure rate — siloed departmental processes, procurement fragmentation, and incentive misalignment. The DND case remains the only documented instance of this problem being solved systematically before deployment rather than after.
  • Retail and E-Commerce: 73.8% failure rate — demand volatility and supply chain complexity require behavioral pattern mapping that autonomous systems cannot self-generate.
  • Professional Services: 68.7% failure rate — knowledge worker resistance and ROI complexity. The consulting sector’s own AI failure rate is the most ironic data point in this entire analysis: the firms being paid to solve this problem are failing at roughly the same rate as their clients.

The common thread across every sector: 77% of failures are organizational, not technical. And every sector’s primary failure driver maps directly to a Phase 0™ pre-commitment diagnostic variable.

The diagnostic question does not change across sectors. What changes is the window available to act on the answer — and in every sector represented above, that window is before the commitment is made.


The Case Is No Longer Theoretical

Two posts published this week make the argument in plain sight.

When MIT, BCG, McKinsey, and a 27-year archive independently arrive at the same structural finding without coordination, that is not a coincidence. It is a convergence — and it is documented: → When MIT, BCG, McKinsey, and a 27-Year Archive Arrive at the Same Finding Without Coordination, That Is Not a Coincidence

When the former Director of the Stanford HAI AI Index — whose career spans the World Bank, the IMF, the Bank for International Settlements, the Brookings Institution, and the OECD’s Network of Experts on AI — reaches out to you, not the other way around, and says within twenty minutes “you have a better idea of what I’m doing than I do,” that is not a marketing claim. It is a documented, timestamped, on-record validation: → What a Former Stanford HAI AI Index Leader Had to Say About Hansen’s Models

Here is what those two posts represent in the context of this ROI argument.

The Hansen Models™ team has been living this reality for 43 years. We implemented it successfully 27 years ago at the Department of National Defence, moving delivery performance from 51% to 97.3% in 90 days, sustained for seven consecutive years, before any of today’s AI platforms existed. We have spent 18 years meticulously assessing and documenting the pattern across 3,300+ independently produced archive entries, with zero vendor sponsorships and zero paid analyst relationships.

We would put our team’s experience, expertise, and documented track record up against anyone else in this industry.

The incumbent advisory ecosystem has had decades and billions of dollars to solve the pre-commitment problem. The failure rate has held at 55-80% across every technology cycle. The excuses — complexity, pace of change, organizational resistance — have remained consistent. So have the failures.

The time for excuses is over. The time for results is now.

The Advisory Model Is Blockbuster. Hansen Models™ Is Netflix. Here Is the Data.

There is a reason the incumbent firms are cutting headcount while simultaneously trying to justify their fees. The business model that built their infrastructure is the same business model their clients are no longer willing to sustain.

Blockbuster had more stores, more staff, and more brand recognition than Netflix in 2004. It also had a cost structure built around an infrastructure that customers were about to stop paying for. The parallel is not rhetorical. It is structural.

Consider what has happened to the firms that have spent decades telling organizations how to manage technology transformation:

McKinsey built from 17,000 employees in 2012 to 45,000 by 2022 — then cut to 40,000, with 4,000 more planned. The biggest headcount loss in the firm’s history. McKinsey’s own 2026 AI survey reports that 73% of AI investments fail to deliver ROI.

Deloitte cut 800 consulting roles in a single UK round in September 2024, with further cuts that October and into 2025. The firm cited sluggish demand for large-scale consultancy projects.

PwC eliminated 3,300 roles between September 2024 and May 2025 — the first major reduction since 2009. KPMG cut 330 US audit positions in November 2024 due to what it called “unusually low voluntary turnover.”

The Hackett Group — whose headcount peaked at 1,685 — is now at 1,503. A 10.8% decline in 18 months, documented in SEC filings.

Gartner cut its annual revenue forecast by $80-100M in August 2025. Net income fell 42%. Free cash flow dropped 15%. They are divesting their Digital Markets business.

This is not a market correction. It is a structural realignment. Clients are no longer paying for intelligence. They are paying to support infrastructures: thousands of junior analysts whose work is being automated, global office networks, conference passes at $5,000–$8,000 each, and pyramid billing models that require junior hours to subsidize senior time.

Hansen Models™ is a different model entirely. Designed for outcome, not scale. Every dollar goes to diagnostic precision — not infrastructure maintenance. This is not a cheaper version of what the incumbent firms offer. It is a different category — the pre-commitment diagnostic that the incumbent advisory model was never designed to provide and cannot provide without dismantling the commercial architecture that funds it.

The incumbent firms were not designed to prevent failure. They were designed to inform decisions after the commitment architecture is already in motion. Phase 0™ is the only instrument that operates in the window before that motion begins — when the outcome is still changeable.

The Hansen Models™ advisory team brings what no incumbent firm can manufacture — former CPOs, Fortune 500 senior procurement and supply chain leaders, and practitioners who have sat in the seat, made the decisions, and lived with the outcomes. This is not analysis from the outside looking in.

Meet the Senior Members of the Hansen Models™ Advisory Team

Blockbuster did not fail because it lacked scale. It failed because its model was built for a reality that no longer existed.

The same pattern is now visible in advisory. More analysts. More offices. More infrastructure. And the same 55–80% failure rate across seven technology cycles.

This is not a temporary correction. It is a structural shift.

The question is no longer which firm you hire. The question is whether you are operating with a model that can produce a successful outcome at all.

The time for interpretation is over. The time for results is now.


The Procurement Insights archive contains 3,300+ independently produced documents spanning 18 years. Zero vendor sponsorships. Zero paid analyst relationships. RAM 2025™ multimodel validation confirms findings across five independent models.

Hansen Models™ is not asking for a seat at the table. The archive built the table. The work validated it. The market is now arriving at the same address.

If your organization is navigating an AI commitment in 2026 and you want the diagnostic that the incumbent advisory ecosystem was never designed to provide — the conversation starts here:

Book a 30-Minute Readiness Conversation

-30-

Posted in: Commentary