The Pattern That Repeats Every Decade: Why Technology Keeps Failing at 70-80% Rates

Posted on November 10, 2025

Every decade, the technology changes. The failure rate doesn’t.

  • 2001-2005: 75-85% of e-procurement initiatives failed to achieve promised results
  • 2007-2015: Cloud migrations and digital transformation struggled
  • 2015-2020: Big Data and analytics initiatives disappointed
  • 2025: AI governance failures are hitting 70-80% rates

Same pattern. Different technology. Why?

A Maserati Without a Driver’s License

Recently, a colleague asked about Coupa’s challenges in the procurement technology market. My response: “It has never been about the technology.”

Think of it this way: A Maserati might be a fantastic car, but if you can’t master driving it, you’re going to crash. The same is true for AI, for procurement transformation, for digital initiatives of any kind.

The technology will work. The question is: are you ready?

What Twenty Years of Evidence Shows

In the early-to-mid 2000s, I wrote a paper titled “Technology’s Diminishing Role in an Emerging Process-Driven World.” The idea seemed counterintuitive—we were in the midst of the e-procurement revolution—but what I observed over subsequent years revealed a consistent pattern.

Between 2001 and 2005, industry research documented that 75 to 85% of all e-procurement initiatives failed to achieve the promised results in terms of savings and efficiency gains. Organizations invested millions in sophisticated platforms from leading vendors, yet the majority struggled to realize value.

The failures weren’t due to immature technology. The platforms worked exactly as designed.

The failures occurred because organizations deployed technology without first establishing the foundational readiness required to use it effectively.

The Virginia eVA Case: When Methodology Trumps Technology

In 2007, I documented a striking example that proved this principle in practice: Virginia’s eVA system.

The Commonwealth of Virginia’s web-based procurement solution was built on Ariba’s technology platform—the same technology that many other organizations had struggled to implement successfully. Yet Virginia achieved remarkable results while others using identical technology failed.

From my 2007 analysis:

“eVA as defined by the Commonwealth’s web site is a ‘web-based procurement solution that supports Virginia’s decentralized purchasing environment.’ Unfortunately, the word solution is an overused misnomer that usually refers to a particular type of technology. As you read the rest of this post, you will find that eVA and its growing success have very little to do with technology – in this case Ariba, and more to do with methodology.”

Same technology. Dramatically different outcomes. The difference wasn’t the software—it was the systematic approach to readiness and implementation.

The Three-Step Process: Technology Comes Last

Through research partly funded by the Government of Canada’s Scientific Research and Experimental Development (SR&ED) program, we developed and validated a systematic approach that consistently produced 75-85% success rates:

Step 1: Commodity Characteristic Analysis

Understanding your spend patterns, supplier relationships, and transaction behaviors before selecting technology. This analysis reveals what processes you actually need versus what vendors want to sell you.

Step 2: Process Alignment

Refining and optimizing your workflows to match organizational needs and stakeholder behaviors. This step addresses the human and organizational factors that determine whether technology will be adopted effectively.

Step 3: Technology Introduction

Deploying platforms after Steps 1 and 2 are successfully completed. The technology becomes the accelerator of proven processes rather than the definer of unproven ones.

The critical insight: technology was introduced only after the Commodity Characteristic Analysis and Process Alignment steps were successfully completed. Deployment was the final step in the process, never the first.
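To make the sequencing concrete, here is a minimal sketch in Python. It is illustrative only: the phase names mirror the three steps above, and the single gating rule, blocking deployment until both readiness phases are complete, is the whole point.

```python
from enum import Enum, auto

class Phase(Enum):
    COMMODITY_ANALYSIS = auto()       # Step 1: spend patterns, suppliers, transactions
    PROCESS_ALIGNMENT = auto()        # Step 2: workflows matched to the organization
    TECHNOLOGY_INTRODUCTION = auto()  # Step 3: deploy only after Steps 1 and 2

def can_deploy_technology(completed: set[Phase]) -> bool:
    """Technology is gated on the two readiness phases, never the reverse."""
    prerequisites = {Phase.COMMODITY_ANALYSIS, Phase.PROCESS_ALIGNMENT}
    return prerequisites <= completed

# An organization that skipped process alignment cannot proceed:
assert not can_deploy_technology({Phase.COMMODITY_ANALYSIS})
assert can_deploy_technology({Phase.COMMODITY_ANALYSIS, Phase.PROCESS_ALIGNMENT})
```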

Production-Validated Results

Following this three-step approach, a major public sector organization (Canada’s Department of National Defence) realized:

  • 23% cost of goods savings annually over several years
  • Reduction from 23 buyers to 3 FTEs managing the contract (an 87% staffing reduction)
  • Delivery accuracy improved from 51% to 97.3% (+46.3 percentage points)

These weren’t theoretical projections. These were production results achieved because readiness preceded deployment.
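For readers who want to trace those figures, the arithmetic is straightforward. A quick check using only the numbers in the bullets above:

```python
# Staffing: 23 buyers reduced to 3 FTEs managing the contract.
buyers_before, buyers_after = 23, 3
reduction = (buyers_before - buyers_after) / buyers_before
print(f"Staffing reduction: {reduction:.0%}")  # Staffing reduction: 87%

# Delivery accuracy: 51% improved to 97.3%.
accuracy_before, accuracy_after = 51.0, 97.3
gain = accuracy_after - accuracy_before
print(f"Accuracy gain: +{gain:.1f} percentage points")  # Accuracy gain: +46.3 percentage points
```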

The Pattern Persists: 2025 AI Governance Crisis

Fast forward to November 2025. AI governance expert Luiza Jarovsky, whose newsletter reaches 85,000+ subscribers, is documenting critical failures in AI deployment:

  • Seven lawsuits against OpenAI over chatbot-assisted harm
  • Internal governance failures exposed through depositions
  • Regulatory fragmentation across jurisdictions
  • What she calls “usage policy theater” replacing systematic governance

The failure patterns she documents—manipulative autonomous systems, inadequate internal controls, unenforceable policies—mirror exactly what we observed in e-procurement failures two decades ago.

The technology changed. The failure pattern didn’t.

Six AI Models Confirm the Pattern

To test whether this pattern recognition was confirmation bias, I orchestrated six independent AI models (RAM MODELS 1 through 6) to review Jarovsky’s analysis of the governance crisis.

The convergence was remarkable: Five of six models independently identified that current governance approaches cannot prevent the failure patterns we’ve documented for decades. The models reached this conclusion without access to each other’s analyses.

Their consistent finding: Organizations deploying AI without readiness assessment are replicating the same 70-80% failure pattern observed across procurement transformation, ERP implementations, digital transformation initiatives, and now agentic AI deployments.
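In outline, this kind of independent-convergence test is easy to express. The sketch below is a stand-in, not the actual orchestration method: the query callable and the keyword rubric are hypothetical, and the stub simply reproduces the five-of-six split described above.

```python
from collections import Counter
from typing import Callable

def convergence_test(prompt: str, ask: Callable[[int, str], str],
                     n_models: int = 6) -> Counter:
    """Query each model independently (no shared context) and tally conclusions."""
    verdicts = Counter()
    for model_id in range(1, n_models + 1):
        answer = ask(model_id, prompt)
        # Crude keyword rubric for illustration; a real test needs a careful one.
        key = "governance_inadequate" if "cannot prevent" in answer.lower() else "other"
        verdicts[key] += 1
    return verdicts

# Stub that mimics the observed split; real use would call each model's own API.
stub = lambda mid, p: ("current approaches cannot prevent these failures"
                       if mid != 4 else "unclear")
print(convergence_test("Analyze the governance crisis", stub))
# Counter({'governance_inadequate': 5, 'other': 1})
```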

Why the Pattern Repeats

The pattern persists because human nature and organizational dynamics don’t change as fast as technology does.

Technology vendors sell features, functions, and benefits. These are tangible, exciting, and easier to demonstrate than organizational readiness. A glossy presentation showcasing AI capabilities is more compelling than a systematic assessment of governance infrastructure gaps.

Executives face pressure to adopt emerging technologies quickly. “AI strategy” sounds more impressive in board meetings than “readiness assessment.” The fear of being left behind drives premature deployment.

Procurement professionals are incentivized to evaluate technology based on capabilities and cost, not on organizational readiness. RFPs ask “what can your platform do?” not “are we ready to use what your platform does?”

Implementation consultants are typically engaged after technology selection, when their role becomes “making it work” rather than “determining if it should be deployed.”

The result: Organizations repeatedly deploy sophisticated technology into environments unprepared to use it effectively.

What Jim Collins Taught Us About Technology

In “Good to Great,” Jim Collins documented what he called the “Myth of Technology Driven Change.” His research showed that technology accelerates transformation but never causes it. Great companies used technology to accelerate momentum they had already built through disciplined people and disciplined thought.

Collins wrote this in 2001. Twenty-four years later, we’re still learning the same lesson.

Breaking the Pattern: The Readiness-First Framework

The evidence across two decades points to a clear alternative approach:

1. Assess Organizational Readiness FIRST

Before evaluating any technology platform, systematically measure:

  • Governance infrastructure: Decision-making authority, accountability frameworks, risk management
  • Data foundation: Quality, accessibility, integration, ownership
  • Stakeholder alignment: Executive sponsorship, user buy-in, change management capability
  • Technical infrastructure: Integration capabilities, security protocols, scalability

2. Build Missing Capabilities SECOND

Address the gaps revealed by assessment:

  • Establish governance frameworks before deploying autonomous systems
  • Clean and integrate data before expecting analytics value
  • Secure stakeholder alignment before forcing adoption
  • Build infrastructure before adding complexity

3. Deploy Technology THIRD

Select and implement platforms designed to accelerate the capabilities you’ve built, not to compensate for capabilities you lack.

This approach consistently produces 75-85% success rates because it prevents the failure modes that doom 70-80% of technology-first initiatives.
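As a sketch of what such an assessment gate might look like in practice, here is one illustrative encoding of the four dimensions from step 1. The example scores and the 0.75 threshold are assumptions for demonstration, not values prescribed by the framework.

```python
# Hypothetical readiness gate over the four assessment dimensions above.
# The 0.75 threshold and the example scores are illustrative assumptions.
READINESS_THRESHOLD = 0.75

def assess_readiness(scores: dict[str, float]) -> tuple[bool, list[str]]:
    """Ready only if every dimension clears the bar; otherwise return the gaps."""
    gaps = [dim for dim, score in scores.items() if score < READINESS_THRESHOLD]
    return (not gaps, gaps)

organization = {
    "governance_infrastructure": 0.80,  # decision rights, accountability, risk
    "data_foundation": 0.55,            # quality, accessibility, integration
    "stakeholder_alignment": 0.70,      # sponsorship, buy-in, change capability
    "technical_infrastructure": 0.85,   # integration, security, scalability
}

ready, gaps = assess_readiness(organization)
print(ready)  # False: build missing capabilities (step 2) before deploying
print(gaps)   # ['data_foundation', 'stakeholder_alignment']
```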

The Current Opportunity: AI Governance Infrastructure

The AI governance crisis Jarovsky documents represents not just a challenge but an inflection point.

For twenty years, the readiness-first approach was dismissed as too slow, too cautious, too process-heavy for the fast-moving technology landscape. Organizations wanted to “move fast and break things.”

Now we can see what breaks: AI systems causing documented harm, governance failures requiring regulatory intervention, deployment patterns replicating failure rates we’ve observed for decades.

The market is increasingly recognizing what the evidence has shown: systematic readiness assessment prevents predictable failures.

The organizations that recognize this pattern—and implement readiness frameworks before deploying AI—will be the ones that achieve the 75-85% success rates rather than joining the 70-80% failure statistics.

What You Can Do Now

If your organization is considering AI deployment, ask these questions:

  1. Have we systematically assessed our readiness? Not “do we have a strategy?” but “have we measured our capabilities against success criteria?”
  2. Do we have governance infrastructure in place? Not “have we written policies?” but “do we have enforceable mechanisms with accountability?”
  3. Is our data foundation solid? Not “do we have data?” but “is it accessible, integrated, clean, and governed?”
  4. Are our stakeholders aligned? Not “did we announce the initiative?” but “do users understand, buy in, and have capability?”
  5. What evidence of readiness do we have? Not “do we believe we’re ready?” but “what objective measures confirm readiness?”

If you can’t answer these questions with specific evidence, you’re repeating the pattern that has produced 70-80% failure rates for two decades.

The Bottom Line

It has never been about the technology.

Not in 2001 when e-procurement revolutionized supply chains. Not in 2007 when cloud computing transformed infrastructure. Not in 2015 when Big Data promised analytics breakthroughs. Not in 2025 when AI offers autonomous capabilities.

The technology works. The technology has always worked.

The question—the only question that matters—is whether you’re ready.

The pattern is clear. The evidence is overwhelming. The choice is yours.


About the Author: Jon W. Hansen is the founder of Hansen Models and creator of the Hansen Fit Score (HFS) framework for organizational readiness assessment. His research on procurement transformation, partly funded by Canada’s SR&ED program, has documented patterns across 180+ implementations over 27 years. His work on multi-model AI orchestration methodology is documented in “The October Diaries: How to Communicate and Collaborate With Generative AI Effectively.”

This analysis demonstrates why organizations must assess readiness before deploying technology—whether procurement platforms in 2005 or agentic AI systems in 2025. The pattern repeats because we keep making the same mistake.

Posted in: Commentary