The Governance Gap: Why Technology Always Outpaces Readiness

Posted on February 1, 2026


From Time Crystals to Triple Bottom Lines — The Pattern That Explains Why Transformative Frameworks Struggle To Achieve Optimal Velocity

By Jon W. Hansen | Procurement Insights


Time Crystals and the Illusion of Arrival

This week, IBM, BasQ, and NIST researchers demonstrated something remarkable: two-dimensional time crystals across 144 qubits — quantum systems that maintain stable oscillation patterns even as energy is continuously pumped into them. Unlike everything else in the universe, which eventually reaches thermal equilibrium, time crystals resist entropy. They persist.

For quantum computing enthusiasts, this is a milestone. IBM is now predicting “quantum advantage” — solving problems faster, cheaper, or more efficiently than classical computers alone — by the end of 2026. Fault-tolerant quantum systems by 2029.

The headlines write themselves: Breakthrough. Revolutionary. Game-changing.

But I’ve been watching technology “breakthroughs” for 42 years. And here’s what I’ve learned:

The breakthrough is never the hard part. The readiness is.


The Arc Every Technology Follows

Every transformative technology follows the same five-stage arc:

Stage 1: Capability — “We can do this!”

Stage 2: Hype — “This will change everything!”

Stage 3: Unintended Consequences — “This is causing harm.”

Stage 4: Governance Response — “We need readiness criteria.”

Stage 5: Mature Adoption — “Now we can scale responsibly.”

The gap between Stage 2 and Stage 4 — between hype and governance — is where failure lives. And it’s measured in decades.


The Evidence Is Historical

Consider how long it took for transformative technologies to develop the governance frameworks necessary for responsible deployment:

Aviation

  • 1903: Wright Brothers achieve powered flight
  • 1944: International Civil Aviation Organization (ICAO) established
  • Gap: 41 years

During those four decades, aviation went from curiosity to commerce to weapon of war. Pilots died. Passengers died. Air routes were uncoordinated. Safety was inconsistent. It took two World Wars and countless accidents before international governance emerged.

Nuclear Energy

  • 1942: First controlled nuclear chain reaction (Chicago Pile-1)
  • 1957: International Atomic Energy Agency (IAEA) established
  • Gap: 15 years

Even with the existential stakes of nuclear technology, it took 15 years to create international oversight. In between: Hiroshima, Nagasaki, and the beginning of an arms race that still shapes geopolitics.

Pharmaceuticals

  • 1900s: Modern pharmaceutical industry emerges
  • 1962: Kefauver-Harris Amendment requires proof of efficacy (U.S.)
  • Gap: 60+ years

For six decades, pharmaceutical companies could market drugs without proving they worked. It took the thalidomide tragedy — thousands of children born with birth defects — to force comprehensive governance.

The Internet

  • 1990s: Commercial internet emerges
  • 2020s: GDPR, CCPA, and still-evolving frameworks
  • Gap: 30+ years and counting

We’re still in Stage 3 for the internet. Misinformation, privacy violations, algorithmic manipulation, social media’s mental health effects — the consequences are clear. Comprehensive governance remains elusive.


Why Internet Governance Remains Incomplete

The internet presents a fundamentally different governance challenge than previous technologies, and understanding why illuminates what’s coming for quantum computing — and what’s been stuck for decades in enterprise technology.

Distributed architecture. There’s no single point of control. Aviation has airports. Nuclear has reactors. The internet has… everything.

Conflicting jurisdictions. GDPR in Europe, CCPA in California, nothing comprehensive federally in the U.S., and the Great Firewall in China. There’s no equivalent to ICAO or IAEA.

Speed of change. By the time governance frameworks form, the technology has moved on. Social media rules designed around Facebook were obsolete by the time TikTok arrived.

Economic incentives misaligned. The business model (attention capture, data extraction) directly conflicts with governance goals (privacy, mental health, truth).

This isn’t just a policy failure. It’s a structural one. And it reveals something deeper about why governance gaps persist.


The Elkington Parallel: When Frameworks Get Reduced

In 2018, John Elkington did something unprecedented: he issued a “product recall” for a management concept he had invented.

Twenty-five years earlier, Elkington had coined the term “Triple Bottom Line” — the idea that businesses should measure success not just by profit, but by their impact on people and planet. It was supposed to be, in his words, “a genetic code for system change — pushing toward the transformation of capitalism.”

By 2018, Elkington recognized that his framework had failed its original intent. In the Harvard Business Review, he wrote:

“The Triple Bottom Line has failed to bury the single bottom line paradigm. It was never supposed to be just an accounting system. It was originally intended as a genetic code, a triple helix of change for tomorrow’s capitalism.”

What happened? The Triple Bottom Line got reduced from a system-change framework to an accounting tool — a way to balance tradeoffs without actually transforming behavior. Companies could report on people/planet/profit while continuing to operate exactly as before.

The fork didn’t change the cannibal. It just made the consumption look more civilized.


The Same Pattern in Procurement Technology

I’ve spent 27 years documenting an identical reduction in enterprise technology.

The analyst ecosystem — Gartner’s Magic Quadrants, Forrester’s Waves, Spend Matters’ SolutionMaps — was supposed to help organizations make better technology decisions. These frameworks were meant to guide transformation, not just rank vendors.

What they became was a capability comparison system — measuring what vendors can do without evaluating whether buyers can absorb it.

Just as Elkington’s Triple Bottom Line got reduced from system-change framework to accounting metric, analyst reports got reduced from decision-support tools to feature comparisons.

The result? Twenty-seven years of procurement technology “breakthroughs” and a failure rate that hasn’t moved: 50-80%.

Not because the technology doesn’t work. The technology works fine.

Because the organizations deploying it weren’t ready.


Equation-Based vs. Agent-Based: The Core Distinction

In 2008, I wrote a white paper called “The Greening of Procurement,” examining why sustainability initiatives in procurement were failing despite widespread adoption of Triple Bottom Line language.

The core insight: we were treating sustainability as an equation-based problem when it was fundamentally an agent-based problem.

Equation-based thinking assumes that if you define the right metrics and measure the right outputs, behavior change follows automatically. It’s technology-first, capability-focused. It assumes implementation success follows capability selection.

This is what Gartner measures. This is what TBL-as-accounting-tool looks like. This is what most enterprise technology deployments assume.

Agent-based thinking recognizes that outcomes emerge from the interactions of multiple actors with different incentives, constraints, and readiness levels. It’s people-first, readiness-focused. It recognizes that human and organizational factors determine outcomes regardless of technology capability.

This is what I’ve measured since 1998. This is what Elkington intended Triple Bottom Line to enable. This is what’s missing from every governance gap.

The distinction explains why:

  • A “Leader” vendor can fail in an organization that isn’t ready
  • A sustainability framework can become an accounting exercise
  • Internet governance can’t keep pace with internet capability
  • Quantum computing will face the same challenge

Equation-based approaches measure capability. Agent-based approaches measure readiness. The governance gap exists because we build the former and neglect the latter.
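To make the distinction concrete, here is a minimal sketch in Python. It illustrates the two modeling styles in general — it is not the Hansen methodology itself, and every name and number in it is hypothetical. The equation-based view predicts success from vendor capability alone; the agent-based view simulates stakeholders whose individual readiness levels determine whether the deployment actually lands.

```python
import random

# Equation-based view: outcome is assumed to follow from capability alone.
def equation_based_forecast(vendor_capability: float) -> float:
    """Predicted success equals capability; readiness never enters the model."""
    return vendor_capability

# Agent-based view: outcome emerges from interacting actors with their own
# readiness levels. (A toy model -- not the Hansen methodology itself.)
def agent_based_forecast(vendor_capability: float,
                         stakeholder_readiness: list[float],
                         trials: int = 10_000) -> float:
    """Estimated success rate: a trial succeeds only if every stakeholder
    the deployment touches actually absorbs the change."""
    successes = 0
    for _ in range(trials):
        if all(random.random() < readiness * vendor_capability
               for readiness in stakeholder_readiness):
            successes += 1
    return successes / trials

if __name__ == "__main__":
    capability = 0.95                # a "Leader" vendor on paper
    stakeholders = [0.9, 0.8, 0.5]   # e.g., IT, procurement, end users (all hypothetical)
    print(f"Equation-based forecast: {equation_based_forecast(capability):.0%}")
    print(f"Agent-based forecast:    {agent_based_forecast(capability, stakeholders):.0%}")
```

Under these toy numbers, a vendor scoring 95% on capability delivers roughly a 31% simulated success rate once three stakeholders of uneven readiness enter the model — the same order of failure the 50-80% figure describes.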


The Cannibals With Forks Question

Elkington borrowed his book title from Polish poet Stanislaw Jerzy Lec: “Is it progress if a cannibal uses a fork?”

The question cuts to the heart of every governance gap:

Is it progress if a procurement organization buys a “Leader” solution without the readiness to implement it?

Is it progress if a company reports Triple Bottom Line metrics without transforming how it operates?

Is it progress if we achieve quantum advantage without frameworks to ensure responsible deployment?

The fork (technology capability) doesn’t change the fundamental behavior (organizational readiness). It just makes the failure look more sophisticated.

In my 2023 analysis of COP28, I examined why sustainability initiatives continue to struggle despite decades of Triple Bottom Line adoption. The answer was structural: we measure what’s easy to measure (emissions, vendor features, quantum gates) rather than what determines outcomes (stakeholder incentives, organizational readiness, behavioral change).

Sultan Al Jaber, the president of COP28, stated there was “no science” indicating fossil fuel phase-out was necessary. Critics attacked the statement as climate denial. But Elkington’s framework explains the deeper truth: one business’s gain is often another’s loss. Al Jaber’s country’s economy depends on fossil fuels. The incentive structure doesn’t align with the sustainability metrics.

This isn’t climate denial. It’s the predictable result of equation-based governance meeting agent-based reality.

The same dynamic plays out in procurement technology every day. Vendors have incentives to sell capability. Analysts have incentives to rank capability. Organizations have incentives to buy “Leaders.” No one has incentives to measure readiness — until the project fails.


The Quantum Test Case

Quantum computing is now entering Stage 2 (hype) with all the same equation-based, technology-first assumptions:

  • “If we build capable quantum systems, organizations will benefit.”
  • “If we measure quantum advantage, deployment will follow.”
  • “If we publish quantum-readiness reports, organizations will prepare.”

These are the same assumptions that have produced 50-80% failure rates in procurement technology for 27 years.

What “quantum advantage” actually means:

IBM defines it as solving problems “cheaper, faster, or more efficiently than classical computing alone” — with quantum serving as an accelerator for classical high-performance computing. This is the quantum-centric supercomputing model: QPUs, CPUs, and GPUs working together, each handling the mathematics to which it is best suited.

By the end of 2026, expect quantum advantage in specific, bounded problems — likely optimization and simulation. Route optimization, inventory positioning, supplier allocation across complex networks. These are mathematically suited to quantum approaches.
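For readers who want to see why such problems map to quantum hardware, here is a minimal sketch — hypothetical numbers throughout — of a supplier-allocation problem written as a QUBO (quadratic unconstrained binary optimization), the form that quantum annealers and many gate-based variational algorithms accept. Nothing here requires a quantum computer; the point is the encoding.

```python
import numpy as np
from itertools import product

# Toy supplier-allocation QUBO (all figures hypothetical).
# Binary variable x[i] = 1 means "award a contract to supplier i".
cost = np.array([4.0, 5.0, 3.0])          # unit cost per supplier
risk_penalty = np.array([[0, 2, 1],       # pairwise concentration risk
                         [2, 0, 3],
                         [1, 3, 0]], dtype=float)
P = 10.0   # penalty weight enforcing "pick exactly K suppliers"
K = 2

n = len(cost)
Q = np.zeros((n, n))
# Linear terms sit on the diagonal; expanding (sum x_i - K)^2 adds
# P*(1 - 2K) per variable and 2P per pair, enforcing the cardinality.
for i in range(n):
    Q[i, i] = cost[i] + P * (1 - 2 * K)
for i in range(n):
    for j in range(i + 1, n):
        Q[i, j] = risk_penalty[i, j] + 2 * P

# Brute-force the 2^n bitstrings -- the classical stand-in for what a
# quantum annealer or QAOA circuit would search.
best = min(product([0, 1], repeat=n),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print("Best allocation:", best)   # picks the two cheapest, lowest-risk suppliers
```

On three binary variables this is trivial to brute-force; at hundreds of suppliers, the 2^n search space is exactly where the hoped-for quantum advantage would apply.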

What it doesn’t mean:

Quantum computing will not transform enterprise procurement by 2026. Organizations claiming otherwise are in Stage 2 (hype), not Stage 4 (governance).

The governance timeline for quantum:

Quantum sits somewhere between nuclear and internet in its governance challenges:

Like nuclear: Physical infrastructure requirements (cryogenic systems, specialized facilities) create natural control points. You can’t run a quantum computer in your garage.

Like internet: The applications of quantum computing will spread rapidly once capability scales. Quantum-powered optimization algorithms could embed in enterprise software without organizations understanding what’s running underneath.

My estimate: 15-25 years to meaningful governance frameworks (2040-2050), if we learn from past patterns. But “meaningful” doesn’t mean complete — more like aviation-level (functional standards, international coordination) than pharmaceutical-level (rigorous pre-deployment testing).

CISA has already mandated federal procurement of quantum-resistant technology products, anticipating “Q-Day” — the moment quantum computers can break current encryption. This is governance responding to threat, not opportunity. It’s Stage 4 for cryptography specifically, while quantum computing broadly remains in Stage 2.

The question is whether we can learn from 27 years of procurement technology failures and 30+ years of incomplete internet governance to compress the gap.


Twenty-Seven Years in the Gap

I’ve spent my career in the governance gap — not by choice, but by necessity.

In 1998, I developed the Relational Acquisition Model (RAM) through SR&ED-funded research for the Government of Canada. It achieved 97.3% delivery accuracy for the Department of National Defence. The methodology wasn’t about technology capability — it was about organizational readiness.

That work led to a $12 million company sale in 2001. But more importantly, it established a pattern I’ve tracked ever since: the gap between what technology can do and what organizations can absorb is where failure lives.

Here’s what I’ve documented:

1998-2026: The Procurement Technology Gap

Every wave of procurement technology has followed the same arc:

  • Late 1990s: ERP procurement modules — “This will transform purchasing!”
  • 2000s: e-Procurement platforms — “This will digitize sourcing!”
  • 2010s: Source-to-Pay suites — “This will automate everything!”
  • 2020s: AI-powered procurement — “This will be intelligent!”

And through all of it, the implementation failure rate has remained stubbornly consistent: 50-80%.

The analyst industry publishes annual assessments of procurement technology vendors. Magic Quadrants. Waves. SolutionMaps. These reports evaluate what the technology can do and how vendors compare to each other.

What they don’t evaluate:

  • Whether the buying organization can absorb the technology
  • Whether the vendor can actually implement what they sell
  • Whether the gap between capability and readiness will doom the project

This is the governance gap in procurement technology. The industry has mature capability assessment. It has no readiness assessment. Twenty-seven years of breakthroughs, and the failure rate hasn’t moved.


The Hansen Readiness Arc

Whether the technology is quantum computing or procurement platforms, the governance gap follows predictable patterns. Closing it requires assessing three dimensions:

1. Technology Capability

What the technology can do under optimal conditions. This is what analysts measure. This is what vendors sell. This is Stage 1 and Stage 2.

2. Service Delivery Capacity

Can the provider actually implement what they’re selling? Do they have the people, processes, and track record to deliver? This is rarely measured.

3. Organizational Readiness

Can the buying organization absorb the technology? Do they have the data maturity, process alignment, change management capacity, and stakeholder buy-in to make it work? This is almost never measured.

The gap between these three dimensions is where failure lives.

A “Leader” in capability assessment can still fail if service delivery is weak or organizational readiness is low. This isn’t theoretical — it’s documented across hundreds of implementations I’ve tracked.

The Hansen Fit Score™ measures what the analyst reports don’t. The gaps between technology capability, service delivery capacity, and minimum client readiness required predict implementation outcomes more accurately than any Magic Quadrant positioning.
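The Hansen Fit Score™ itself is proprietary, so the following is only a hypothetical sketch of the general idea the three dimensions suggest: score each dimension, and let the weakest link and the spread between capability and readiness — not the capability headline — drive the risk flag. Every name and threshold here is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    """Hypothetical 0-10 scores on the three dimensions described above."""
    technology_capability: float   # what the vendor offers
    service_delivery: float        # whether the vendor can implement it
    client_readiness: float        # whether the organization can absorb it

def readiness_gap_report(a: Assessment) -> str:
    """Toy gap logic: the weakest dimension caps the outlook, and a wide
    spread between capability and readiness is itself a warning sign."""
    weakest = min(a.technology_capability, a.service_delivery, a.client_readiness)
    spread = a.technology_capability - a.client_readiness
    if weakest >= 7 and spread <= 2:
        return "Proceed: capability and readiness are aligned."
    if spread > 3:
        return "High risk: the buyer cannot absorb what the vendor sells."
    return "Caution: close the weakest dimension before contracting."

# A "Leader" vendor paired with an unready organization still flags red.
print(readiness_gap_report(Assessment(9.5, 8.0, 4.0)))
```

Note what drives the flag: the 9.5 capability score never rescues the outcome. That is the whole argument of this section.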


What Quantum Can Learn from Procurement (and Elkington)

IBM is building remarkable technology. The time crystal demonstration proves quantum systems can maintain coherence at scale. The hybrid quantum-classical architecture represents genuine innovation.

But IBM’s roadmap focuses on capability: qubits, gates, error correction, fault tolerance.

Where is the readiness roadmap?

  • Which organizations are prepared to adopt quantum systems?
  • What governance frameworks will ensure responsible deployment?
  • Who is assessing the gap between what quantum can do and what enterprises can absorb?

History suggests these frameworks will emerge 15-60 years after capability. The question is whether we can learn from the pattern and compress the timeline.

Elkington recalled his Triple Bottom Line framework because it had been reduced from system-change catalyst to accounting exercise. The framework measured the right things but didn’t transform behavior.

The same risk exists for every emerging technology governance effort:

  • Quantum readiness frameworks that measure capability without readiness
  • AI governance that counts parameters without assessing organizational absorption
  • Sustainability metrics that track emissions without addressing stakeholder incentives

The pattern repeats because we keep building equation-based solutions to agent-based problems.


The Work Ahead

I’m not a quantum computing expert. I’m a technology governance pattern recognizer with 42 years of evidence.

The pattern says: capability arrives first, governance follows decades later, and the gap between them is where harm accumulates.

For quantum computing, the capability is arriving now. The governance gap is opening. Organizations will be sold quantum-enhanced solutions long before readiness frameworks exist to evaluate their ability to deploy them responsibly.

For procurement technology, the capability has existed for decades. The governance gap has remained open for 27 years. The failure rate hasn’t moved because we measure capability without measuring readiness.

For sustainability, the Triple Bottom Line framework existed for 25 years before its creator called for a recall. The metrics existed but the transformation didn’t.

All three problems have the same solution: agent-based, readiness-first methodology.

Before asking “can this technology do what we need?” ask “can this organization absorb what the technology offers?”

Before measuring capability, measure readiness.

Before ranking vendors, assess the gap between what they sell and what buyers can implement.

The time crystals resist entropy. Organizations don’t. Neither do management frameworks, governance systems, or transformation initiatives — unless we build them differently.


The Hansen Method

For 27 years, I’ve worked in the governance gap. Not by evaluating technology capability — the analysts do that adequately — but by evaluating organizational readiness.

The Hansen Fit Score™ measures what the analyst reports don’t:

  • Technology Capability — what the vendor offers
  • Service Delivery Capacity — whether the vendor can implement it
  • Minimum Client Readiness Required — whether the organization can absorb it

The gaps between these three dimensions predict implementation outcomes more accurately than any Magic Quadrant positioning.

This methodology doesn’t compete with analyst frameworks. It complements them — providing the Phase 0 readiness assessment that determines whether capability rankings matter at all.

If your organization is evaluating procurement technology — or any enterprise technology — and you want to close the governance gap before it costs you millions, the assessment methodology exists.

The question is whether you’ll use it before or after the 50-80% failure rate claims another project.


A condensed executive briefing version of this post is available for leadership meetings and board presentations. Contact jon@pimedia1.com.


Jon Hansen is the founder of Hansen Models™ and creator of the Hansen Method™, a procurement transformation methodology developed over 27 years. He operates Procurement Insights, an 18-year archive documenting procurement technology patterns. His work on the governance gap between technology capability and organizational readiness began with SR&ED-funded research in 1998 and continues through the Hansen Fit Score™ framework and the RAM 2025 multi-model AI validation system.


-30-
