The Compounding Technology Shadow Wave™ Trilogy: Executive Summaries

Posted on May 9, 2026

A consolidated overview of the three-part diagnosis that names the structural condition AI initiatives are inheriting — and the financial cost of leaving it unaddressed.

Over the past several days, the Procurement Insights archive has published three connected pieces that together produce a coherent diagnostic and financial framework for understanding why most current AI initiatives are underperforming. The pieces build on each other structurally. Each one is substantial enough to read on its own, but the combined diagnosis is materially stronger than any of the three pieces in isolation.

This post is the executive map. It compresses each of the three published posts into a short summary that preserves the load-bearing findings, with direct links to the underlying pieces for readers who want to engage substantively. Senior buyers, board members, and procurement leaders who want the framework’s executive overview before investing time in the full reading should start here.

The trilogy is being followed by additional pieces in the publication sequence — the Hansen Optionality Loss Estimate™ methodology, the convergence-event temporal mechanics, the absorption-mechanism failure analysis, and the consolidated white paper. The current three pieces establish the foundation those subsequent pieces build on.

Post 1: Marijn’s Data Is Wave Four

Marijn’s Data Is Wave Four: Why the AI Adoption Gap Was Always Going to Look Like This (May 8, 2026)

The finding. Marijn Overvest’s May 2026 survey of 121 procurement leaders surfaced three numbers worth taking seriously: 47% using AI daily, 8% in organizations that have formally embedded it, 83% operating without an enforced AI policy. The numbers are striking. They are also not new. This is the fourth time the same shadow adoption pattern has played out in the last thirty-five years.

The pattern. Spreadsheets in the 1990s. BYOD from 2007. SaaS sprawl from 2012. AI from 2022. Each wave produced bottom-up adoption that outran sanctioned governance by years. Each wave was diagnosed at the time as a discrete problem with the technology in question. None of the four waves has ended — each is still operating in the enterprise today, at near-saturation levels.

The post grounds this pattern in the contemporaneous Procurement Insights archive: the 2008 SAP procurement failures piece, the 2010 SaaS sprawl analysis, the December 2022 Forrester/Airtable data on app proliferation, the May 2025 spreadsheet decline analysis, and the May 2026 IKEA-box piece on bolt-on architecture. That is nineteen years of documentation capturing the recurring shadow adoption pattern as it formed in real time, before any of the diagnostic frameworks for naming it existed.

The structural finding. Procurement teams are now being implicated in their own substrate sprawl — buying tools that contributed to the very fragmentation the AI wave is now exposing. The pattern is not new. What is new is the cadence at which the fourth wave’s consequences are arriving.

Why it matters. The Wave Four post establishes the empirical floor for everything that follows. Without the four-wave pattern named explicitly and grounded in archive evidence, the subsequent structural findings would be defensible but harder to anchor. The Marijn data is the recent empirical anchor; the archive is the longitudinal one. Together they ground the framework in evidence that no adjacent researcher can replicate without nineteen years of contemporaneous publishing across the same period.

Post 2: What Is The Compounding Technology Shadow Wave™?

What Is The Compounding Technology Shadow Wave™? (May 8, 2026)

The finding. The four waves are not operating as discrete technology categories with their own discrete governance problems. They are operating simultaneously, at saturation, with AI now running on top of the cumulative stack. AI is the first wave structurally capable of learning from, interacting with, and accelerating all the previous waves at machine cadence. The structural condition that emerges is what the framework names as the Compounding Technology Shadow Wave™.

The compounding works through three specific mechanisms.

Input contamination — AI deployments draw inputs from data that lives in spreadsheets, flows through BYOD-era access patterns, and routes through SaaS applications procurement never sanctioned. Wave-4-quality decisions made against Wave-1-quality inputs are not Wave-4 outcomes.

Provenance breakage — AI outputs that need to be defended to auditors, regulators, or the board cannot be defended when their inputs traversed unprovenanced shadow infrastructure. The chain breaks at whichever upstream wave’s shadow the inputs traversed.

Reversion path reactivation — when AI deployments stall, users revert to the wave they were operating in before. Each reversion reactivates a different prior-wave shadow that had been partially dormant.

These three mechanisms are multiplicative, not additive.

The resolution architecture. The post introduces the framework’s two-tier resolution structure. Phase 0™ is the diagnostic that surfaces the multi-wave shadow stack across all four waves in a single readiness profile. The relational governance layer (positioned with SRS RBM® as the integrated relational instrument) provides the contractual frame within which the four operational disciplines — workflow rationalization, perimeter governance, SaaS rationalization, AI usage policy — do their wave-specific work. The execution layer is where humans, and increasingly agentic AI under delegated authority, operationalize the governance frame.

Why it matters. The Compounding Technology Shadow Wave™ post is the structural diagnosis. It names the condition the prior post documented as a pattern, and it positions the framework’s resolution architecture against the structural reality that single-wave governance does not address. Stephen Ashcroft’s contribution to the comment thread — governance only works when it runs at delivery pace — reiterated the two-condition test the framework now uses to evaluate any governance posture: governance must run at delivery pace AND operate against a recognized substrate. Both conditions must be met.

Post 3: Is The Failure Of Your AI Initiative Already Decided By What You Did In 2016?

Is The Failure Of Your AI Initiative Already Decided By What You Did In 2016? (May 9, 2026)

The finding. By 2016, three of the four technology waves were operationally significant in most enterprises. Most current AI initiatives are landing on substrate decisions that were finalized in the 2014 to 2018 window, calibrated for a pre-AI environment that no longer exists. The architecture decisions, contract terms, access patterns, data hygiene practices, and governance instruments that are now constraining AI deployment outcomes were made before AI was a factor in any of them. That is what makes the AI failure rate structurally predetermined for most enterprises.

The analytical contribution. The post introduces the Hansen Deflator Formula™ — the multiplicative substrate-quality measurement that converts the framework’s structural diagnosis into board-level financial vocabulary:

Realized AI value = (Theoretical AI value) × (Wave 1 substrate quality) × (Wave 2 substrate quality) × (Wave 3 substrate quality)

For the typical mid-market enterprise, illustrative coefficients (0.80 × 0.70 × 0.65) produce a deflator of 0.36. An AI deployment with $10 million in theoretical value is realizing $3.6 million; the remaining $6.4 million is being absorbed by the compounding shadow. For a five-thousand-employee enterprise spending $13 million per year on AI initiatives, the absorbed value runs $7 million to $9 million per year on the AI line alone. Total enterprise drag from the unresolved compounding stack runs $25 million to $40 million per year for a mid-cap enterprise.
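The deflator arithmetic above can be sketched in a few lines. A minimal sketch, assuming the post’s illustrative mid-market coefficients; the function names are invented for this example, and the exact product (0.364) is what the post rounds to 0.36:

```python
# Sketch of the Hansen Deflator Formula(TM) arithmetic using the post's
# illustrative coefficients. Function names are invented for this example.

def deflator(wave1: float, wave2: float, wave3: float) -> float:
    """Multiplicative substrate-quality deflator across Waves 1-3."""
    return wave1 * wave2 * wave3

def realized_ai_value(theoretical: float, d: float) -> float:
    """Realized AI value = theoretical AI value x compound deflator."""
    return theoretical * d

d = deflator(0.80, 0.70, 0.65)      # spreadsheet, BYOD, SaaS substrate quality
realized = realized_ai_value(10_000_000, d)
absorbed = 10_000_000 - realized

print(f"deflator ~= {d:.2f}")       # ~0.36 (0.364 before rounding)
print(f"realized  = ${realized:,.0f}")
print(f"absorbed  = ${absorbed:,.0f}")
```

Because the mechanisms are multiplicative, a weak coefficient in any single wave caps the realized value regardless of how clean the other substrates are.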

The post also names a forthcoming companion measurement instrument — the Hansen Optionality Loss Estimate™ — for the foreclosed strategic optionality that linear cost calculations do not capture. And it surfaces a separate cost category in non-linear lost opportunity events: discrete strategic foreclosures that occurred at specific moments because the substrate condition prevented the enterprise from acting when the window was open. These events are typically much larger in magnitude than the linear cost categories, and they are structurally unrecoverable.

The board-level question. What is the cumulative shadow exposure our AI deployment will inherit, what is the realized-value drag the compounding wave is currently producing, and which combination of disciplines is going to address each layer of it?

Why it matters. This piece converts the framework from structural commentary into a capital allocation instrument. The deflator math is auditable. The dollar figures are recoverable from existing financial systems. The recovery trajectory is calculable — substrate diagnosis from 2026 forward can move the deflator from 0.36 to 0.55 within nine months through measurable coefficient improvements. The board’s question becomes operational: surface the cost in the current capital allocation cycle, or absorb it into another year of overstated risk-adjusted EBITDA.
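The recovery trajectory is also checkable arithmetic. A minimal sketch, assuming hypothetical post-remediation coefficients — the post specifies only the 0.36 and 0.55 endpoints, not the per-wave improvements behind them, so the "after" values below are invented for illustration:

```python
# Hypothetical recovery arithmetic: moving the deflator from ~0.36 to ~0.55.
# The "after" coefficients are assumed for illustration only; the post gives
# endpoint deflators, not the per-wave improvements that produce them.

theoretical = 10_000_000
before = 0.80 * 0.70 * 0.65    # ~0.36, the post's illustrative baseline
after = 0.88 * 0.80 * 0.78     # ~0.55, assumed post-remediation values

recovered = theoretical * (after - before)
print(f"deflator before: {before:.2f}, after: {after:.2f}")
print(f"value recovered on a $10M deployment: ${recovered:,.0f}")
```

The point of the sketch is that modest per-wave improvements compound: no single coefficient needs to reach 1.0 for the deflator to move materially.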

How the three pieces fit together

The structural sequence is intentional and worth making explicit.

The Wave Four post establishes the pattern and grounds it in archive evidence. It answers: what has been happening, and how do we know?

The Compounding Technology Shadow Wave™ post establishes the structural diagnosis of the pattern. It answers: what is the integrating condition that connects the four waves, and what addresses it?

The deflator post establishes the financial measurement. It answers: what does the structural condition cost, and what changes the math?

Together they produce a coherent diagnostic sequence that moves from empirical pattern → structural diagnosis → financial measurement → operational lever. Each piece is substantively complete on its own. The combined diagnosis is materially stronger than any single piece, because each one earns the next one’s reception by establishing the structural foundation it builds on.

The framework now has named structural vocabulary (Compounding Technology Shadow Wave™), a named diagnostic instrument (Phase 0™), a named relational governance positioning (SRS RBM® as the integrated relational layer), a named measurement instrument (Hansen Deflator Formula™), and a named forthcoming measurement instrument (Hansen Optionality Loss Estimate™). The vocabulary architecture is what distinguishes this body of work from generic AI governance commentary, which offers no comparably named or measurable instruments.

The publication sequence continues. The Hansen Optionality Loss Estimate™ piece develops the methodology for the optionality measurement. A subsequent piece develops the non-linear lost opportunity analysis. The convergence-event piece develops the temporal mechanics underneath the structural condition. The white paper consolidates the body of work into citable form for board-level use.

For now: three pieces, one framework, one measurement instrument established, one forthcoming. The capital allocation question is on the page.

—30—


Compounding Technology Shadow Wave™ · Hansen Deflator Formula™ · Hansen Optionality Loss Estimate™ · Phase 0™ · Hansen Fit Score™ (HFS™) · RAM 2025™ · Real-World Condition Substrate™ · Strand Commonality™ · ARA™ · Implementation Physics™ · Hansen Models™ · Founder: Jon W. Hansen · hansenprocurement.com

Posted in: Commentary