The Compounding Technology Shadow Wave™, the deflator math, and the financial argument for the CFO and the board.
“Governance only works when it runs at delivery pace.” — Stephen Ashcroft, Executive Procurement Leader, May 2026
The question in the headline is the structural one this piece exists to answer. The answer, for most enterprises currently funding AI initiatives, is yes — the failure is already substantially decided, and the decisions that decided it were made in the years before AI was on the board’s agenda. The structural reason is what the Compounding Technology Shadow Wave™ framework names. The financial cost is what this piece quantifies.
By 2016, three of the four technology waves were operationally significant in most enterprises. Spreadsheets had been load-bearing for twenty-six years. BYOD was nine years into adoption and approaching saturation. SaaS sprawl was four years into its trajectory and was already producing the integration patterns that would later be diagnosed as the SaaS rationalization problem. Most current AI initiatives are landing on substrate decisions that were finalized roughly in the 2014 to 2018 window. The architecture decisions, the contract terms, the access patterns, the data hygiene practices, and the governance instruments that are now constraining AI deployment outcomes were calibrated for a pre-AI environment that no longer exists.
That is what makes the AI failure rate structurally predetermined for most enterprises. Not the AI itself. The substrate the AI is being deployed against, which was built when AI was not yet a factor in any of the relevant decisions.
This piece converts that structural diagnosis into the financial language the CFO and the board operate in.
The conversion is necessary because the diagnostic argument, as structurally accurate as it is, does not on its own compel action. Boards act on dollars, on risk-adjusted EBITDA, on audit findings, on capital allocation defensibility. The argument that the enterprise has a multi-wave substrate condition is true, but the structural truth does not move the engagement until it is expressed in the financial vocabulary the board uses to allocate capital.
What follows is that conversion. There is a formula at the center of it that is novel enough to be quotable, defensible enough to be auditable, and large enough at the dollar-figure level to be unignorable at the board level.
The two-condition test that frames the financial argument
Stephen Ashcroft’s contribution to the framework — surfaced in the comment thread on the Compounding Technology Shadow Wave™ post — is the load-bearing test for whether any enterprise governance posture is actually working. Governance only works when it runs at delivery pace. To which the framework adds: governance only works when it operates against a recognized substrate. Both conditions must be met. Neither is sufficient alone.
A governance instrument that runs at delivery pace against an unrecognized substrate is enforcing rules against an assumed reality that does not match operational reality. Compliance reports look clean; the operational activity is unconstrained. A governance instrument that recognizes the substrate but runs on quarterly cadence is identifying conditions correctly but cannot constrain them at the speed they manifest. Compliance reports identify exposure; remediation arrives too late to matter.
Most current AI governance configurations fail at least one of these tests. AI governance platforms tend to operate at delivery pace but treat the substrate as AI-only, missing the four-wave compounding. Substrate-aware programs (shadow IT remediation, SaaS rationalization) tend to operate on procurement-cycle cadence, recognizing exposure correctly but failing to constrain it operationally. The configurations that satisfy both conditions are structurally rare, which is why the financial cost of the substrate condition is large enough to warrant board-level attention.
That financial cost is what this piece quantifies.
The direct cost components — already on the books
Start with what is already absorbed into existing financial statements. Each of the four waves produces measurable direct costs that sit in the financial systems today, recoverable without new methodology.
Wave 3 SaaS waste is the cleanest single number. Industry benchmarks put unused or underutilized SaaS licenses at twenty-five to forty-four percent of total SaaS spend. With average per-employee SaaS spend at $9,643 in 2026, a five-thousand-employee enterprise carries between $12 million and $21 million per year in directly identifiable SaaS waste. Recoverable from the procurement spend file in an afternoon.
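The license-waste arithmetic above is simple enough to reproduce directly. In this sketch the headcount, per-employee spend, and waste rates are the benchmark figures cited in the text, treated as illustrative inputs rather than audited constants:

```python
# Illustrative SaaS-waste estimate using the benchmark figures cited above.
# All inputs are assumptions drawn from the text, not audited values.
EMPLOYEES = 5_000
SAAS_SPEND_PER_EMPLOYEE = 9_643               # average per-employee SaaS spend, 2026
WASTE_RATE_LOW, WASTE_RATE_HIGH = 0.25, 0.44  # unused/underutilized license share

total_saas_spend = EMPLOYEES * SAAS_SPEND_PER_EMPLOYEE
waste_low = total_saas_spend * WASTE_RATE_LOW
waste_high = total_saas_spend * WASTE_RATE_HIGH

print(f"Total SaaS spend:    ${total_saas_spend:,.0f}")
print(f"Identifiable waste:  ${waste_low:,.0f} to ${waste_high:,.0f} per year")
```

On these inputs the total spend is roughly $48 million, which puts the identifiable waste in the $12 million to $21 million band the text cites.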
Wave 1 spreadsheet errors, Wave 2 BYOD-era breach exposure, and Wave 4 AI implementation failure costs are similarly documented across published research. Forrester and Ventana put material spreadsheet errors in eighty-eight percent of business spreadsheets, with median per-error financial impact in the high six figures and tail events extending into the billions (the JP Morgan London Whale loss of $6.2 billion, traced to a copy-paste error in a Value-at-Risk spreadsheet, is the most-cited tail event). IBM’s Cost of a Data Breach report puts the average breach at $4.88 million, with roughly thirty percent now attributed to BYOD-era access patterns. Stanford HAI documents AI initiative failure rates between forty and sixty percent, against McKinsey’s median enterprise AI investment of $13 million.
These direct costs aggregate to substantial figures on their own. For a five-thousand-employee mid-market enterprise, the directly recoverable absorbed cost across Waves 1 through 3 typically runs $20 million to $35 million per year. That is before any compounding-wave analysis is applied. It is the floor.
The compounding wave is what sits on top of the floor — and on top of the AI investment specifically, where it produces the structural finding the framework converts into board-level vocabulary.
The Hansen Deflator Formula™ — the analytical contribution
The structural insight underneath the framework is that compounding cost is multiplicative, not additive. It cannot be measured by summing the waves’ direct costs. It has to be measured as a deflator — a multiplier between 0 and 1 expressing how much of the current wave’s investment value survives each prior wave’s unresolved condition.
The Hansen Deflator Formula™:
Realized AI value = (Theoretical AI value) × (Wave 1 substrate quality) × (Wave 2 substrate quality) × (Wave 3 substrate quality)
Theoretical AI value is what the original business case projected — the EBITDA impact, cost reduction, or revenue acceleration the AI initiative was designed to produce if the substrate were clean. Substrate quality is a coefficient between 0 and 1 representing the fraction of the prior wave’s data, infrastructure, or governance that is in a state the AI deployment can actually use.
For the typical mid-market enterprise, illustrative substrate-quality coefficients — derived from observed operational ranges and published benchmark synthesis, not universal constants — typically run as follows. Wave 1 substrate quality (the fraction of spreadsheet inputs that are clean) sits around 0.80, reflecting the eighty-eight percent error rate documented for two decades. Wave 2 substrate quality (the fraction of BYOD-era access patterns that are governable through current MDM and identity instruments) sits around 0.70. Wave 3 substrate quality (the fraction of SaaS data that is provenanced and integrated) sits around 0.65, reflecting the documented gap between sanctioned and shadow SaaS deployment.
Multiply those coefficients: 0.80 × 0.70 × 0.65 = 0.364, or roughly 0.36.
That is the deflator. An AI deployment with a theoretical value of $10 million is producing $3.6 million of realized value. The remaining $6.4 million is being absorbed by the compounding shadow.
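The calculation can be sketched in a few lines. This is a minimal illustration using the coefficients above, which Phase 0™ would replace with empirically measured values for a specific enterprise:

```python
# Hansen Deflator sketch. The coefficients are the illustrative mid-market
# values from the text, not universal constants.
def realized_ai_value(theoretical_value, substrate_coefficients):
    """Multiply theoretical value by each wave's substrate-quality coefficient."""
    deflator = 1.0
    for c in substrate_coefficients:
        deflator *= c
    return theoretical_value * deflator, deflator

realized, deflator = realized_ai_value(10_000_000, [0.80, 0.70, 0.65])
print(f"Deflator: {deflator:.2f}")                 # 0.364 rounds to 0.36
print(f"Realized value: ${realized:,.0f}")         # $3,640,000
print(f"Absorbed: ${10_000_000 - realized:,.0f}")  # $6,360,000
```

The rounded figures in the prose ($3.6 million realized, $6.4 million absorbed) come straight out of this multiplication.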
For the typical mid-market enterprise, the Hansen Deflator Formula™ produces a compounding deflator in the 0.30 to 0.45 range. That means an unmodified AI initiative is realizing thirty to forty-five percent of its theoretical value, with fifty-five to seventy percent absorbed by substrate-driven friction.
The Hansen Deflator Formula™ is the structural finding the framework converts into board-level vocabulary. The coefficients are not universal constants. They are illustrative ranges that Phase 0™ measures empirically for a specific enterprise’s substrate. The structural claim — that the compounding cost is multiplicative — holds whether the coefficients for any specific enterprise sit at the typical ranges, the high end, or the low end. What changes by enterprise is the size of the absorbed value loss, not whether the loss exists.
The board-level dollar figure
For a five-thousand-employee enterprise spending $13 million per year on AI initiatives — which is the McKinsey median for enterprises of that scale — with the typical four-wave shadow stack, the math produces a specific finding:
AI value being absorbed by the compounding shadow: $7 million to $9 million per year, on the AI line alone.
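The $7 million to $9 million figure follows directly from the median spend and the 0.30 to 0.45 deflator range. A quick check, treating both inputs as the illustrative values they are:

```python
# Absorbed AI value for a $13M annual AI line at the typical deflator range.
# Inputs are the illustrative figures from the text, not audited values.
AI_SPEND = 13_000_000
DEFLATOR_LOW, DEFLATOR_HIGH = 0.30, 0.45  # fraction of value realized

absorbed_high = AI_SPEND * (1 - DEFLATOR_LOW)  # 70% absorbed: $9.1M
absorbed_low = AI_SPEND * (1 - DEFLATOR_HIGH)  # 55% absorbed: $7.15M
print(f"AI value absorbed: ${absorbed_low:,.0f} to ${absorbed_high:,.0f} per year")
```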
Add the directly measurable waste from the prior three waves (the SaaS waste, the spreadsheet error costs, the BYOD-era breach exposure), and the total enterprise drag from the unresolved compounding stack runs $25 million to $40 million per year for a mid-cap enterprise. The AI portion grows every quarter as deployment expands. The prior-wave portion is structurally stable until it is explicitly addressed.
That is a number a board can act on. It is not abstract. It is not theoretical. It is recoverable from existing financial systems plus a documented coefficient analysis. Every input is auditable. Every assumption is defensible. The aggregate finding is large enough that it cannot be reasonably ignored once it is surfaced.
For most boards, $25 million to $40 million per year of absorbed value loss is an item that, properly measured and presented, becomes a capital allocation priority. It is not a discretionary remediation question. It is a structural drag on enterprise value that affects the multiples being applied to the business.
Across the thirty-five-year period the framework spans, the cumulative cost of substrate inheritance compounds to a substantial fraction of total enterprise value. The companion chart shows three trajectories: the path under continued substrate inheritance, the path that continuous substrate diagnosis would have produced, and the recovery trajectory available from 2026 forward through Phase 0™ and the resolution architecture. The gap between the trajectories is what the framework is naming as the compounding shadow’s accumulated cost. The chart is illustrative, calibrated to a representative mid-cap enterprise — but the structural shape holds across enterprise scales, because the deflator math operates on the same multiplicative principle regardless of baseline size.
The other measurable C-suite-recognized impacts
Financial loss is the entry point but not the only measure that registers at the board level. The compounding wave produces additional impacts the C-suite recognizes and acts on, even when the dollar figure is harder to pin down precisely.
Audit and regulatory exposure is the fastest path to board-level engagement, because it converts structural problems into named legal liability. The compounding wave produces specific audit failures: SOX inability to defend financial statement assumptions traced through unprovenanced AI outputs; SEC AI-disclosure non-compliance under the 2024 guidance; EU AI Act high-risk-system documentation requirements that cannot be met when underlying training inputs lack provenance; GDPR data-lineage requirements that break at any prior-wave shadow boundary. Boards do not debate audit findings. They mandate remediation. A documented exposure map that surfaces audit-risk concentrations is the single most efficient board-engagement instrument available.
Cyber and breach risk concentration is the next layer. The compounding stack creates attack-surface multiplication that single-wave security tools do not measure. AI agents operating across BYOD endpoints with SaaS API access and spreadsheet inputs produce a threat surface that is the product of the four waves’ individual exposures, not the sum. The 2024 Verizon Data Breach Investigations Report shows that breaches involving multi-system credential pivots — which is exactly what AI-on-shadow-stack produces — have average dwell times forty percent longer and average costs sixty percent higher than single-system breaches. Cyber insurance carriers are starting to ask substrate-diagnostic questions in their renewal questionnaires. CISOs who cannot answer those questions face uninsurable risk concentrations.
Decision velocity erosion is the operational metric the COO and CEO feel most viscerally. The compounding stack does not just produce errors. It produces decision latency, because every cross-wave handoff requires reconciliation, every AI output requires manual provenance checking, every escalation requires assembling context from four substrate layers. Organizations report that time-to-decision on routine procurement, finance, and operational decisions has increased twenty to forty percent since AI deployment began, despite the AI being theoretically faster. The substrate is the bottleneck. The AI productivity narrative is being undermined by the substrate it is operating against.
Strategic optionality loss is the impact boards care about most when they understand it, even though it is the hardest to price precisely. The substrate constraint reduces the range of strategic moves the organization can make. M&A integration becomes harder when the acquired company’s substrate compounds with the acquirer’s substrate. New market entry becomes harder when the data lineage required to defend AI-driven decisions in a new jurisdiction does not exist. Vendor consolidation becomes harder when the contractually governable shadow has not been mapped. Boards care about optionality because optionality is the primary thing the CEO is paid to preserve, and the substrate condition is constraining optionality in ways that have not yet been named to most boards.
The deeper finding is that the four waves have a collective impact on strategic optionality, and for most enterprises this aggregate number has not been calculated — because no one has been tracking the four-wave impact as an integrated phenomenon. Acquisition optionality eroded by substrate incompatibility. Market-entry optionality foreclosed by data-lineage gaps. Vendor consolidation optionality blocked by unmapped operational dependencies. Each wave has been contributing to the aggregate erosion for thirty-five years, and the directional estimate for a mid-cap enterprise typically runs two to four times the absorbed value loss the deflator measures on AI specifically. The number is structurally large, structurally invisible in current financial reporting, and structurally tractable only when the four-wave attribution makes it traceable to specific substrate decisions made in specific waves. The framework’s measurement instrument for this cost category — the Hansen Optionality Loss Estimate™ — will be developed in a forthcoming post. The board-level point now is that the cost exists, it is material, and it has been compounding longer than most current AI governance frameworks have existed.
A further structural distinction is worth surfacing. The cost categories the framework measures so far — the realized value loss the deflator captures and the strategic optionality loss the forthcoming Hansen Optionality Loss Estimate™ will measure — are linear in their analytical structure. They aggregate continuously and remain at least partially recoverable through substrate remediation. A separate cost category exists in non-linear lost opportunity events: discrete strategic foreclosures that occurred at specific moments because the substrate condition prevented the enterprise from recognizing or acting on the window when it was open. Acquisitions that became impossible when a competitor moved first. Market windows that closed. Technology platform decisions that locked in substrate that subsequent waves could not retrofit. These events are typically much larger in magnitude than the linear loss categories, and they are structurally unrecoverable — substrate remediation going forward cannot recover an opportunity that has already passed. The framework will develop a measurement approach for non-linear loss in a separate piece. The board-level point now is that the linear cost categories the framework currently measures are likely the smaller portion of the total substrate inheritance.
Senior practitioner retention is the leading indicator that registers eighteen to twenty-four months ahead of the financial impact. Senior procurement, finance, IT, and operational practitioners — the people who actually do the work — know the substrate is broken. They are watching AI initiatives fail in real time and being asked to defend the failures with explanations they don’t believe. The cost is not visible in the compensation line. It is visible in the resignation rate of the most experienced people, which the CHRO reports to the board with explanations that do not name the substrate. Senior practitioner turnover is one of the cleanest leading indicators that the compounding stack is producing organizational damage. It precedes the financial impact, and it is recoverable from existing HR data.
Reputational and customer-trust risk is the impact that becomes visible to the board only when it has already produced customer-visible failures. When AI-driven decisions surface as wrong in customer-visible ways — a denied claim, a mispriced quote, a misrouted procurement, a hallucinated contract term — the customer does not blame the AI. They blame the brand. The compounding stack produces a higher rate of customer-visible AI errors than the AI deployment alone would predict, because the errors are inheriting Wave 1, 2, and 3 substrate problems the customer never sees. NPS declines and customer-effort score increases that correlate with AI deployment milestones are leading indicators of substrate-driven trust erosion.
Enterprise risk-adjusted EBITDA is the metric that integrates the others into the language the board uses for capital allocation decisions. If the compounding shadow is absorbing $25 million to $40 million of value per year on a $200 million EBITDA base, the risk-adjusted EBITDA is materially overstated, and the multiples being used to value the business are calibrated to the wrong number. For public companies, this becomes an SEC-disclosable item once the framework is applied with rigor. For private companies, it becomes a transaction-readiness item that affects valuation in any contemplated liquidity event. Either way, it is a metric the board cannot afford to leave unmapped.
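The overstatement claim is straightforward arithmetic. This sketch assumes the $200 million EBITDA base and the $25 million to $40 million drag range cited above:

```python
# Share of reported EBITDA represented by the absorbed drag (illustrative).
EBITDA_BASE = 200_000_000
DRAG_LOW, DRAG_HIGH = 25_000_000, 40_000_000  # annual absorbed value loss

share_low = DRAG_LOW / EBITDA_BASE    # 12.5%
share_high = DRAG_HIGH / EBITDA_BASE  # 20.0%
print(f"Risk-adjusted EBITDA overstated by {share_low:.1%} to {share_high:.1%}")
```

A twelve-to-twenty percent overstatement on the number the multiples are applied to is material by any board's standard.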
What the board should ask
The question the board needs to ask is no longer “how do we govern AI?”
It is: What is the cumulative shadow exposure our AI deployment will inherit, what is the realized-value drag the compounding wave is currently producing, and which combination of disciplines is going to address each layer of it?
That question changes the scope of work, the sequence of engagements, and the criteria for evaluating success. It also rules out almost every off-the-shelf AI readiness or shadow IT remediation offering currently on the market, because none of them are scoped to the multi-wave reality and none of them measure the compounding deflator.
The Compounding Technology Shadow Wave™ is the structural condition. The deflator is the financial measurement. Phase 0™ surfaces the substrate condition before the AI deployment lands on top of it. The two-tier resolution architecture — diagnostic-plus-relational governance plus the four operational disciplines — converts the surfaced exposure into resolvable handoffs.
What changes in 2026 is the cadence at which the absorbed cost becomes visible. What does not change is the underlying structure that has been producing the absorbed cost for thirty-five years.
The board’s question is not whether the compounding shadow exists. The evidence is documented across nineteen years of contemporaneous archive material and is corroborated by every major AI failure rate study currently in print. The board’s question is whether to surface the cost in the current capital allocation cycle or to absorb it into another year of overstated risk-adjusted EBITDA and unrealized AI value.
The math is recoverable. The deflator is calculable. The boards that surface this in 2026 will be operating against a measured substrate. The boards that absorb it for another year will be deferring a structural drag that grows every quarter as agentic deployment expands the surface across which the compounding wave produces consequences.
The financial argument is now on the page. The capital allocation question follows.
And to the headline question: yes, the failure of most current AI initiatives was substantially decided by what was done in 2016 — and in 2012, and 2007, and 1990 — when each prior wave’s substrate was institutionalized without the diagnostic instruments needed to surface what would later become the compounding cost. That history is the framework’s Real-World Condition Substrate™. The substrate cannot be unwound retroactively. It can be diagnosed, scoped, and resolved going forward, which is what changes the realized value trajectory of the AI initiative the board is currently funding.
What was decided in 2016 is the substrate the AI is now landing on. What gets decided in 2026 is whether the substrate is surfaced before the AI deployment compounds with it, or after. That decision, alone, can move the deflator from 0.36 to 0.54 within nine months — which does not require new AI models. It requires surfacing and remediating enough of the Wave 1 through 3 shadow stack that the substrate quality coefficients move from 0.80, 0.70, and 0.65 toward 0.90, 0.80, and 0.75. The math is straightforward. The recovery is enough realized value to fund the next strategic move the substrate condition is currently constraining.
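The before-and-after scenario reduces to the same multiplication. The target coefficients below are the illustrative remediation values named in the text, not guaranteed outcomes:

```python
# Before/after deflator comparison for the remediation scenario described above.
# Coefficient targets are illustrative assumptions, not measured results.
import math

before = [0.80, 0.70, 0.65]  # current illustrative substrate quality
after = [0.90, 0.80, 0.75]   # post-remediation targets

deflator_before = math.prod(before)  # 0.364, rounds to 0.36
deflator_after = math.prod(after)    # 0.54
recovered_share = deflator_after - deflator_before

print(f"Deflator moves from {deflator_before:.2f} to {deflator_after:.2f}")
print(f"Recovered value on a $13M AI line: ${13_000_000 * recovered_share:,.0f}")
```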
The boards that act on this in the current capital allocation cycle will not eliminate the substrate inheritance. They will surface it, scope it, and put it on a documented remediation trajectory. That is what changes the math. That is what makes the next AI initiative produce something closer to its theoretical value rather than something closer to a third of it.
The question the headline asks is structural. The answer the framework provides is operational.
—30—
Compounding Technology Shadow Wave™ · Hansen Deflator Formula™ · Hansen Optionality Loss Estimate™ · Phase 0™ · Hansen Fit Score™ (HFS™) · RAM 2025™ · Real-World Condition Substrate™ · Strand Commonality™ · ARA™ · Implementation Physics™ · Hansen Models™ · Founder: Jon W. Hansen · hansenprocurement.com
Is The Failure Of Your AI Initiative Already Decided By What You Did In 2016?
Posted on May 9, 2026
The Compounding Technology Shadow Wave™, the deflator math, and the financial argument for the CFO and the board.”
The question in the headline is the structural one this piece exists to answer. The answer, for most enterprises currently funding AI initiatives, is yes — the failure is already substantially decided, and the deciding decisions were made in the years before AI was on the board’s agenda. The structural reason is what the Compounding Technology Shadow Wave™ framework names. The financial cost is what this piece quantifies.
By 2016, three of the four technology waves were operationally significant in most enterprises. Spreadsheets had been load-bearing for twenty-six years. BYOD was nine years into adoption and approaching saturation. SaaS sprawl was four years into its trajectory and was already producing the integration patterns that would later be diagnosed as the SaaS rationalization problem. Most current AI initiatives are landing on substrate decisions that were finalized roughly in the 2014 to 2018 window. The architecture decisions, the contract terms, the access patterns, the data hygiene practices, and the governance instruments that are now constraining AI deployment outcomes were calibrated for a pre-AI environment that no longer exists.
That is what makes the AI failure rate structurally predetermined for most enterprises. Not the AI itself. The substrate the AI is being deployed against, which was built when AI was not yet a factor in any of the relevant decisions.
This piece converts that structural diagnosis into the financial language the CFO and the board operate in.
The conversion is necessary because the diagnostic argument, as structurally accurate as it is, does not on its own commission action. Boards act on dollars, on risk-adjusted EBITDA, on audit findings, on capital allocation defensibility. The argument that the enterprise has a multi-wave substrate condition is true, but the structural truth does not move the engagement until it is expressed in the financial vocabulary the board uses to allocate capital.
What follows is that conversion. There is a formula at the center of it that is novel enough to be quotable, defensible enough to be auditable, and large enough at the dollar-figure level to be unignorable at the board level.
The two-condition test that frames the financial argument
Stephen Ashcroft’s contribution to the framework — surfaced in the comment thread on the Compounding Technology Shadow Wave™ post — is the load-bearing test for whether any enterprise governance posture is actually working. Governance only works when it runs at delivery pace. To which the framework adds: governance only works when it operates against a recognized substrate. Both conditions must be met. Neither is sufficient alone.
A governance instrument that runs at delivery pace against an unrecognized substrate is enforcing rules against an assumed reality that does not match operational reality. Compliance reports look clean; the operational activity is unconstrained. A governance instrument that recognizes the substrate but runs on quarterly cadence is identifying conditions correctly but cannot constrain them at the speed they manifest. Compliance reports identify exposure; remediation arrives too late to matter.
Most current AI governance configurations fail at least one of these tests. AI governance platforms tend to operate at delivery pace but treat the substrate as AI-only, missing the four-wave compounding. Substrate-aware programs (shadow IT remediation, SaaS rationalization) tend to operate on procurement-cycle cadence, recognizing exposure correctly but failing to constrain it operationally. The configurations that satisfy both conditions are structurally rare, which is why the financial cost of the substrate condition is large enough to warrant board-level attention.
That financial cost is what this piece quantifies.
The direct cost components — already on the books
Start with what is already absorbed into existing financial statements. Each of the four waves produces measurable direct costs that sit in the financial systems today, recoverable without new methodology.
Wave 3 SaaS waste is the cleanest single number. Industry benchmarks put unused or underutilized SaaS licenses at twenty-five to forty-four percent of total SaaS spend. With average per-employee SaaS spend at $9,643 in 2026, a five-thousand-employee enterprise carries between $12 million and $21 million per year in directly identifiable SaaS waste. Recoverable from the procurement spend file in an afternoon.
Wave 1 spreadsheet errors, Wave 2 BYOD-era breach exposure, and Wave 4 AI implementation failure costs are similarly documented across published research. Forrester and Ventana put material spreadsheet errors in eighty-eight percent of business spreadsheets, with median per-error financial impact in the high six figures and tail events extending into the billions (the JP Morgan London Whale loss of $6.2 billion, traced to a copy-paste error in a Value-at-Risk spreadsheet, is the most-cited tail event). IBM’s Cost of a Data Breach report puts the average breach at $4.88 million, with roughly thirty percent now attributed to BYOD-era access patterns. Stanford HAI documents AI initiative failure rates between forty and sixty percent, against McKinsey’s median enterprise AI investment of $13 million.
These direct costs aggregate to substantial figures on their own. For a five-thousand-employee mid-market enterprise, the directly recoverable absorbed cost across Waves 1 through 3 typically runs $20 million to $35 million per year. That is before any compounding-wave analysis is applied. It is the floor.
The compounding wave is what sits on top of the floor — and on top of the AI investment specifically, where it produces the structural finding the framework converts into board-level vocabulary.
The Hansen Deflator Formula™ — the analytical contribution
The structural insight underneath the framework is that compounding cost is multiplicative, not additive. It cannot be measured by summing the waves’ direct costs. It has to be measured as a deflator — the percentage by which each prior wave’s unresolved condition reduces the realized value of the current wave’s investment.
The Hansen Deflator Formula™:
Theoretical AI value is what the original business case projected — the EBITDA impact, cost reduction, or revenue acceleration the AI initiative was designed to produce if the substrate were clean. Substrate quality is a coefficient between 0 and 1 representing the fraction of the prior wave’s data, infrastructure, or governance that is in a state the AI deployment can actually use.
For the typical mid-market enterprise, illustrative substrate-quality coefficients — derived from observed operational ranges and published benchmark synthesis, not universal constants — typically run as follows. Wave 1 substrate quality (the fraction of spreadsheet inputs that are clean) sits around 0.80, reflecting the eighty-eight percent error rate documented for two decades. Wave 2 substrate quality (the fraction of BYOD-era access patterns that are governable through current MDM and identity instruments) sits around 0.70. Wave 3 substrate quality (the fraction of SaaS data that is provenanced and integrated) sits around 0.65, reflecting the documented gap between sanctioned and shadow SaaS deployment.
Multiply those coefficients: 0.80 × 0.70 × 0.65 = 0.36.
That is the deflator. An AI deployment with a theoretical value of $10 million is producing $3.6 million of realized value. The remaining $6.4 million is being absorbed by the compounding shadow.
For the typical mid-market enterprise, the Hansen Deflator Formula™ produces a compounding deflator in the 0.30 to 0.45 range. That means an unmodified AI initiative is realizing thirty to forty-five percent of its theoretical value, with fifty-five to seventy percent absorbed by substrate-driven friction.
The Hansen Deflator Formula™ is the structural finding the framework converts into board-level vocabulary. The coefficients are not universal constants. They are illustrative ranges that Phase 0™ measures empirically for a specific enterprise’s substrate. The structural claim — that the compounding cost is multiplicative — holds whether the coefficients for any specific enterprise sit at the typical ranges, the high end, or the low end. What changes by enterprise is the size of the absorbed value loss, not whether the loss exists.
The board-level dollar figure
For a five-thousand-employee enterprise spending $13 million per year on AI initiatives — which is the McKinsey median for enterprises of that scale — with the typical four-wave shadow stack, the math produces a specific finding:
AI value being absorbed by the compounding shadow: $7 million to $9 million per year, on the AI line alone.
Add the directly measurable waste from the prior three waves (the SaaS waste, the spreadsheet error costs, the BYOD-era breach exposure), and the total enterprise drag from the unresolved compounding stack runs $25 million to $40 million per year for a mid-cap enterprise. The AI portion grows every quarter as deployment expands. The prior-wave portion is structurally stable until it is explicitly addressed.
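The $7 million to $9 million band on the AI line follows directly from applying the 0.30 to 0.45 deflator range to the $13 million spend figure. A minimal sketch, using only the numbers already stated above:

```python
# Hypothetical illustration of the absorbed-value band described in the text:
# $13M annual AI spend, deflator in the 0.30-0.45 range.
ANNUAL_AI_SPEND = 13_000_000  # the cited McKinsey median for a 5,000-employee enterprise
DEFLATOR_LOW, DEFLATOR_HIGH = 0.30, 0.45

# Absorbed value = spend x (1 - deflator); the band comes from the deflator range.
absorbed_high = ANNUAL_AI_SPEND * (1 - DEFLATOR_LOW)   # roughly $9.1M at the low deflator
absorbed_low = ANNUAL_AI_SPEND * (1 - DEFLATOR_HIGH)   # roughly $7.2M at the high deflator

print(f"Absorbed AI value: ${absorbed_low / 1e6:.1f}M to ${absorbed_high / 1e6:.1f}M per year")
```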
That is a number a board can act on. It is not abstract. It is not theoretical. It is recoverable from existing financial systems plus a documented coefficient analysis. Every input is auditable. Every assumption is defensible. The aggregate finding is large enough that it cannot be reasonably ignored once it is surfaced.
For most boards, $25 million to $40 million per year of absorbed value loss is an item that, properly measured and presented, becomes a capital allocation priority. It is not a discretionary remediation question. It is a structural drag on enterprise value that affects the multiples being applied to the business.
Across the thirty-five-year period the framework spans, the cumulative cost of substrate inheritance compounds to a substantial fraction of total enterprise value. The companion chart shows three trajectories: the path under continued substrate inheritance, the path that continuous substrate diagnosis would have produced, and the recovery trajectory available from 2026 forward through Phase 0™ and the resolution architecture. The gap between the trajectories is what the framework is naming as the compounding shadow’s accumulated cost. The chart is illustrative, calibrated to a representative mid-cap enterprise — but the structural shape holds across enterprise scales, because the deflator math operates on the same multiplicative principle regardless of baseline size.
The other measurable C-suite-recognized impacts
Financial loss is the entry point but not the only measure that registers at the board level. The compounding wave produces additional impacts the C-suite recognizes and acts on, even when the dollar figure is harder to pin down precisely.
Audit and regulatory exposure is the fastest path to board-level engagement, because it converts structural problems into named legal liability. The compounding wave produces specific audit failures: SOX inability to defend financial statement assumptions traced through unprovenanced AI outputs; SEC AI-disclosure non-compliance under the 2024 guidance; EU AI Act high-risk-system documentation requirements that cannot be met when underlying training inputs lack provenance; GDPR data-lineage requirements that break at any prior-wave shadow boundary. Boards do not debate audit findings. They mandate remediation. A documented exposure map that surfaces audit-risk concentrations is the single most efficient board-engagement instrument available.
Cyber and breach risk concentration is the next layer. The compounding stack creates attack-surface multiplication that single-wave security tools do not measure. AI agents operating across BYOD endpoints with SaaS API access and spreadsheet inputs produce a threat surface that is the product of the four waves’ individual exposures, not the sum. The 2024 Verizon Data Breach Investigations Report shows that breaches involving multi-system credential pivots — which is exactly what AI-on-shadow-stack produces — have average dwell times forty percent longer and average costs sixty percent higher than single-system breaches. Cyber insurance carriers are starting to ask substrate-diagnostic questions in their renewal questionnaires. CISOs who cannot answer those questions face uninsurable risk concentrations.
Decision velocity erosion is the operational metric the COO and CEO feel most viscerally. The compounding stack does not just produce errors. It produces decision latency, because every cross-wave handoff requires reconciliation, every AI output requires manual provenance checking, every escalation requires assembling context from four substrate layers. Organizations report that time-to-decision on routine procurement, finance, and operational decisions has increased twenty to forty percent since AI deployment began, despite the AI being theoretically faster. The substrate is the bottleneck. The AI productivity narrative is being undermined by the substrate it is operating against.
Strategic optionality loss is the impact boards care about most when they understand it, even though it is the hardest to price precisely. The substrate constraint reduces the range of strategic moves the organization can make. M&A integration becomes harder when the acquired company’s substrate compounds with the acquirer’s substrate. New market entry becomes harder when the data lineage required to defend AI-driven decisions in a new jurisdiction does not exist. Vendor consolidation becomes harder when the contractually governable shadow has not been mapped. Boards care about optionality because optionality is the primary thing the CEO is paid to preserve, and the substrate condition is constraining optionality in ways that have not yet been named to most boards.
The deeper finding is that the four waves have a collective impact on strategic optionality, and for most enterprises this aggregate number has not been calculated — because no one has been tracking the four-wave impact as an integrated phenomenon. Acquisition optionality eroded by substrate incompatibility. Market-entry optionality foreclosed by data-lineage gaps. Vendor consolidation optionality blocked by unmapped operational dependencies. Each wave has been contributing to the aggregate erosion for thirty-five years, and the directional estimate for a mid-cap enterprise typically runs two to four times the absorbed value loss the deflator measures on AI specifically. The number is structurally large, structurally invisible in current financial reporting, and structurally tractable only when the four-wave attribution makes it traceable to specific substrate decisions made in specific waves. The framework’s measurement instrument for this cost category — the Hansen Optionality Loss Estimate™ — will be developed in a forthcoming post. The board-level point now is that the cost exists, it is material, and it has been compounding longer than most current AI governance frameworks have existed.
A further structural distinction is worth surfacing. The cost categories the framework measures so far — the realized value loss the deflator captures and the strategic optionality loss the forthcoming Hansen Optionality Loss Estimate™ will measure — are linear in their analytical structure. They aggregate continuously and remain at least partially recoverable through substrate remediation. A separate cost category exists in non-linear lost opportunity events: discrete strategic foreclosures that occurred at specific moments because the substrate condition prevented the enterprise from recognizing or acting on the window when it was open. Acquisitions that became impossible when a competitor moved first. Market windows that closed. Technology platform decisions that locked in substrate that subsequent waves could not retrofit. These events are typically much larger in magnitude than the linear loss categories, and they are structurally unrecoverable — substrate remediation going forward cannot recover an opportunity that has already passed. The framework will develop a measurement approach for non-linear loss in a separate piece. The board-level point now is that the linear cost categories the framework currently measures are likely the smaller portion of the total substrate inheritance.
Senior practitioner retention is the leading indicator that registers eighteen to twenty-four months ahead of the financial impact. Senior procurement, finance, IT, and operational practitioners — the people who actually do the work — know the substrate is broken. They are watching AI initiatives fail in real time and being asked to defend the failures with explanations they don’t believe. The cost is not visible in the compensation line. It is visible in the resignation rate of the most experienced people, which the CHRO reports to the board with explanations that do not name the substrate. Senior practitioner turnover is one of the cleanest leading indicators that the compounding stack is producing organizational damage. It precedes the financial impact, and it is recoverable from existing HR data.
Reputational and customer-trust risk is the impact that becomes visible to the board only when it has already produced customer-visible failures. When AI-driven decisions surface as wrong in customer-visible ways — a denied claim, a mispriced quote, a misrouted procurement, a hallucinated contract term — the customer does not blame the AI. They blame the brand. The compounding stack produces a higher rate of customer-visible AI errors than the AI deployment alone would predict, because the errors are inheriting Wave 1, 2, and 3 substrate problems the customer never sees. NPS declines and customer-effort score increases that correlate with AI deployment milestones are leading indicators of substrate-driven trust erosion.
Enterprise risk-adjusted EBITDA is the metric that integrates the others into the language the board uses for capital allocation decisions. If the compounding shadow is absorbing $25 million to $40 million of value per year on a $200 million EBITDA base, the risk-adjusted EBITDA is materially overstated, and the multiples being used to value the business are calibrated to the wrong number. For public companies, this becomes an SEC-disclosable item once the framework is applied with rigor. For private companies, it becomes a transaction-readiness item that affects valuation in any contemplated liquidity event. Either way, it is a metric the board cannot afford to leave unmapped.
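The overstatement arithmetic can be made explicit. As a sketch, using the representative figures above (the dollar amounts are the text's illustrative ranges, not measured values for any specific enterprise):

```python
# Hypothetical risk-adjusted EBITDA illustration using the figures in the text.
REPORTED_EBITDA = 200_000_000
ABSORBED_LOW, ABSORBED_HIGH = 25_000_000, 40_000_000  # annual absorbed-value drag

# Overstatement = absorbed drag as a fraction of reported EBITDA.
overstatement_low = ABSORBED_LOW / REPORTED_EBITDA    # 12.5%
overstatement_high = ABSORBED_HIGH / REPORTED_EBITDA  # 20.0%

print(f"Risk-adjusted EBITDA: ${(REPORTED_EBITDA - ABSORBED_HIGH) / 1e6:.0f}M "
      f"to ${(REPORTED_EBITDA - ABSORBED_LOW) / 1e6:.0f}M")
print(f"Reported EBITDA overstated by {overstatement_low:.1%} to {overstatement_high:.1%}")
```

On these assumptions, a board valuing the business at any EBITDA multiple is applying that multiple to a number that is twelve to twenty percent higher than the risk-adjusted figure.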
What the board should ask
The question the board needs to ask is no longer “how do we govern AI?”
It is: What is the cumulative shadow exposure our AI deployment will inherit, what is the realized-value drag the compounding wave is currently producing, and which combination of disciplines is going to address each layer of it?
That question changes the scope of work, the sequence of engagements, and the criteria for evaluating success. It also rules out almost every off-the-shelf AI readiness or shadow IT remediation offering currently on the market, because none of them are scoped to the multi-wave reality and none of them measure the compounding deflator.
The Compounding Technology Shadow Wave™ is the structural condition. The deflator is the financial measurement. Phase 0™ surfaces the substrate condition before the AI deployment lands on top of it. The two-tier resolution architecture — diagnostic-plus-relational governance plus the four operational disciplines — converts the surfaced exposure into resolvable handoffs.
What changes in 2026 is the cadence at which the absorbed cost becomes visible. What does not change is the underlying structure that has been producing the absorbed cost for thirty-five years.
The board’s question is not whether the compounding shadow exists. The evidence is documented across nineteen years of contemporaneous archive material and is corroborated by every major AI failure rate study currently in print. The board’s question is whether to surface the cost in the current capital allocation cycle or to absorb it into another year of overstated risk-adjusted EBITDA and unrealized AI value.
The math is recoverable. The deflator is calculable. The boards that surface this in 2026 will be operating against a measured substrate. The boards that absorb it for another year will be deferring a structural drag that grows every quarter as agentic deployment expands the surface across which the compounding wave produces consequences.
The financial argument is now on the page. The capital allocation question follows.
And to the headline question: yes, the failure of most current AI initiatives was substantially decided by what was done in 2016 — and in 2012, and 2007, and 1990 — when each prior wave’s substrate was institutionalized without the diagnostic instruments needed to surface what would later become the compounding cost. That history is the framework’s Real-World Condition Substrate™. The substrate cannot be unwound retroactively. It can be diagnosed, scoped, and resolved going forward, which is what changes the realized value trajectory of the AI initiative the board is currently funding.
What was decided in 2016 is the substrate the AI is now landing on. What gets decided in 2026 is whether the substrate is surfaced before the AI deployment compounds with it, or after. That decision, alone, can move the deflator from 0.36 to roughly 0.54 within nine months — which does not require new AI models. It requires surfacing and remediating enough of the Wave 1 through 3 shadow stack that the substrate quality coefficients move from 0.80, 0.70, and 0.65 toward 0.90, 0.80, and 0.75. The math is straightforward. The recovery is enough realized value to fund the next strategic move the substrate condition is currently constraining.
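The coefficient-improvement arithmetic can be checked in a few lines. The remediated targets below are the "toward" figures above, treated here as an assumption rather than a guaranteed outcome:

```python
from math import prod

# Illustrative current coefficients from the text, and the remediated targets
# the text describes (assumptions for this sketch, not guaranteed outcomes).
current = [0.80, 0.70, 0.65]     # Waves 1-3 substrate quality today
remediated = [0.90, 0.80, 0.75]  # after surfacing and remediating the shadow stack

before, after = prod(current), prod(remediated)
recovered_fraction = after - before  # extra realized value per theoretical dollar

print(f"Deflator before remediation: {before:.2f}")  # prints "0.36"
print(f"Deflator after remediation:  {after:.2f}")   # prints "0.54"
print(f"On a $10M initiative, roughly ${recovered_fraction * 10:.1f}M more realized value")
```

The recovery comes entirely from the substrate side of the multiplication; the AI term is untouched, which is the point the text is making.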
The boards that act on this in the current capital allocation cycle will not eliminate the substrate inheritance. They will surface it, scope it, and put it on a documented remediation trajectory. That is what changes the math. That is what makes the next AI initiative produce something closer to its theoretical value rather than something closer to a third of it.
The question the headline asks is structural. The answer the framework provides is operational.
—30—
Compounding Technology Shadow Wave™ · Hansen Deflator Formula™ · Hansen Optionality Loss Estimate™ · Phase 0™ · Hansen Fit Score™ (HFS™) · RAM 2025™ · Real-World Condition Substrate™ · Strand Commonality™ · ARA™ · Implementation Physics™ · Hansen Models™ · Founder: Jon W. Hansen · hansenprocurement.com