Reimagining The Familiar (and Outdated) Iceberg Graphic

Posted on May 1, 2026



Why most discussions of agentic AI, digital transformation, and operating-model reform are arguing about 10% of the system — and what the other 90% looks like when you actually draw it.


If you have spent any time on LinkedIn over the past year, you have probably seen some version of the 10-20-70 iceberg. Andreas Horn’s framing of it — 10% algorithms above the waterline, 20% context and data in the upper submerged section, 70% people and process at the deepest section — is one of the cleaner articulations, and the model has become near-canonical in conversations about agentic AI deployment.

It is a useful model. It is also incomplete in a way that matters more than the model itself acknowledges.

What the Iceberg Graphic Gets Right

The conventional iceberg metaphor does real argumentative work. It makes the visible-versus-invisible distinction immediately intuitive. Algorithms get the headlines because they sit above the waterline. Context, data, people, and process do most of the actual work, and they sit below the waterline where most observers do not see them. Anyone who has watched an enterprise AI initiative fail despite an excellent algorithm has absorbed the iceberg’s central claim. The work below the waterline matters more than the work above it.

What the model does not address is everything that is not the iceberg.

What the Iceberg Graphic Quietly Assumes

Every iceberg in the world floats in an ocean. Every ocean in the world is held by a seabed. Remove the seabed and the ocean does not exist; remove the ocean and the iceberg does not exist. The conventional 10-20-70 model treats the ocean and the seabed as constants — present, stable, not worth discussing. The entire debate happens inside the iceberg.

This is the assumption that has produced thirty years of strategic-conversation déjà vu. We argue about the ratio of algorithms to context to people-and-process. We adjust the percentages. We redefine the categories. We propose new submerged layers. None of these adjustments change whether the iceberg can hold its position when conditions shift. None of them ask whether the ocean is the ocean we think it is. None of them ask whether the seabed is structurally sound enough to hold the ocean.

The iceberg can be structured any way you want. It can have ten percent algorithms or thirty percent or fifty. The internal composition does not determine whether the iceberg exists. The medium that holds the iceberg does, and the foundation that holds the medium does. Both of those structural elements are missing from the conventional graphic.

The Reimagined Graphic

The version I have built keeps the iceberg the same size as in the original, because the iceberg is what everyone already argues about and its recognizability at that scale is part of the point. What changes is what surrounds the iceberg.

The 90% Framework™ — the iceberg accounts for 10% of the system; the ocean (real-world conditions) and the seabed (Phase 0™ validation discipline) account for the other 90%. The math reconciles: 1 + 2 + 7 + 45 + 45 = 100.

The iceberg now accounts for ten percent of the system. Internally, the reimagined iceberg preserves Andreas's ratios of algorithms, context and data, and people and process, but at one percent, two percent, and seven percent of the actual decision space rather than ten, twenty, and seventy. Whatever ratio you prefer to argue about inside the iceberg, the iceberg itself accounts for ten percent of what determines outcomes.

The ocean accounts for forty-five percent. The ocean is the real world — the conditions the iceberg must operate in and cannot control. Supply disruptions. Regulatory shifts. Counterparty failures. Stakeholder behaviors that drift from charter commitments. Workflow timing that does not match what the org chart suggests. Incentive distortions that nobody has surfaced because nobody is looking for them. The ocean is everything the iceberg is exposed to and cannot manage from inside its own structure.

The seabed accounts for the other forty-five percent. The seabed is the discipline of validating the iceberg against the ocean before the iceberg is committed to the ocean. Phase 0™ is the operational name for that discipline. Assumption testing, real-world condition mapping, incentive distortion analysis, timing and workflow reality checks. The seabed determines whether the medium that supports the iceberg can hold under shifting conditions.

The math reconciles cleanly. One percent algorithms, two percent context and data, seven percent people and process, forty-five percent real-world conditions, forty-five percent validation discipline. The system adds to one hundred percent.
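For readers who want the arithmetic spelled out, here is a minimal sketch of the rescaling. It assumes nothing beyond the percentages already stated in this post: the original 10-20-70 internal ratios are multiplied by the iceberg's ten percent share of the whole system, and the ocean and seabed supply the two remaining forty-five percent shares.

\[
10\% \times (0.10,\; 0.20,\; 0.70) = (1\%,\; 2\%,\; 7\%), \qquad 1 + 2 + 7 + 45 + 45 = 100.
\]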

Why This Matters Beyond the Iceberg

This reframing applies to substantially more than agentic AI. Every major enterprise debate of the past three decades has happened inside an iceberg while the ocean and seabed went unmeasured.

ERP modernization arguments are iceberg arguments. The internal composition of the ERP — modules, integrations, customizations, governance — has been the substance of the debate since the late 1990s. The real-world conditions the ERP would have to operate in (supplier behavior, regulatory drift, organizational misalignment) and the discipline of validating the ERP against those conditions before commitment have been treated as constants. They were not constants. The Hershey, FoxMeyer, MFI, HP, Cadbury, and King County failures were ocean and seabed failures, not iceberg failures. The icebergs were technically sound. The medium they were committed to was not what the implementations assumed it was.

AI strategy arguments are iceberg arguments. The internal composition of the AI stack — model selection, training data, governance frameworks, prompt engineering, agent orchestration — is the substance of every analyst report and consulting deck published in 2026. The real-world conditions the AI workflows will have to operate in and the discipline of validating those workflows against those conditions before commitment are treated as constants. They are not constants. The replicated lakehouse failures unfolding right now in 2026 are ocean and seabed failures, not iceberg failures.

Digital transformation arguments are iceberg arguments. Operating model reform arguments are iceberg arguments. Every major procurement-technology platform debate of the past two decades has been an iceberg argument, including the SAP API policy debate that has dominated the past month.

The iceberg is what everyone argues about. The ocean and the seabed are what determine whether the iceberg can hold position. The conventional 10-20-70 model — and the conventional procurement-technology discourse, and the conventional AI-strategy discourse — addresses ten percent of the system. The other ninety percent is the work that has not yet been named.

Fifty Years of Icebergs Without Seabeds

The 10-20-70 procurement-technology iceberg is the latest in a long line. David McClelland drew one for competencies in 1973, in the foundational paper that launched the entire competency-modeling discipline. Edward Hall drew one for organizational culture in 1976, distinguishing visible cultural artifacts from the deeper structures that produced them. Peter Senge drew one for systems thinking in 1990, separating events from patterns from underlying structures. Spencer and Spencer elaborated the McClelland version in 1993, formalizing the surface-versus-depth distinction that has informed performance management ever since.

The metaphor has carried structural arguments about visible-versus-invisible work in organizations for more than fifty years. None of those iterations drew the seabed. None of them drew the ocean. The conventional iceberg has been incomplete in the same way for half a century, across categorically different domains — competency modeling, cultural analysis, systems thinking, performance management, and now agentic AI. The seabed and the ocean have been missing from the diagram every time.

That consistency matters. If the iceberg-without-seabed pattern were limited to one domain or one consultant or one decade, it could be dismissed as a local oversight. It is not a local oversight. It is a fifty-year intellectual habit that has shaped how organizations visualize complex systems across multiple disciplines, and the habit has been wrong in the same structural way every time.

Seven Eras of Documented Failure

This is not a theoretical claim. The Procurement Insights archive has tracked the rate of initiative failure across seven distinct technology eras, with continuous coverage from 2007 forward and, through the 2008 CATA Alliance white paper, primary-source documentation extending back to the late 1990s. The pattern is consistent enough across all seven eras to constitute longitudinal and contemporaneous proof that the iceberg as originally envisioned has never been applicable, because the iceberg as originally envisioned ignored the seabed and the ocean.

The seven eras are these. Centralized mainframe ERP in the late 1980s and early 1990s. Client-server ERP (R/3, PeopleSoft, Oracle Financials) from the mid-1990s through the early 2000s. Bolt-on best-of-breed from the late 1990s through the mid-2000s, the era of Hershey, FoxMeyer, MFI, HP, and Cadbury. Early SaaS and on-demand from the mid-2000s through the early 2010s, the era of the original Ariba SaaS rollout and the launch of Coupa, on which I authored what is likely one of the industry's earliest independent white papers in 2009. Cloud-native procurement from the early 2010s through the late 2010s, the era of large-scale cloud migration and mobile-first procurement applications. AI-augmented and first-wave automation from the late 2010s through the early 2020s, the era of robotic process automation, predictive analytics, and early machine learning deployments. And the current era, agentic AI and replicated lakehouses, which began materializing in 2024 and is now defining 2026.

Each era had its own iceberg-style conversation about internal composition. Each era produced documented failures at consistent rates — on the order of fifty-five to seventy-five percent of major initiatives, depending on which research methodology is applied and which technology era is being measured. The rate has been roughly constant across all seven eras, despite the underlying technology in each era being categorically different from the technology of the era before. That consistency is the empirical signal. If the failures were caused by the iceberg’s internal composition — the wrong algorithms, the wrong data architecture, the wrong change management — different eras with different internal compositions would produce different failure rates. They have not. The failure rate has been roughly constant for thirty years.

The constant is not the iceberg. The constant is the absence of the seabed and the ocean from the analytical frame. Each era arrived believing that this generation’s iceberg would succeed where the prior generation’s iceberg had failed. Each era discovered that the iceberg was not the determinative variable. The medium the iceberg had to operate in had shifted, the validation discipline that would have surfaced the shift had not been applied, and the iceberg failed for the same structural reason every prior iceberg had failed.

This is what longitudinal evidence looks like when the discipline finally produces it. Seven eras. Three decades. Continuous independent coverage. One consistent finding. The conventional iceberg model is not wrong; it is just describing the wrong thing.

The Reimagined Graphic Is Not the Last Word

The graphic I have built is one articulation of a structural claim that the discipline has been quietly waiting for someone to make. There will be other articulations. There will be sharper ratios, better metaphors, more precise vocabulary. The point is not the specific graphic. The point is that any honest reframing of how enterprise systems actually succeed or fail has to make the ocean and the seabed visible, with their own labels, with their own percentages, and at a scale matching the iceberg they surround.

When that happens, the conversations that have dominated the past fifteen years stop being the determinative conversations. They become one tenth of the determinative conversations. The other nine tenths is where the work is.

Phase 0™ is the operational name for the seabed. The Real-World Condition Substrate™ is the operational name for the medium against which Phase 0 validates. Together they account for ninety percent of whether the iceberg can hold its position when conditions shift.

The familiar iceberg graphic was not wrong. It was just the only graphic the discipline had. It is no longer the only graphic the discipline has.


Phase 0™ · Hansen Fit Score™ (HFS™) · ARA™ · RAM 2025™ · Real-World Condition Substrate™ · Hansen Models™ · Founder: Jon W. Hansen · hansenprocurement.com

-30-

Posted in: Commentary