When the Constraint Moved: A 2004 Paper, Three Charts, and the Question the Discourse Has Been Avoiding
Posted on May 7, 2026
Jon W. Hansen — Hansen Models™ · Procurement Insights™ — May 2026
“The argument was made before the conditions that would test it existed. The conditions have now arrived.”
— Jon W. Hansen, Implementation Physics™
In 2004, I published a paper titled Technology’s Diminishing Role In An Emerging Process-Driven World. The paper argued that as enterprise technology capability continued to advance, the variables that actually determined whether technology investments produced operational value were shifting — away from the technology itself, and toward the process integrity, organizational readiness, and operating conditions that surrounded the deployment.
The paper was largely ignored.
It was published into a market that was buying ERP optimism and accelerating SaaS enthusiasm. AI was not yet in enterprise consciousness. Process-centric thinking existed but mostly at the operational level, not the strategic one. Arguing that the limiting variable was moving away from technology capability ran against everything the contemporaneous discourse was rewarding.
Twenty-two years later, the data has caught up.
This post is not about the paper being right. It is about what twenty-two years of technology-era progression has now made operationally visible — and about whether the discourse is finally able to engage what the structural argument has been documenting since before AI was a category.
What the 2004 Paper Actually Argued
The paper’s central claim was that enterprise technology capability had reached a level where additional capability advances were producing diminishing operational returns, because the limiting variable was no longer in the technology layer itself. The variable was in the process layer that surrounded technology — the alignment between the technology’s design assumptions and the operating conditions the technology would actually encounter.
The paper made this argument before the conditions that would test it arrived. ERP was still being measured by feature parity. SaaS was being measured by subscription growth. The cloud era had not yet begun to challenge the assumption that the next platform would solve what the previous platform had not.
The structural claim that process integrity was overtaking technology capability as the dominant variable was, in 2004, ahead of where the market was prepared to engage.
What the paper did not predict — and what no analyst in 2004 could have predicted — was the specific mechanism through which the structural claim would eventually become operationally visible. That mechanism is the AI deployment dynamic the May 6 post documented.
AI is the first technology era in which the consequences of un-validated commitments arrive in months rather than years, inside the executive’s tenure window rather than after it. The compression of time-to-consequence is what finally forces the discourse to engage what the 2004 paper named.
What the Three Charts Now Show
The structural argument the 2004 paper made is now visualizable in a way it was not at the time. Three charts, each adding a layer of structural evidence, show what twenty-two years of independent technology and outcomes data have produced.
The three-line chart is the simplest visualization of the 2004 thesis. The blue line tracks technology capability across thirty years and seven technology eras. The gold line tracks initiative success rate across the same period. The two lines have moved independently. Technology capability has risen from roughly 15% to nearly 100%. The corresponding failure rate of 65–80% has remained constant across all seven technology eras the franchise has documented — from mainframe ERP and client-server systems through bolt-on best-of-breed, SaaS, cloud, AI-augmented automation, and Agentic AI.
This is the empirical floor of the 2004 thesis. If technology capability were the dominant variable, the success rate line should have moved with it. The success rate line did not move. The variable that determined success was somewhere else — in the layer the 2004 paper identified as process integrity.
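The decoupling the three-line chart describes can be sketched in a few lines. The era labels and the endpoints (capability rising from roughly 15% to nearly 100%; success holding in the 20–35% band implied by 65–80% failure) come from the paragraph above; the intermediate values are assumed interpolations for illustration, not data read from the chart itself.

```python
# Illustrative sketch of the three-line chart's decoupling claim.
# Endpoints come from the post; intermediate values are assumed
# interpolations, not data taken from the actual chart.

eras = [
    "Mainframe ERP", "Client-server", "Bolt-on best-of-breed",
    "SaaS", "Cloud", "AI-augmented automation", "Agentic AI",
]
capability = [15, 30, 45, 60, 75, 90, 98]  # % of capability index (assumed)
success    = [25, 28, 22, 30, 27, 33, 25]  # % success, i.e. 65-80% failure (assumed)

capability_gain = capability[-1] - capability[0]
success_gain = success[-1] - success[0]

print(f"Capability moved {capability_gain} points across {len(eras)} eras.")
print(f"Success moved {success_gain} points across the same eras.")
```

If capability were the dominant variable, the second number would track the first. Under these assumed values it does not move at all, which is the "empirical floor" the chart is said to show.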
The four-line chart adds two structural variables that the 2004 paper could not have foreseen but that operationalize its consequence. The red line is the executive accountability window — the time between an un-validated technology commitment and the moment its operational consequences become visible. In 1995, that window was approximately 36 months. In 2025, it is approximately 2 months.
The dashed purple line is average CEO tenure. In 1995, CEO tenure was approximately 9.5 years. In 2025, it is approximately 6.5 years.
The structural significance of these two lines moving simultaneously is this: the consequences of un-validated commitments now land inside the tenure window of the executive who made them. The cushion that previous technology eras provided — the gap between commitment and consequence that allowed for course correction, reassignment, or graceful exit — no longer exists.
The executive who commits to the AI deployment is the executive who will face whatever the deployment produces.
The five-line chart adds a second tenure line — the average tenure of the non-CEO C-Suite (CFO, CIO, CMO, CHRO). That line sits two to three years below the CEO line throughout the period, ending at approximately 4.4 years in 2025. The team the CEO is leading through AI commitment decisions has, on average, less time before the consequences land inside their windows than the CEO has before they land inside theirs.
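The tenure arithmetic in the preceding paragraphs can be checked directly. The figures below are the approximate values quoted above (36-month window and 9.5-year CEO tenure in 1995; 2-month window and 6.5-year CEO tenure in 2025; 4.4-year non-CEO C-suite tenure); the helper function is a hypothetical illustration, not part of any chart methodology.

```python
def window_share_of_tenure(window_months: float, tenure_years: float) -> float:
    """Fraction of an average tenure that elapses before an un-validated
    commitment's consequences become visible (smaller = lands earlier)."""
    return window_months / (tenure_years * 12.0)

# Approximate figures quoted in the post (illustrative, not chart data).
share_1995 = window_share_of_tenure(36, 9.5)  # ~0.32 of a CEO tenure
share_2025 = window_share_of_tenure(2, 6.5)   # ~0.03 of a CEO tenure
share_csuite = window_share_of_tenure(2, 4.4) # non-CEO C-suite, 2025

print(f"1995: consequences arrive {share_1995:.0%} of the way into a CEO tenure")
print(f"2025: consequences arrive {share_2025:.0%} of the way into a CEO tenure")
print(f"2025: {share_csuite:.0%} for the rest of the C-suite")

# The window compressed ~18x while tenure compressed only ~1.5x.
print(f"window compression: {36 / 2:.0f}x; tenure compression: {9.5 / 6.5:.2f}x")
```

The asymmetry is the point: the accountability window shrank by a factor of 18 while tenure shrank by less than half, so the consequence now lands well inside the tenure of every executive who made the commitment.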
Taken together, the chart progression shows, in operational form, what the 2004 paper predicted: as technology capability advanced, the variables that determined outcome would shift to the process and organizational layers. The chart shows the shift. It also shows what the 2004 paper could not predict: that the consequences of failure in those layers would eventually arrive on a timeline that put the executive’s career inside the consequence window.
The Structural Pivot
The line that names this most precisely came from a multimodel reading of the chart progression and the 2004 paper context, conducted as part of the franchise’s ARA™ RAM 2025™ multimodel validation methodology. The line is this:
“Technology did not diminish because it became weaker. It diminished because the real constraint moved somewhere else.”
This is the structural pivot the discourse has been refusing to make. The capability narrative — the next platform will solve what the previous platform did not — has been the dominant frame for thirty years because the discourse has been buying capability rather than diagnosing constraint. As long as the next platform was framed as the answer, the question of where the actual constraint lived did not need to be engaged.
The constraint moved a long time ago. The 2004 paper named the move when it was happening. The chart now visualizes where it moved to and what it produces.
In the AI era, an un-validated commitment can now destroy a career, a balance sheet, or an entire vendor relationship — inside a single executive tenure, sometimes inside a single quarter.
Where it moved to is the process layer — the layer that determines whether technology commitments are tested against documented historical pattern before they scale, whether organizational conditions can hold what the technology requires of them, whether the substrate exists for executives to verify their decisions before the consequences arrive to test them.
Phase 0™, ARA™, and RAM 2025™ multimodel validation methodology were built to operate inside this layer — to treat validation as a standalone discipline rather than as a feature the next platform will eventually include.
What it produces is the dynamic the May 6 post documented: senior executives making AI commitment decisions under simultaneous career exposure, audit exposure, vendor decision exposure, project delivery exposure, and financial return exposure, while lacking the substrate that would let them verify those decisions in advance of consequence. The fear that half of CEOs are carrying is empirically appropriate. It is what the constraint moving has produced.
What Continuous Documentation Actually Provides
This is the structural function of continuous independent documentation, and it is the function the 2004 paper inadvertently demonstrates by being twenty-two years old.
The argument the paper made in 2004 was a prediction. The conditions that would test the prediction did not yet exist. There was no way, in 2004, to know whether the structural claim about process integrity overtaking technology capability would hold across the next two decades. The paper could only state the claim and let time test it.
Twenty-two years later, the conditions exist. The claim has been tested across the seven technology eras the franchise’s archive documents — from mainframe ERP through Agentic AI. The pattern that the paper predicted — technology capability rising while outcome variables move to a different layer — has held across every era. The claim is no longer prediction. It is documented historical pattern.
This is what continuous documentation produces that no static analysis can. A 2004 prediction reviewed in 2026 is either validated or refuted by the intervening twenty-two years of evidence.
The Procurement Insights™ archive has been operating in this register since 2007, with the methodological foundation reaching back to the 1998 RAM deployment with the Department of National Defence — twenty-seven years of continuously published, contemporaneously documented practitioner observation across every technology era the field has produced.
The substrate is not the analytical capability. The substrate is the temporal record that makes the analytical capability testable.
This is what the May 6 post argued executives now need and most enterprise architectures do not have. This post, revisiting the 2004 paper, demonstrates what the May 6 post claimed. The validation discipline the franchise positions is not an asserted methodology. It is what twenty-two years of continuously documented structural argument actually produces when the present moment finally tests it.
The Question the Discourse Is Now Forced to Engage
The question the title raises has a sharper version. If the structural argument has been making itself for twenty-two years across seven technology eras, and if the data is now visualizing the consequence in the form of executive accountability windows collapsing inside shrinking tenure windows, what is the discourse going to do about it?
The discourse has options. It can continue to consume readiness frameworks and capability narratives — the AI-ready surveys, the maturity scales, the deployment acceleration prescriptions. This is the path the May 6 post described as the structural exit the discourse is refusing to acknowledge — more readiness as the answer to a question that is not about readiness. The path is comfortable because it does not require the discourse to engage the variable the 2004 paper named.
Or the discourse can engage the constraint where it actually moved to. This means treating validation discipline not as a feature the next platform will eventually include, but as the structural layer the next twenty years of executive decision-making depends on. It means treating continuously documented historical pattern not as an analytical curiosity, but as the substrate that converts AI commitment decisions from un-assessable bets to assessable propositions. It means asking, before scale, what the commitment is being tested against and whether the test is empirically grounded or rhetorically asserted.
The 2004 paper made an argument the discourse was not yet ready to engage. The 2026 charts show what the unengaged argument produced: thirty years of independent technology capability gain and flat outcome performance, with consequences now landing inside the executive’s own time in role. The question is no longer whether the argument was right. The question is whether the discourse, now that those consequences have been made visually unmistakable, can absorb what the structural argument has been showing all along.
The answer to that question is not in the franchise’s hands. It is in the hands of the executives, advisors, analysts, and institutions that have been operating inside the capability-narrative frame and now have to decide whether the visible compression of time-to-consequence is sufficient evidence to engage the validation discipline that the 2004 paper began naming.
What the franchise can offer is what twenty-two years of continuous documentation has produced — an archive, a set of frameworks, and a substrate that makes the verification possible. Whether the discourse engages it now or twenty more years from now, the substrate will continue to compound. Each additional year of contemporaneous documentation strengthens what the substrate can verify and sharpens what the analytical voice can assert.
The 2004 paper was published before the conditions that would test it existed. Those conditions have now arrived. The paper’s title — Technology’s Diminishing Role In An Emerging Process-Driven World — may be more relevant in 2026 than it was in 2004.
That is the question worth sitting with.
“Continuous Strands of Accuracy” — Jon Hansen
Hansen Models™ · Phase 0™ · Hansen Fit Score™ (HFS™) · ARA™ · RAM 2025™ · Human Language Interface™ · Learning Loopback Process™ · Hansen Strand Commonality™ · Implementation Physics™
Founder: Jon W. Hansen — hansenprocurement.com — procureinsights.com
-30-