“Imagine buying a new Corvette, driving it into a lake, then blaming Chevy because it won’t float.
Replace Corvette / Chevy with LLM / any model provider. That is exactly what is happening across the industry.
People are deploying projects with technology they have not taken the time to truly understand.”
— Patrick Marlow, Google (2025)
In 2005, I met with the Senior Vice President at Duke Energy’s Charlotte head office.
He was excited about a new technology that would streamline the purchase of indirect materials — and he was promoting it. But quietly. From behind the scenes.
When I asked him why, his answer was one of the most candid things I have ever heard from a senior executive:
“I am retiring in the next couple of years. If I actively champion this from the front of the line and it fails, that will be my legacy at this company. If it succeeds, I won’t be around to benefit from it. The only way to protect my credibility is to champion it from the shadows — because whether it succeeds or fails won’t matter, because I will be retired.”
He was playing the long game. Strategically. Rationally. And with full awareness that technology initiatives in that era took years to succeed or fail — long enough for a carefully managed exit.
That calculus no longer holds.
In the AI era, failure comes faster. Much faster. And accountability arrives just as quickly.
For years, management doctrine has been built around one idea: fail fast.
Test. Learn. Adjust. Improve.
That worked when failure was contained — when outcomes took years to surface, and exposure could be managed over time.
That is no longer the environment we are operating in.
In the AI era, failing fast doesn’t reduce risk — it accelerates exposure.
Decisions scale quickly. Outcomes surface quickly. And accountability follows just as fast.
You don’t fail quietly anymore. You don’t fail slowly. And you don’t get distance between the decision and the consequence.
Which changes the question entirely.
Not the strategic one. Not the organizational one.
The personal one:
Are you prepared to gamble your future on AI outcomes?
Few people are ever moved by logic.
They are moved by fear.
Because here is the bottom line: without Phase 0™, your AI initiative is going to fail.
That is not a prediction. That is implementation physics — documented across a 27-year independent archive, with zero vendor sponsorships, through every major technology wave since 1998.
The Duke Energy SVP understood his exposure and managed it. He had the luxury of time.
You do not.
The only difference between his era and yours is this: as a C-Suite executive in 2026, you will be in the room when it happens.
Phase 0™ is not a framework. It is not a consulting engagement. It is the only instrument that tells you — before the commitment is made — whether what you are about to approve is going to scale capability or scale dysfunction.
The SVP in Charlotte protected his legacy by stepping back.
You protect yours by stepping forward — with the right diagnostic, at the right moment, before the wrong decision becomes your permanent record.
Final Takeaway: If the C-Pieces are not aligned, failure isn’t just more likely — it’s structurally built in. And in the AI era, the executive who approved it doesn’t get to retire before the results come in.
Where does your organization sit right now? — The Phase 0™ Diagnostic
Jon Hansen is the founder of Hansen Models™ and Procurement Insights — 43 years, 3,300+ documents, zero vendor sponsorships.
© 2026 Procurement Insights. All rights reserved.
-30-
ProcureTech Implementation Success: With vs. Without Phase 0™
Likelihood of successful implementation across five solution providers — SAP Ariba, Coupa, Tealbook, ORO Labs, and ZIP — based on Hansen Models™ independent organizational readiness assessment. Not vendor-supplied data.
One final word — not mine.
Saurabh Mishra helped create the Stanford HAI AI Index. His career spans the World Bank, the IMF, the Bank for International Settlements, the Brookings Institution, the OECD’s Network of Experts on AI, and Sciences Po. He reached out to me.
Within twenty minutes, his reaction to Hansen Models™ was immediate:
“This is enticing. Brilliant. You have a better idea of what I’m doing than I do.”
He wasn’t responding to a pitch. He initiated the conversation. And the same truth surfaced that MIT, McKinsey, and Stanford HAI have each arrived at independently: the constraint is never the technology alone. It is the conditions, logic, and cross-boundary realities the technology has to operate within.
“There’s a bigger story here,” he said. “You have the pulse on the right part.”
— Saurabh Mishra, former Stanford HAI AI Index Leader

Holly S. Glennon
April 13, 2026
I love your analogy but, to be honest, it’s a bit scary. Everyone seems to want to race toward AI as if it will solve their current business challenges. I can’t help but ask: what’s the rush? The stakes are too high to overlook the essential steps that can prevent failure.
We’ve witnessed the consequences of being unprepared, which have led to years of unnecessary setbacks and failures. Why wouldn’t we take the time to do this right, at the right time, when we’re ready? The outcomes are significant, and I don’t believe anyone has the extra time, resources, and budget to deal with a preventable mess.
piblogger
April 13, 2026
Holly — I really appreciate this perspective.
You’re right — the concern isn’t just the pace, it’s the assumption that speed itself will solve the problem. In many cases, it does the opposite.
What we’ve seen across multiple technology waves is that organizations don’t fail because they moved too slowly or too quickly — they fail because they committed before they fully understood how their own systems actually operate under real conditions.
That’s where things tend to unravel.
The rush you’re pointing to often comes from a belief that the technology will resolve existing challenges. But if those challenges are rooted in decision structures, incentives, or process behavior, the technology doesn’t fix them — it scales them.
So I think your question — “what’s the rush?” — is exactly the right one.
The real opportunity isn’t slowing down for the sake of it. It’s taking a moment upfront to make sure the organization is ready to absorb what it’s about to introduce.
That’s usually the difference between progress and a very expensive reset.