Are You Prepared to Gamble Your Career on AI Outcomes?
Posted on April 13, 2026
“Imagine buying a new Corvette, driving it into a lake, then blaming Chevy because it won’t float.
Replace Corvette / Chevy with LLM / any model provider. That is exactly what is happening across the industry.
People are deploying projects with technology they have not taken the time to truly understand.”
— Patrick Marlow, Google (2025)
In 2005, I met with the Senior Vice President at Duke Energy’s Charlotte head office.
He was excited about a new technology that would streamline the purchase of indirect materials — and he was promoting it. But quietly. From behind the scenes.
When I asked him why, his answer was one of the most candid things I have ever heard from a senior executive:
“I am retiring in the next couple of years. If I actively champion this from the front of the line and it fails, that will be my legacy at this company. If it succeeds, I won’t be around to benefit from it. The only way to protect my credibility is to champion it from the shadows — because whether it succeeds or fails won’t matter, because I will be retired.”
He was playing the long game. Strategically. Rationally. And with full awareness that technology initiatives in that era took years to succeed or fail — long enough for a carefully managed exit.
That calculus no longer holds.
In the AI era, failure comes faster. Much faster. And accountability arrives just as quickly.
For years, management doctrine has been built around one idea: fail fast.
Test. Learn. Adjust. Improve.
That worked when failure was contained — when outcomes took years to surface, and exposure could be managed over time.
That is no longer the environment we are operating in.
In the AI era, failing fast doesn’t reduce risk — it accelerates exposure.
Decisions scale quickly. Outcomes surface quickly. And accountability follows just as fast.
You don’t fail quietly anymore. You don’t fail slowly. And you don’t get distance between the decision and the consequence.
Which changes the question entirely.
Not the strategic one. Not the organizational one.
The personal one:
Are you prepared to gamble your future on AI outcomes?
Few people are ever moved by logic.
They are moved by fear.
Because here is the bottom line: without Phase 0™, your AI initiative is going to fail.
That is not a prediction. That is implementation physics — documented across a 27-year independent archive, with zero vendor sponsorships, through every major technology wave since 1998.
The Duke Energy SVP understood his exposure and managed it. He had the luxury of time.
You do not.
The only difference between his era and yours is this: as a C-Suite executive in 2026, you will be in the room when it happens.
Phase 0™ is not a framework. It is not a consulting engagement. It is the only instrument that tells you — before the commitment is made — whether what you are about to approve is going to scale capability or scale dysfunction.
The SVP in Charlotte protected his legacy by stepping back.
You protect yours by stepping forward — with the right diagnostic, at the right moment, before the wrong decision becomes your permanent record.
“Imagine buying a new Corvette, driving it into a lake, then blaming Chevy because it won’t float.
Replace Corvette / Chevy with LLM / any model provider. That is exactly what is happening across the industry.
People are deploying projects with technology they have not taken the time to truly understand.”
— Patrick Marlow, Google (2025)
Final Takeaway: If the C-Pieces are not aligned, failure isn’t just more likely — it’s structurally built in. And in the AI era, the executive who approved it doesn’t get to retire before the results come in.
Where does your organization sit right now? — The Phase 0™ Diagnostic
Jon Hansen is the founder of Hansen Models™ and Procurement Insights — 43 years, 3,300+ documents, zero vendor sponsorships.
© 2026 Procurement Insights. All rights reserved.
-30-
ProcureTech Implementation Success: With vs. Without Phase 0™
Likelihood of successful implementation across five solution providers — SAP Ariba, Coupa, Tealbook, ORO Labs, and ZIP — based on Hansen Models™ independent organizational readiness assessment. Not vendor-supplied data.
One final word — not mine.
Saurabh Mishra helped create the Stanford HAI AI Index. His career spans the World Bank, the IMF, the Bank for International Settlements, the Brookings Institution, the OECD’s Network of Experts on AI, and Sciences Po. He reached out to me.
Within twenty minutes, his verdict on Hansen Models™ was in:
“This is enticing. Brilliant. You have a better idea of what I’m doing than I do.”
He wasn’t responding to a pitch. He initiated the conversation. And the same truth surfaced that MIT, McKinsey, and Stanford HAI have each arrived at independently: the constraint is never the technology alone. It is the conditions, logic, and cross-boundary realities the technology has to operate within.
“There’s a bigger story here,” he said. “You have the pulse on the right part.”
— Saurabh Mishra, former Stanford HAI AI Index Leader