KPMG’s AI Prize Program Is Not an Innovation Strategy. It’s an Organizational Readiness Problem.

Posted on March 12, 2026



When a Big Four firm needs cash prizes to get its own consultants to adopt AI, the capability gap isn’t the issue — the readiness gap is.

Jon Hansen | Procurement Insights | March 2026


KPMG announced this week that it is launching an “AI Spark Innovation” awards program — offering cash prizes to consultants who develop ideas that enhance client value or operational efficiency using AI. According to Business Insider, the prizes would in most cases be “materially larger than an end-of-year variable compensation award.”

The announcement was framed as an innovation initiative. Read more carefully, it is something else entirely.

“We’re trying to figure out how we get all that grassroots innovation unlocked by trying to bring some more carrots forward to our folks.” — Rob Fisher, KPMG US Vice Chair of Advisory

That is not the language of an organization executing a strategy. That is the language of an organization that has the technology and does not know how to make its people use it productively.


What KPMG Is Actually Describing

KPMG is one of the largest consulting firms in the world. It has the resources to deploy AI tools across its entire practice. It has the vendor relationships, the training programs, the internal mandates. And yet — it needs to offer prize money materially larger than annual bonuses to get its own consultants to innovate with those tools.

That is an organizational readiness problem. Not an AI problem.

The capability exists. The platform works. The gap is between what the technology can do and what the organization can actually absorb, apply, and act on intelligently. That gap has a name: the readiness gap. It is the same gap that has driven an 80% implementation failure rate in enterprise technology for the better part of three decades. And cash prizes do not close it.

If AI adoption were delivering obvious value, the behavior would be self-reinforcing. You would not need to incentivize it with prizes larger than end-of-year compensation.


Rob Fisher Named the Real Problem — Without Realizing It

Fisher told Business Insider that measuring AI utilization is important, but “doesn’t tell you much about the quality of the work.”

That is a significant admission. It means KPMG has been tracking the wrong metric. Utilization — how often consultants open the tool, how many prompts they run, how many hours are logged — is a deployment measure. It tells you nothing about whether the AI is being used to produce better client outcomes. Nothing about whether the organizational conditions exist to act on what the AI surfaces. Nothing about whether the people using it are equipped to interpret, challenge, or apply the outputs intelligently.
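To make the distinction concrete, here is a minimal sketch of the two kinds of measurement side by side. The field names (prompts_run, hours_logged, client_outcome_delta) are hypothetical, invented for illustration; they are not KPMG's actual telemetry, and the numbers are made up.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EngagementRecord:
    """One consultant engagement. All fields are hypothetical."""
    prompts_run: int                       # how many AI prompts were executed
    hours_logged: float                    # time spent inside the tool
    client_outcome_delta: Optional[float]  # measured change in the client result, if anyone measured it

records = [
    EngagementRecord(prompts_run=340, hours_logged=62.0, client_outcome_delta=None),
    EngagementRecord(prompts_run=512, hours_logged=88.5, client_outcome_delta=0.0),
    EngagementRecord(prompts_run=128, hours_logged=19.0, client_outcome_delta=None),
]

# The utilization dashboard: always computable, so it always reports something.
avg_prompts = sum(r.prompts_run for r in records) / len(records)
print(f"Average prompts per engagement: {avg_prompts:.0f}")  # a deployment measure

# The outcome measure: mostly missing, because nobody instrumented it.
measured = [r.client_outcome_delta for r in records if r.client_outcome_delta is not None]
print(f"Engagements with a measured outcome: {len(measured)}/{len(records)}")  # a quality measure
```

The asymmetry is the point. The first number is always available, so it is what gets reported; the second is usually null, so it is what gets ignored. A dashboard built on the first will show green regardless of what the second would have said.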

This is precisely the distinction the Hansen Fit Score™ is built on: the gap between technology capability and the organizational readiness to convert that capability into outcomes. Gartner doesn’t measure it. Forrester doesn’t measure it. And apparently KPMG, despite its considerable resources, is just now discovering that utilization metrics don’t capture it either.


The IBM Connection: Two Data Points, One Diagnosis

On the same day KPMG’s announcement surfaced, IBM was promoting watsonx.data on LinkedIn — positioning AI agents that retrieve context-rich answers from governed, connected enterprise data, with “no replatforming required.”

IBM is solving a real technical problem: how to ground AI in governed enterprise data. That is legitimate and necessary.

But as I noted in response to that post, it still leaves the more important question unanswered. Once the answer is retrieved, is the organization ready to act on it intelligently?

Fisher’s comment answers that question from the inside of one of the world’s largest consulting firms: no. They are not. Not yet. And they are using prize money to try to get there.

These are not two separate stories. They are the same story told from opposite ends of the same gap:

  • IBM is solving for better data retrieval at the input layer
  • KPMG is trying to incentivize quality AI usage at the output layer
  • Neither is addressing whether the organization in between is ready to convert AI outputs into intelligent, outcome-oriented decisions

That middle layer, the readiness layer, is what Phase 0™ diagnoses. It is the only layer that determines whether the investment in the other two produces results or, as I noted in the IBM thread, a faster wrong answer.

Better retrieval from a constrained decision environment doesn’t fix the issue — it just produces a faster wrong answer.
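The failure mode is easy to sketch. In the toy example below (all data invented), the same naive retrieval function runs against two knowledge bases: one reflecting what is actually true inside the organization, and one narrowed to what the organization was willing to record. The retrieval is mechanically correct in both cases; the answer can only be as honest as the environment it draws from.

```python
# Toy illustration: retrieval quality cannot exceed the decision environment it draws from.

def retrieve(knowledge_base: dict, query: str) -> str:
    """Naive keyword lookup standing in for an enterprise retrieval pipeline."""
    for topic, answer in knowledge_base.items():
        if topic in query:
            return answer
    return "no answer found"

# What is actually true inside the organization.
ground_truth = {
    "supplier risk": "Supplier X missed 4 of its last 6 delivery windows.",
}

# What a constrained decision environment recorded after optimistic filtering.
constrained_view = {
    "supplier risk": "Supplier X is a preferred partner in good standing.",
}

query = "what is our supplier risk exposure?"
print(retrieve(ground_truth, query))      # the inconvenient truth
print(retrieve(constrained_view, query))  # delivered just as fast, just as confidently, and wrong
```

No amount of engineering at the retrieval layer changes the second output. The constraint was applied before the pipeline ever ran.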

Two Fortune-level organizations, on the same day, independently confirming the gap exists. Neither named it. Both pointed directly at it.


The Phase 0™ Irony

KPMG implements enterprise technology for clients. It is one of the firms that walks into organizations and advises them on AI readiness, technology selection, and transformation strategy. It charges significant fees for that guidance.

It is now publicly demonstrating that it cannot get its own consultants to adopt AI without a behavioral incentive structure.

This is not a criticism of KPMG specifically. It is an illustration of a structural truth: the readiness gap doesn’t respect organizational size or resources. It operates inside Big Four firms the same way it operates inside the clients those firms serve. The difference is that KPMG’s clients don’t have the option of announcing a prize program and calling it a strategy.

Phase 0™ thinking — done correctly — diagnoses the behavioral and structural conditions that determine whether technology produces outcomes. A prize program is not a diagnostic. It is a workaround for the absence of one.


What the Practitioner Base Should Take From This

If you are a CPO, a CIO, or a senior procurement leader watching this announcement, here is the question worth sitting with: if KPMG — with its AI partnerships, internal training infrastructure, and dedicated transformation practice — is still figuring out how to close the readiness gap internally, what does that tell you about the guidance it is providing to clients on the same question?

It does not mean the guidance is wrong. It means the readiness question is harder than the capability question, and that most of the industry — including its largest consulting firms — is still treating it as a secondary concern rather than the primary one.

The evidence on this point has been consistent for 27 years. Technology implementations fail not because the platforms don’t work, but because the organizational conditions required to extract value from them are not in place before deployment begins. KPMG’s prize program is the latest data point in that pattern. It will not be the last.


The Diagnostic Question No One Is Asking

Before an organization deploys AI — or any enterprise technology — the questions that determine outcomes are not about the platform. They are about the organization.

Is there a governed, outcome-oriented framework for how AI outputs will be interpreted and acted on? Are the people using the tools equipped to challenge a confident wrong answer? Does the data environment the AI is drawing from reflect actual organizational truth — or a narrowed, artificial version of it?

These are Phase 0™ questions. They need to be answered before the system goes live, not after the prize program launches and the utilization dashboard shows green.
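As a thought experiment only (this is not the Phase 0™ methodology, just the three questions above encoded as a pre-deployment gate), the logic would look something like this:

```python
from dataclasses import dataclass

@dataclass
class ReadinessCheck:
    """One readiness-style diagnostic question. Illustrative, not the Phase 0™ framework."""
    question: str
    satisfied: bool
    evidence: str  # who verified it, and how

def go_live_approved(checks: list) -> bool:
    """Deployment gates on readiness, not on utilization targets or incentive programs."""
    for check in checks:
        if not check.satisfied:
            print(f"BLOCKED: {check.question} ({check.evidence})")
            return False
    return True

checks = [
    ReadinessCheck(
        question="Is there a governed, outcome-oriented framework for acting on AI outputs?",
        satisfied=False,
        evidence="no named owner for output review",
    ),
    ReadinessCheck(
        question="Are users equipped to challenge a confident wrong answer?",
        satisfied=False,
        evidence="training covers tool operation, not output interrogation",
    ),
    ReadinessCheck(
        question="Does the data environment reflect actual organizational truth?",
        satisfied=False,
        evidence="never audited",
    ),
]

print(go_live_approved(checks))  # False: deployment waits; a prize program does not change this
```

The structure is the argument: the gate runs before go-live, every check carries named evidence, and nothing in it references a utilization dashboard.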

KPMG has the resources to ask them. The question is whether it has the structural incentive to do so — or whether the business model, like the prize program, is optimized for something other than the answer.


Jon Hansen is the Founder of Hansen Models™ and creator of the Hansen Fit Score™, Phase 0™, and RAM 2025™. He has been independently documenting procurement technology patterns through Procurement Insights since 2007.

Hansen Fit Score™ Vendor Assessment Series | procureinsights.com | Hansen Models™ (1001279896 Ontario Inc.)

-30-

BONUS SECTION: A Question From 2008 That Has Never Been Answered


In 2008, writing the SAP Procurement for Public Sector White Paper for the CATA Alliance, I documented one of the most revealing case studies in enterprise technology history.

Hewlett-Packard — a company actively positioning itself as a premier SAP integration services provider, marketing its ability to implement the same complex systems it was selling — had just lost $400 million in revenue from its own failed SAP rollout.

RedMonk analyst James Governor captured the irony precisely: “HP is trying to build an application management business to rival IBM’s. What better case study in proving your R/3 and Netweaver capability…” He then asked the question that should have echoed across the industry for decades: “Who would go to HP now for large scale SAP integration? The CEO just publicly said HP can’t effectively manage such a project.”

My own observation from that white paper was measured, but pointed:

“This being the case, if a high technology company who has extensive experience with the product can’t succeed, what does this say in terms of any organization’s chances for success?”

That question was never satisfactorily answered. It was absorbed, forgotten, and filed under “implementation challenges.”


Eighteen years later, it deserves to be asked again, loudly.

The same firms that served as implementation partners on the SAP rollouts documented in that 2008 white paper, rollouts that cost Hershey $112M, sent FoxMeyer into bankruptcy, cost HP $400M in revenue, hit Cadbury for £12M, and "blew up" $38M at King County, are now the firms acquiring 100+ AI companies and selling AI transformation services to the same enterprise market.

Their pitch has changed. The technology has changed. The cost of failure has escalated from hundreds of millions to billions.

What has not changed: no major consultancy has publicly demonstrated that their own internal AI readiness meets the standard they are selling to clients.

The HP parallel is not historical curiosity. It is a live structural warning.

If the advisors cannot prove it on themselves, the question from 2008 applies with full force in 2026:

What does this say in terms of any organization’s chances for success?


Source: SAP Procurement for Public Sector White Paper, Jon W. Hansen, CATA Alliance, 2008. Available by request at procureinsights.com.

The Hansen Fit Score™ vendor assessment methodology was developed in part to create the independent, longitudinal evidentiary standard that vendor-influenced analysis cannot provide. Learn more at Hansen Models™.

Posted in: Commentary