The Gartner Quick Wins Framework: What the Evidence Shows — and What It Doesn’t

Posted on March 29, 2026



Nobody ever got fired for buying IBM.

That phrase shaped enterprise IT purchasing decisions for a generation. It was not a statement about IBM’s superiority. It was a fear-based shortcut — the organizational equivalent of “if something goes wrong, at least you chose the safe option.” It worked because it made the cost of choosing differently feel greater than the cost of choosing wrong.

When the Gartner for IT LinkedIn post appeared this week — citing 6.7 months to success with a transition plan versus 8.9 months without one, alongside a Project Prioritization Matrix urging new executives to identify Quick Wins immediately — something felt familiar.

This post is not a critique of Gartner’s analytical capability. It is a documentation exercise. We searched for independent, peer-reviewed, longitudinal evidence supporting the specific claims attached to that graphic. What follows is an accurate record of what we found — and what we did not find.


What the claim actually is

The 6.7 versus 8.9 months statistic is cited as “Gartner research.” When you follow the link, it routes to their Executive FastStart™ product page — a paid service designed to help new executives accelerate their impact in the first 30, 60, and 90 days. The research behind the statistic is proprietary. The methodology is not disclosed. The sample size is not disclosed. The definition of “success” — whether it means tenure, stakeholder satisfaction, initiative completion, or sustained organizational outcome — is not disclosed.

We are not saying the statistic is wrong. We are saying we cannot verify it — and neither can you.


The “First 90 Days” framework — what the independent record shows

The 90-day executive transition framework traces primarily to Michael Watkins’ 2003 book published by Harvard Business Review Press. It is useful practitioner work. But it is a framework built on case studies and practitioner interviews — not a longitudinal controlled study. The 90-day number corresponds to one business quarter. It is a framing device.

The most significant independent research we found on quick wins specifically is “The Quick Wins Paradox,” published in Harvard Business Review in January 2009 by Mark Van Buren and Todd Safferstone of the Corporate Executive Board, based on a survey of 5,400 new leaders. Their finding: the most common mistake new executives make is focusing too heavily on quick wins. That intense focus causes them to lose sight of overall goals, alienate team members, and generate bigger losses down the road.

Separately, independent research on executive transitions consistently finds that 40 to 50 percent of senior outside hires fail to achieve desired results. If quick wins within 90 days were the primary determinant of success, that failure rate would correlate with the absence of quick wins. The research does not establish that correlation.


The three questions we could not answer

We looked specifically for answers to three questions the Gartner graphic raises but does not address.

First: what is the documented success rate of initiatives that were classified as Quick Wins or Major Projects using the 2×2 framework, measured not at completion but at sustained organizational outcome twelve to twenty-four months later? We did not find that data in any publicly available Gartner publication.

Second: what is the comparative success rate of initiatives that took longer than 90 days to initiate but were preceded by a structured organizational readiness assessment? The independent evidence on this question — from sources including McKinsey, Deloitte, and the Procurement Insights archive — consistently shows that pre-commitment diagnostic work improves sustained outcomes. But that research is rarely cited in frameworks that measure success by speed of initiative launch.

Third: does the 2×2 matrix account for process structural integrity and resulting governance architecture before classifying any initiative as a Quick Win? A Quick Win for an organization whose process structural integrity can sustain it is achievable. The same initiative in an organization without those conditions consumes resources, generates visible activity, and produces no sustained outcome — which then gets documented as a failed initiative in the next research cycle.
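The gap the third question points at can be sketched in a few lines: a conventional impact/effort classification, gated by a readiness check. Everything here is illustrative — the 0.5 thresholds, the boolean `ready` flag, and the "Hidden Major Project" label are assumptions for the sketch, not Gartner's actual scoring logic or the Phase 0™ diagnostic.

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    name: str
    impact: float  # 0..1, estimated organizational impact
    effort: float  # 0..1, estimated implementation effort
    ready: bool    # can current governance and process integrity sustain it?

def classify(init: Initiative) -> str:
    """Classify an initiative on a standard impact/effort 2x2,
    with a hypothetical readiness gate on the Quick Win quadrant."""
    high_impact = init.impact >= 0.5
    low_effort = init.effort < 0.5
    if high_impact and low_effort:
        # A Quick Win on paper is only available if readiness holds;
        # without it, the same initiative behaves like a major project.
        return "Quick Win" if init.ready else "Hidden Major Project"
    if high_impact:
        return "Major Project"
    return "Fill-In" if low_effort else "Thankless Task"

# The same initiative lands in different quadrants depending on readiness.
print(classify(Initiative("e-invoicing rollout", 0.8, 0.3, ready=True)))
print(classify(Initiative("e-invoicing rollout", 0.8, 0.3, ready=False)))
```

The point of the sketch is that the readiness flag, not the impact/effort estimate, is what determines which quadrant is actually available — which is precisely the input the published 2×2 does not ask for.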

If the evidence exists, it should be visible.


What Phase 0™ asks — at any stage

Phase 0™ is the organizational readiness diagnostic that applies at any stage of a technology or initiative commitment — before a decision is made, during implementation, or when current performance is not meeting expectations. It does not compete with prioritization frameworks. It determines whether any quadrant of the matrix is actually available to you given your current governance architecture, decision authority, and process integrity. The diagnostic question does not change across those stages. What changes is the window available to act on the answer.

An initiative that appears to be High Impact / Low Effort — a Quick Win — requires specific organizational conditions to actually deliver that profile. Without those conditions, the same initiative produces the opposite result: high effort, uncertain impact, and a leadership credibility deficit that makes the next initiative harder to execute.

The Hansen Fit Score™ then assesses whether the vendors and platforms being considered have demonstrated the behavioral alignment to perform under real-world organizational conditions — not in a controlled evaluation.

Neither instrument tells you which projects to choose. They tell you which projects are genuinely available to you given the organization you currently have.

The 1998 proof point — and what it actually shows

In 1998, an SR&ED-funded initiative at Canada’s Department of National Defence produced delivery performance that moved from 51% to 97.3% in 90 days — and held for seven consecutive years.

What made that possible was not the technology. The technology came later. What made it possible was a single governance question answered before anything was deployed.

The question was simple: what time of day do orders come in?

That question revealed the operational pattern the governance architecture needed to be designed around — when decisions actually had to be made, and whether the organization was structured to make them at that moment. Closing that gap, within existing technology limitations, produced 97.3% delivery performance.

RAM 1998 then scaled into that foundation. The algorithm evolved from a development-limited solution into a full production environment because the organizational readiness to receive it was already in place. The FTE reduction from 23 to 3 and the seven years of consecutive cost savings were the product of technology scaling into a prepared foundation — not of technology creating that foundation.

That is the 90-day argument Gartner’s framework does not make. Not: deploy fast and demonstrate quick wins. But: answer the process structural integrity question first. The technology will follow. And when it does, it will hold.

Phase 0™ is that same question — the one asked before the technology was deployed — applied as a formal diagnostic at any stage: before a commitment is made, during implementation when conditions change, or when current performance is not meeting expectations.


Why this matters for CIOs, CFOs, and CPOs specifically

The 90-day pressure on new executives is real. The desire to demonstrate early results is rational. The frameworks that promise to accelerate that process are commercially appealing precisely because the anxiety they address is genuine.

But a CIO who launches a Quick Win initiative into an organization whose process structural integrity and resulting governance architecture cannot sustain it does not produce a Quick Win. They produce a visible initiative that stalls — and the post-mortem lands on the function closest to the implementation, not on the framework that classified the initiative as low effort.

The archive has documented that pattern across 18 years and seven technology eras. The failure rate has not moved. And the frameworks being recommended to prevent those failures — without first asking whether the organization can execute them — have remained consistent throughout.

We are not saying Gartner’s framework is wrong. We are saying it is incomplete without the Phase 0 readiness diagnostic that determines which quadrant you actually have access to.

That gap is what the three questions above are designed to surface.


The Procurement Insights archive contains 3,300+ published documents spanning 18 years of independently produced, timestamped research — zero vendor sponsorships, zero paid analyst relationships. Phase 0™ is the organizational readiness diagnostic that precedes all technology and initiative commitments. The Hansen Fit Score™ identifies the conditions under which outcomes have consistently succeeded or failed.


Right now, you are being measured on 90-day optics using tools that were never validated on 12-to-24-month outcomes.

Before you classify your next initiative as a Quick Win, determine whether your organization can actually deliver it under real conditions.

If you are a CIO, CFO, or CPO navigating the 90-day pressure right now — whether you are selecting a platform, mid-implementation, or already questioning why outcomes are not matching expectations — this is the moment to find out where the readiness gap actually sits.

Book a 30-Minute Readiness Conversation with Jon Hansen — a preliminary diagnostic discussion with no sales pitch. Just an honest conversation about where your organization sits on the readiness spectrum.

Book your 30-Minute Readiness Conversation

hansenprocurement.com

-30-

Posted in: Commentary