Has Gartner Ever Explained Who “THEY” Are?

Posted on January 8, 2026



By Jon Hansen | Procurement Insights | January 8, 2026


A Note on the Graph

The chart above is a pattern index, not a forensic econometric dataset. The curves are normalized to 1995 = 100 and represent directional trajectories based on 30 years of documented implementations, market data, and archived case studies — not a single longitudinal study.
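
For readers who want the mechanics behind "normalized to 1995 = 100": each curve is rescaled so that its 1995 value equals 100, and every later value is expressed relative to that baseline. Here is a minimal sketch in Python using made-up placeholder figures, not the actual series behind the chart:

    # Illustrative only: constructing a pattern index where the base year = 100.
    # The raw figures below are placeholders, not the data behind the chart above.

    def to_index(series: dict[int, float], base_year: int = 1995) -> dict[int, float]:
        """Rescale a yearly series so the base year equals 100."""
        base = series[base_year]
        return {year: round(value / base * 100, 1) for year, value in series.items()}

    # Hypothetical raw values for one curve (e.g., predictions published per year).
    raw = {1995: 120, 2005: 480, 2015: 1500, 2024: 3600}

    print(to_index(raw))
    # {1995: 100.0, 2005: 400.0, 2015: 1250.0, 2024: 3000.0}

The same rescaling is applied to each curve independently, which is why the lines are comparable in shape but not in absolute units.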

Here’s what each line represents:

Gartner Prediction Volume tracks the explosion of technology forecasts, frameworks, Hype Cycles, and taxonomies published annually. This growth is undisputed and publicly documented.

Vendor & Consulting Revenue follows technology and consulting industry revenue over the same period. Gartner’s own revenue grew from negligible figures in the mid-1990s to over $6.2 billion in 2024. The broader ecosystem grew in parallel.

Implementation Failure Costs reflects the persistent 60–80% failure rate documented across ERP, e-procurement, digital transformation, and now AI initiatives. Panorama Consulting’s 2025 ERP Report cites 55–75% failure rates; Gartner itself predicts over 70% of ERP initiatives will fail to meet original business goals by 2027.

Prediction Accuracy Rate is the uncomfortable line. It measures not whether predictions sound plausible at publication, but whether they translate into sustained, post-implementation alignment with outcomes. By that measure, accuracy has stagnated: the industry produces more predictions than ever, yet the failure rate hasn't moved.

The insight isn’t that Gartner is always wrong. It’s that accuracy has not improved in proportion to volume, influence, or cost — while both vendor revenue and client failure costs rise in parallel.

That’s the Doom Loop of Anonymity: the “they” economy grows, the “you” outcomes don’t, and no one is structurally accountable for the gap.

The Predictions

This week, Gartner published its 2026 predictions:

“By 2028, agentic AI will autonomously make at least 15% of day-to-day work decisions across all industries.”

“By 2028, 90% of B2B buying will be AI agent intermediated, pushing over $15 trillion of B2B spend through AI agent exchanges.”

“By 2026, 40% of enterprise applications will include task-specific AI agents.”

Bold forecasts. Inevitable-sounding futures. The kind of statements that get quoted in board presentations and used to justify eight-figure technology investments.

But read them again. Slowly.


The Grammar of Evasion

Notice what’s missing?

Every prediction is written in passive voice:

  • Decisions “will be made” autonomously
  • Buying “will be” intermediated
  • Applications “will include” AI agents

Who is making these decisions? Who is intermediating these purchases? Who is including these agents?

The sentences have no subject. No actor. No one responsible.

This isn’t accidental. It’s structural.


The Missing Subject

In Gartner’s world, technology happens to organizations. AI agents arrive like weather — inevitable, external, beyond anyone’s control. The role of leadership is to prepare for impact, not to decide whether impact is appropriate.

But someone is making these decisions. Someone is:

  • Choosing to deploy AI agents
  • Deciding which decisions to automate
  • Determining how much autonomy to grant
  • Accepting accountability when it fails

Who?

Not practitioners. They don't appear in Gartner's predictions at all — except implicitly, as the surface the technology lands on. The "where," not the "who."

Not executives, specifically. They’re implied but never named, never held accountable in the sentence structure itself.

The subject is a ghostly “they” — the market, the industry, the inevitable march of progress.


The Convenient Disappearance

Here’s why this matters:

When predictions are made in passive voice, no one is accountable for the prediction.

If Gartner said, “Executives will deploy AI agents that autonomously make 15% of day-to-day decisions,” that creates accountability. Executives become the subject. Their judgment is on the line.

If Gartner said, “Organizations that assess readiness before deployment will achieve AI-driven efficiency gains,” that creates a condition. Success becomes contingent on something measurable.

But “decisions will be made autonomously”? That’s a weather forecast. No one is responsible for the weather.


The Other Prediction

Gartner's passive-voice predictions come with an interesting companion:

“Over 40% of agentic AI projects will be canceled by end of 2027 due to escalating costs, unclear business value, or inadequate risk controls.”

40% cancellation rate. But canceled by whom? Due to decisions made by whom?

Again, no subject.

And when those projects fail — when 40% of initiatives are canceled, when costs escalate, when business value remains unclear — who gets blamed?

Gartner already answered that question in 2024:

“Your Failing GenAI Initiative Is Your Fault.”

Suddenly, the passive voice disappears. Suddenly, there’s a subject in the sentence: You. The practitioner. The one who implemented what “they” predicted would happen.


The Gaslighting Cycle

Here’s the pattern:

Step 1: Prediction (Passive Voice)
"15% of decisions will be made autonomously." No subject. No accountability. Just inevitability.

Step 2: Implementation (Implied Mandate)
Organizations deploy because Gartner said it would happen. Boards approve because "they" say this is the future. Practitioners implement because that's their job.

Step 3: Failure (Active Voice)
40% of projects canceled. Costs escalate. Value unclear. And now: "Your failing initiative is YOUR fault."

The passive voice protected the prediction. The active voice blames the practitioner.

“They” made the forecast. You took the fall.


Who Are “They”?

I’ve been documenting procurement technology implementations for 18 years. In that time, I’ve watched the same pattern repeat:

  • Technology arrives with passive-voice inevitability
  • Organizations implement without assessing readiness
  • Projects fail at 60–80% rates
  • Practitioners absorb the blame
  • Consultants sell remediation
  • The next wave arrives

And through it all, no one asks the simple question:

Who decided this was a good idea?

Not “what technology should we deploy?” — that question gets endless attention.

Who assessed whether we were ready to deploy it?

That question has no owner. Because in Gartner’s grammar, readiness isn’t a decision someone makes. It’s a gap someone falls into.


The Readiness Gap in a Single Grammatical Choice

When you write “decisions will be made autonomously,” you’re not just predicting the future. You’re erasing the present — the present where someone has to decide whether to pursue autonomy, whether the organization can absorb it, whether practitioners are prepared to work alongside it.

That present doesn’t appear in Gartner’s predictions because it doesn’t serve Gartner’s business model.

Gartner sells technology guidance. Technology guidance requires technology inevitability. If organizations could choose not to deploy — if readiness assessment could gate deployment — the guidance becomes contingent, not essential.

So the grammar removes the choice. “Will be made.” “Will be intermediated.” “Will include.”

The future is certain. Your readiness is your problem.


What Happens to the Practitioners?

I wrote today about what happens when the passive voice meets organizational reality:

A practitioner in 2008 called me after his consulting engagement ended. His team was in a “what’s next” holding pattern. The consultants had delivered and left. No knowledge transfer. No capability building. No plan for what came after.

Seventeen years later, in 2025, I got a similar call. Different practitioner. Different technology. Same abandonment.

The consultants delivered what “they” predicted would happen. When it didn’t work as promised, the practitioners were left holding the bag.

They’re not in Gartner’s predictions. But they’re in every failed implementation.


The Question No One Asks

Has Gartner ever explained who “they” are?

Has any analyst firm ever named the subject of their passive-voice predictions?

Has anyone ever said: “Executives who deploy AI agents without assessing organizational readiness will experience a 40% cancellation rate”?

That sentence has a subject. It has a condition. It has accountability.

It’s also a sentence Gartner will never write — because it shifts responsibility from the practitioner who implemented to the executive who decided.


A Different Grammar

What would it look like if predictions were written in active voice?

Passive (Gartner): “15% of decisions will be made autonomously by 2028.”

Active (Accountable): “Executives who deploy AI agents after assessing organizational readiness will achieve autonomous decision-making in targeted workflows.”

Passive (Gartner): “40% of agentic AI projects will be canceled.”

Active (Accountable): “Organizations that skip readiness assessment will cancel 40% of their AI initiatives due to predictable governance and integration failures.”

Same data. Different grammar. Completely different accountability.


The Bottom Line

Gartner’s predictions aren’t wrong. They’re incomplete by design.

The passive voice isn’t lazy writing. It’s structural protection — for the analysts who make predictions, for the executives who fund implementations, for the consultants who deliver them.

The only people not protected? The practitioners who implement what “they” said would happen, and absorb the blame when it doesn’t.

Next time you read a Gartner prediction, ask yourself:

Who is the subject of this sentence?

If you can’t find one, you’ve found the problem.



Jon Hansen has been documenting procurement transformation patterns since 2007. His methodology — the Hansen Fit Score — puts practitioners back in the sentence by assessing organizational readiness before technology deployment. Because someone should ask "who" before "what." As a result, he knows who "they" are and shares actual details rather than vague generalities.

-30-

Posted in: Commentary