For the First Time, the Practitioner Leads the Technology (Not the Other Way Around)

Posted on January 23, 2026

By Jon W. Hansen | Procurement Insights


For 30 years, the industry has followed the same pattern:

  1. Technology capability emerges
  2. Analysts describe the maturity curve
  3. Practitioners are told to adopt or get left behind
  4. Organizations scramble to implement
  5. 70-80% fail
  6. Consultants sell remediation

The practitioner is always reactive — responding to technology roadmaps defined by vendors, validated by analysts, and imposed by executives who read the Gartner report.

That sequence just got inverted.


The Everest Problem

Everest Group just published its Procurement Outsourcing State of the Market 2025 report, complete with the standard AI maturity graphic:

Traditional AI → Generative AI → Agentic AI

The arrow points right. The message is clear: more AI autonomy, less human input.

The Agentic AI column promises:

  • “Tasks executed with minimal human input”
  • “Collaboration among multiple agents”
  • “Handling of exceptions beyond pre-defined rules”

It’s an elegant progression. It’s also missing something fundamental.

Who governs the agent ecosystem?


The Question No One Is Asking

“Minimal human input” on transactions isn’t the same as “no human governance.”

When Everest talks about “handling exceptions beyond pre-defined rules,” the implicit assumption is that the AI figures it out. But someone has to decide:

  • Which agents are active in the first place
  • What capabilities those agents have access to
  • What thresholds trigger autonomous action versus human review
  • When new conditions warrant new agents — or temporary activation of circumstantial ones

That’s not minimal human input. That’s repositioned human input.

The human moves up the stack, not out of the picture.


What Actually Changes

The difference isn’t whether humans are involved. The difference is where humans are involved.

In the old model, the practitioner waits for Gartner to announce the next phase of AI maturity, then scrambles to implement.

In the new model, the practitioner defines the governance structures that determine what “next” even means for their organization.


The Missing Layer

Every framework I’ve reviewed — Gartner’s Agentic AI evolution, Everest’s maturity progression, the enterprise AI adoption frameworks circulating on LinkedIn — describes the engine.

None of them describe the steering wheel.

The steering wheel is governance at the agent pool level:

  • Core permanent agents — always active, foundational to every transaction
  • Supplementary permanent agents — always active, enabled by data streams that didn’t exist a decade ago
  • Circumstantial agents — activated when real-world conditions warrant, deactivated when they normalize

A Ukraine conflict agent. A California wildfire agent. A port strike agent. These aren’t permanent fixtures — they’re responses to specific, time-bounded conditions that affect your supply chain right now.

The human doesn’t approve every transaction touched by geopolitical risk. The human decides whether the geopolitical risk agent should be active.
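A rough sketch of what governance at the agent pool level could look like, assuming a hypothetical registry (all class and agent names here are illustrative, not part of any published framework):

```python
from enum import Enum

class AgentClass(Enum):
    CORE = "core"                      # always active, foundational
    SUPPLEMENTARY = "supplementary"    # always active, newer data streams
    CIRCUMSTANTIAL = "circumstantial"  # toggled as conditions warrant

class AgentPool:
    """Hypothetical registry: humans govern which agents are active."""

    def __init__(self):
        self._agents = {}  # name -> (agent class, active flag)

    def register(self, name: str, cls: AgentClass):
        # Permanent agents start active; circumstantial ones start dormant.
        self._agents[name] = (cls, cls != AgentClass.CIRCUMSTANTIAL)

    def activate(self, name: str):
        # Human decision: e.g. a port strike begins.
        cls, _ = self._agents[name]
        self._agents[name] = (cls, True)

    def deactivate(self, name: str):
        # Conditions normalize; only circumstantial agents stand down.
        cls, _ = self._agents[name]
        if cls == AgentClass.CIRCUMSTANTIAL:
            self._agents[name] = (cls, False)

    def active(self) -> list[str]:
        return sorted(n for n, (_, on) in self._agents.items() if on)
```

The governance decision lives in `activate` and `deactivate`: a person (or a governance board) flips a "port-strike" agent on when the strike begins and off when it ends, without ever touching the individual transactions that agent handles.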

That’s a fundamentally different kind of governance. And it’s what Phase 0 needs to assess before any Agentic AI deployment.


Why This Matters

If you follow the standard maturity curve — Traditional AI to Generative AI to Agentic AI — you’re letting the technology define the timeline.

If you govern the agent ecosystem, you’re letting organizational readiness define the timeline.

The first approach has a 70-80% failure rate.

The second approach puts the practitioner in control.

For the first time, a practitioner-led methodology exists to operationalize that second approach.


What’s Next

The full framework — agent pool governance architecture, circumstantial agent activation protocols, and the Phase 0 assessment criteria for Agentic AI readiness — is available to subscribers.

This isn’t incremental. It inverts a 30-year power dynamic — from technology dictating practice to practitioners governing technology.


Related: When an MIT Scientist Agrees: Adapt the Formula to Reality

-30-

Posted in: Commentary