It Isn’t The Instrument — It’s The Musician

Posted on November 29, 2025



Why the Agentic AI Architecture I Built in 1998 Still Works in 2025

By Jon W. Hansen | Procurement Insights | November 2025


Old technology. Timeless principles. The instrument doesn’t make the music — the musician does.


A Saturday Morning Thought

Look at that ThinkPad.

The letters on some keys are worn away. The built-in trackpad stopped working years ago — hence the USB mouse. The screen is barely large enough to hold a conversation.

And yet, on that screen: a 6-model AI architecture — independent agents whose convergence signals confidence and whose divergence signals the need for deeper examination — producing methodology that Fortune 500 companies will pay for. A framework that’s been validated across 27 years and 180+ transformation cases. A conversation with AI that moves at the speed of thought.

The industry tells us we need the latest hardware, the biggest screens, the fastest chips, the most expensive infrastructure to “do AI.”

The ThinkPad disagrees.

It isn’t the instrument. It’s the musician.


1998: Before “Agentic AI” Had a Name

In 1998, funded by Canada’s Scientific Research and Experimental Development (SR&ED) program, I built something for the Department of National Defence that didn’t have a name yet.

It was:

  • Agent-based — autonomous components making decisions within defined parameters
  • Self-learning — algorithms that adapted based on patterns and outcomes
  • Convergent — multiple analytical threads validating against each other before action
  • Human-governed — oversight embedded at critical decision points

The technology was primitive by today’s standards. But the architecture wasn’t.

The result: 97.3% delivery accuracy and 23% cost reduction in a complex MRO procurement ecosystem.

We didn’t call it “agentic AI” because the term didn’t exist. We called it RAM — the Relational Acquisition Model. But the structure was there: autonomous agents, self-learning adaptation, convergence validation, human-in-the-loop governance.

What the industry began branding as “agentic AI” in the 2020s, we were already deploying in 1998.


The Architecture That Doesn’t Age

Here’s what I’ve learned across 27 years of building and refining this approach:

The technology changes. The physics doesn’t.

In 1998, I used the tools available — relational databases, rule-based systems, early machine learning techniques.

In 2025, I use six, soon to be twelve, different large language models running in parallel, converging on validated outputs.

Different instruments. Same music.

The principles that made RAM 1998 work for the DND are the same principles that make RAM 2025 work today:

  • Multiple independent agents — whether human analysts or AI models, diversity of perspective matters
  • Convergence before confidence — agreement across independent threads signals truth; divergence signals the need for deeper examination
  • Readiness before deployment — the organization’s ability to absorb change determines success, not the sophistication of the tool
  • Human governance at critical junctures — autonomy without oversight produces speed without wisdom

These aren’t technology features. They’re architectural principles. They worked before AI had a public name. They’ll work after whatever comes next.
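The convergence-before-confidence principle can be sketched in a few lines. This is a hypothetical illustration, not the RAM implementation: independent agents answer the same question, supermajority agreement yields a validated output, and divergence flags the case for deeper human examination. The function name and threshold are assumptions for the sketch.

```python
from collections import Counter

def convergence_check(agent_outputs, threshold=0.8):
    """Validate agreement across independent agents.

    agent_outputs: answers from independent agents or models.
    Returns (top_answer, confidence, needs_review): if the share of
    agents agreeing on the top answer falls below the threshold,
    the case is flagged for human examination instead of acted on.
    """
    counts = Counter(agent_outputs)
    top_answer, top_votes = counts.most_common(1)[0]
    confidence = top_votes / len(agent_outputs)
    needs_review = confidence < threshold
    return top_answer, confidence, needs_review

# Six independent agents converge: high confidence, no review needed.
answer, conf, review = convergence_check(
    ["approve", "approve", "approve", "approve", "approve", "hold"]
)

# Divergence across the same six agents triggers deeper examination.
_, _, review2 = convergence_check(
    ["approve", "approve", "hold", "hold", "reject", "reject"]
)
```

The key design choice is that divergence is not averaged away; it is surfaced as a signal, which is exactly the human-governance point at a critical juncture.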


The ThinkPad Lesson

That worn-out ThinkPad on my desk is proof of concept.

Not proof that old hardware can run new software — though it can.

Proof that capability lives in the methodology, not the machine.

You could hand this ThinkPad to a thousand people and they wouldn’t produce what I produce on it. You could hand them a $5,000 workstation and they still wouldn’t produce it.

The difference isn’t processing power. It’s pattern recognition. It’s 27 years of watching transformations succeed and fail. It’s understanding that technology doesn’t determine outcomes — readiness does.

The instrument doesn’t make the musician. The musician makes the music.


What This Means for Agentic AI in 2025

The industry is racing toward autonomous systems. Agentic AI. Multi-agent architectures. Self-learning algorithms. Orchestration platforms.

And they’re making the same mistake they made with ERP. With e-sourcing. With SaaS. With analytics. With early AI.

They’re asking: “What tool should we buy?”

They should be asking: “Is our organization ready to operate autonomous systems?”

Because agentic AI amplifies everything — including unreadiness.

When failure took 18 months to surface, organizations could survive being unprepared.

When it takes 18 seconds, they can't.


The Continuity

In 1998, I built one of the earliest operational agent-based, self-learning AI systems in procurement and supply.

In 2025, I’m using the same architectural principles — updated for modern technology — to help organizations prepare for autonomous systems before they deploy them.

The methodology evolved. The physics remained constant.

And that beat-up ThinkPad? It’s still running. Still producing. Still proving that the instrument was never the point.

The point is what you know. How you think. What you’ve learned across decades of watching the same patterns repeat.

AI didn’t teach me how to think. I recognized in AI the patterns I’d been using since before it had a name.

The echo finally returned. And it sounds exactly like what I sent out in 1998.


This article is part of an ongoing series on organizational readiness and transformation success. For more on how to assess your organization’s readiness for autonomous systems, visit procureinsights.com.

-30-


Jon W. Hansen is the CEO of Hansen Models and creator of the Hansen Fit Score methodology. His work in agent-based procurement systems began in 1998 with SR&ED-funded research for Canada’s Department of National Defence and has maintained accuracy rates between 85% and 97.3% across 27 years of successive refinement.

#AgenticAI #OrganizationalReadiness #HansenFitScore #RAM2025 #TransformationPhysics #ReadinessFirst #PhaseZero

*** A FINAL THOUGHT: “And if you think this ThinkPad looks archaic, you should have seen what we were running in 1998.” ***

Posted in: Commentary