Advisory Team Dialogue: When Don Osborn Asked About AI Agents—And Exposed an Industry Irony

Posted on November 14, 2025



Don Osborn (HFS Advisory Team) sent me the 2025 AI Agent trends and asked: “Have you tried any of these tools?”

My answer:

Yes. I use six agent types daily and plan to expand to twelve in the new year, including Agentic RAG, DeepResearch, Coding Agents, and CUA (computer-using agents).

I use self-learning algorithms that correlate information through pattern-recognition frameworks. The models "argue" with each other and admit when they're wrong or when another model has superior insight.
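A minimal sketch of what that cross-checking loop can look like in code, assuming the Agent class, the debate function, and the stub answers are all illustrative placeholders rather than any particular vendor's API:

```python
from dataclasses import dataclass
from typing import Callable, List

# Each "model" is just a callable here; in practice these would wrap
# real LLM endpoints. The canned answers below only exist so the sketch runs.
@dataclass
class Agent:
    name: str
    ask: Callable[[str], str]
    critique: Callable[[str, str], str]

def debate(agents: List[Agent], question: str, rounds: int = 1) -> dict:
    """Have each agent answer, then critique the other agents' answers.

    A real system would parse the critiques and let agents revise or
    concede; here we just print them to show the control flow.
    """
    answers = {a.name: a.ask(question) for a in agents}
    for _ in range(rounds):
        for a in agents:
            for other, answer in answers.items():
                if other != a.name:
                    print(f"{a.name} on {other}: {a.critique(question, answer)}")
    return answers

# Two stub agents standing in for two different models.
alpha = Agent("alpha",
              lambda q: "Orders cluster in the morning.",
              lambda q, ans: "Plausible, but I have no timestamp data to confirm.")
beta = Agent("beta",
             lambda q: "No pattern can be claimed without order timestamps.",
             lambda q, ans: "Agreed only if timestamps are actually tracked.")

if __name__ == "__main__":
    print(debate([alpha, beta], "What time of day do orders come in?"))
```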

Then Don asked the critical question:

“IBM testing for ACP—would they use symmetrical data or real-world industrial data?”

Here’s why this question reveals a profound irony:

IBM’s RS/6000 (1990s)—during Don’s tenure—was built on agent-based architecture that integrated disparate data streams. This was strand commonality in practice: recognizing that complex systems require managing messy, interconnected data, not clean, uniform inputs.

My 1998 RAM system for DND used the same principle: agent-based models handling real-world data complexity.

Here’s what that looked like in practice:

Before building DND’s AI procurement system, I asked: “What time of day do orders typically come in?”

Their answer: “We don’t track that.”

Not “we can’t share that data.” Not “it’s classified.”

They didn’t track it.

Symmetrical testing data assumes clean, formatted, tracked information exists.

Real-world deployment reveals gaps like “We don’t track that.”

That’s why so many real-world deployments fail: organizations assume they are ready when they haven’t yet identified these hidden, very real gaps.

In short: Technology validation ≠ organizational readiness.

Here’s the irony Don’s question exposes:

IBM built the RS/6000 with the understanding that agent-based systems must handle disparate data streams (strand commonality). Don lived that principle for decades at IBM.

Now, in 2025, IBM is developing ACP (Agent Communication Protocol). Will they test it against real-world industrial data complexity, the kind their own 1990s systems were designed to handle, or against clean, symmetrical test data that ignores those hard-won lessons?

For example, ACP might work perfectly in testing with symmetrical data. But if procurement teams can’t answer “what time of day do orders come in?”—they’re not ready to deploy multi-agent communication protocols.

This is Hansen Fit Score’s Phase 0 assessment: measuring the readiness gaps that determine whether sophisticated technology becomes a productivity tool or another failed implementation.
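As a rough illustration of the kind of check a Phase 0 assessment implies, here is a hypothetical sketch: the sample export, the required-field list, and the readiness_gaps helper are all made up for the example, but they show how a "we don't track that" gap surfaces before any agent protocol is deployed.

```python
import csv
from io import StringIO

# Hypothetical export from a procurement system. Note what is missing:
# nobody tracks the time of day an order arrives.
SAMPLE_EXPORT = """order_id,supplier,amount
1001,Acme,1200.00
1002,Globex,87.50
"""

# Fields a multi-agent procurement workflow would typically assume exist.
REQUIRED_FIELDS = {"order_id", "supplier", "amount", "order_timestamp"}

def readiness_gaps(csv_text: str, required: set) -> set:
    """Return the required fields the export does not track at all."""
    reader = csv.DictReader(StringIO(csv_text))
    tracked = set(reader.fieldnames or [])
    return required - tracked

if __name__ == "__main__":
    gaps = readiness_gaps(SAMPLE_EXPORT, REQUIRED_FIELDS)
    # Prints the untracked 'order_timestamp' field: the "we don't track that"
    # gap that stays invisible to tests run on clean, symmetrical data.
    print("Untracked fields:", gaps)
```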

What AI agent approaches are you exploring? And what basic operational questions reveal your organization’s actual readiness?

More importantly: Will 2025 AI agent testing apply the lessons IBM learned with RS/6000 in the 1990s—or will the industry repeat the testing-vs-deployment gap that’s driven 80% failure rates for decades?


📺 BONUS COVERAGE – THE COMPLETE DND STORY

Skeptical that a simple question reveals organizational readiness?

Watch the full 13-minute conversation from 1998—when RAM applied strand commonality principles to real-world procurement. This is what Phase 0 assessment looked like before anyone called it that.


#AIAgents #OrganizationalReadiness #HansenFitScore #StrandCommonality #AdvisoryTeam

Posted in: Commentary