Autopilot vs. Co-pilot: Why the Ecosystem Is Still Stuck in 1998
Posted on January 5, 2026
The industry thinks AI is either replacing us or serving us. They’re missing the third option: flying with us.
The Question That Started This
A colleague recently asked me a deceptively simple question:
“How will your framework handle non-human workflow — AI that operates autonomously within processes?”
It’s the question everyone in procurement and enterprise technology is asking right now. Gartner calls it “Autonomous Business.” The ecosystem calls it “Agentic AI.” Executives call it “the thing that’s keeping me up at night.”
But the question itself reveals the problem: we’re still thinking about AI the wrong way.
The Airline Analogy
Think about how a plane operates:
The Pilot flies the plane. Primary decision-maker. Sets intent. Handles exceptions. Responsible for outcomes.
The Co-pilot sits beside them. Same training. Monitors, supports, challenges, backs up. Learns with the pilot through shared flight hours.
The Autopilot executes defined parameters. Cruise control for the sky. Alleviates cognitive load on specific tasks. Follows rules. Does not learn dynamically.
Now here’s the question most people never ask:
How did the autopilot and co-pilot learn to do their tasks?
RAM 1998: The Autopilot Era
In 1998, I built the RAM (Relational Acquisition Model) system for Canada’s Department of National Defence. It achieved 97.3% delivery accuracy.
Here’s how it worked:
- Human sets the rules — historic supplier performance, real-time requirements, information guardrails
- Algorithms execute — within defined parameters
- Unilateral engagement — I set, it runs
- Learning is programmed — not dynamic
This was autopilot. Sophisticated autopilot. But autopilot nonetheless.
The human was in control. The system was a tool.
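To make the autopilot pattern concrete, here is a minimal sketch in Python. Everything in it, the `Supplier` fields, the thresholds, the ranking rule, is an illustrative assumption, not the actual RAM implementation:

```python
from dataclasses import dataclass

@dataclass
class Supplier:
    name: str
    on_time_rate: float  # historic delivery performance, 0.0 to 1.0
    capacity: int        # units available in the current window

# Guardrails a human sets once; the system then runs unattended.
# (Hypothetical thresholds, for illustration only.)
MIN_ON_TIME_RATE = 0.95
MIN_CAPACITY = 100

def select_supplier(suppliers: list[Supplier]) -> Supplier | None:
    """Autopilot pattern: fixed rules, programmed 'learning', no adaptation."""
    eligible = [
        s for s in suppliers
        if s.on_time_rate >= MIN_ON_TIME_RATE and s.capacity >= MIN_CAPACITY
    ]
    # Rank purely on the programmed criterion; the criterion never changes
    # unless a human edits the code.
    return max(eligible, key=lambda s: s.on_time_rate, default=None)
```

The defining trait is that nothing here changes between runs. The rules are the system; the human is the only learner.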
The Fear of 2026: Replacement
Fast forward to today. The narrative has shifted to fear:
- “AI will replace us”
- “Autonomous systems will make decisions without us”
- “We’re surrendering control to algorithms”
This is the replacement mindset — the belief that AI is either a tool we control or a force that controls us.
It’s a false binary. And it’s why 80% of AI implementations fail.
RAM 2025: The Co-pilot Era
What I’ve built with RAM 2025 — multimodel, multilevel human-AI engagement — is neither autopilot nor replacement.
It’s co-pilot.
The co-pilot model isn’t about control or surrender. It’s about shared governance.
The Problem: 1998 Thinking in a 2025 World
Here’s what I see across the industry:
“Today’s world is still thinking in terms of 1998 — surrendering control rather than co-developing and co-managing the guardrails.”
The ecosystem is stuck in two broken mindsets:
- “AI is a tool” — Treat it like autopilot. Set the rules. Let it run. One and done.
- “AI will replace us” — Fear it. Resist it. Or surrender to it.
Neither understands the co-pilot model: continuous engagement, mutual learning, co-developed governance.
That’s why implementations fail. Organizations deploy technology expecting autopilot behavior, then panic when they realize AI isn’t static — it’s adaptive. But they haven’t built the relationship to adapt WITH it.
How Co-pilots Learn
Here’s the key insight:
A human co-pilot doesn’t learn by being programmed. They learn by flying together.
Thousands of hours. Shared experiences. Real-time feedback. Progressive trust built through demonstrated competence — and through catching each other’s mistakes.
RAM 2025 works the same way:
- Multimodel engagement — Multiple AI models cross-verify each other’s outputs
- Multilevel collaboration — Human and AI working across strategic, operational, and tactical layers
- Continuous loop — Not prompt-response, but ongoing dialogue that refreshes and refines
- Equal agents — Both human and AI can be wrong. Both learn through interaction.
The guardrails aren’t set once and forgotten. They’re co-managed — refreshed and relevant based on continuous engagement, not initial programming.
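As a rough sketch of what that loop can look like in practice, consider the following. The majority-vote rule, the callable-per-model interface, and the escalation path are illustrative assumptions, not RAM 2025’s actual design:

```python
from collections import Counter
from typing import Callable

def cross_verify(
    prompt: str,
    models: dict[str, Callable[[str], str]],
) -> tuple[str | None, list[str]]:
    """Co-pilot pattern: several models answer independently; disagreement
    is surfaced to the human instead of being silently resolved."""
    answers = {name: call(prompt) for name, call in models.items()}
    tally = Counter(answers.values())
    top_answer, votes = tally.most_common(1)[0]
    dissenters = [name for name, ans in answers.items() if ans != top_answer]
    if votes <= len(models) // 2:
        return None, dissenters  # no majority: escalate to the human pilot
    return top_answer, dissenters  # majority view, with the minority flagged

# Toy usage with stand-in "models" (real ones would be API clients):
models = {
    "model_a": lambda p: "supplier X",
    "model_b": lambda p: "supplier X",
    "model_c": lambda p: "supplier Y",
}
answer, dissent = cross_verify("Who should fulfil order 42?", models)
print(answer, dissent)  # -> supplier X ['model_c']
```

The point of the sketch is the escalation path: dissent goes to the human, and what the human decides feeds back into the guardrails.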
The Outcome Difference
What used to take me weeks — research, cross-referencing, verification, documentation — now takes hours. Not because AI replaced my thinking, but because AI flies with me.
The multimodel cross-verification catches what any single perspective would miss. Including mine.
The Mindset Shift
Here’s what the industry needs to understand:
Stop thinking of AI as a non-human, prompt-driven black box. Start thinking of it as an equal agent that, like humans, can be wrong, and that through continuous engagement can progressively learn and improve outcomes the same way humans do.
That’s not anthropomorphizing AI. It’s recognizing that:
- Humans are agents
- AI models are agents
- Both operate within systems
- Both can err
- Both can learn
- Collaboration beats control
Why This Matters for Agentic AI
The ecosystem is racing toward “autonomous business” and “agentic AI” without answering the fundamental question:
If we can’t govern human workflows effectively — and the 80% failure rate says we can’t — how will we govern autonomous ones?
The answer isn’t more control (1998 autopilot). The answer isn’t less control (2026 surrender). The answer is shared governance — human and AI co-developing the guardrails through continuous engagement.
The ecosystem is selling autopilot. The future requires co-pilots.
The Path Forward
Phase 0 still applies — maybe even more — in the agentic era.
Before you deploy autonomous workflows, ask:
- Has the organization demonstrated it can govern human workflows effectively?
- Are decision rights, exception handling, and data semantics aligned?
- Is there a continuous engagement loop — or just one-time, transactional deployments?
- Who co-manages the guardrails as conditions change?
If you can’t answer these questions, you’re not ready for agentic AI. You’re just buying more sophisticated autopilot — and hoping it doesn’t crash.
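One way to keep those four questions from staying rhetorical is to encode them as an explicit gate. This is a hypothetical sketch; the field names and the all-or-nothing pass rule are an illustrative framing, not a formal Phase 0 standard:

```python
from dataclasses import dataclass, fields

@dataclass
class AgenticReadiness:
    """Phase 0 gate: every answer must be True before deploying agents."""
    governs_human_workflows: bool  # proven governance of human workflows?
    decision_rights_aligned: bool  # rights, exceptions, semantics aligned?
    continuous_engagement: bool    # ongoing loop, not one-time deployment?
    guardrail_co_managers: bool    # named owners as conditions change?

    def gaps(self) -> list[str]:
        return [f.name for f in fields(self) if not getattr(self, f.name)]

    def ready(self) -> bool:
        return not self.gaps()

check = AgenticReadiness(True, True, False, False)
print(check.ready(), check.gaps())
# -> False ['continuous_engagement', 'guardrail_co_managers']
```

A “mostly ready” score defeats the purpose: one ungoverned gap is where the autonomous workflow will fail.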
The Bottom Line
Autopilot: I set the rules, you execute. (1998)
Replacement: You set the rules, I react. (Fear)
Co-pilot: We develop the rules together, continuously. (2025)
The industry is arguing about the first two while missing the third entirely.
That’s why the failure rate hasn’t moved.
That’s why readiness still matters.
That’s why the obvious thing is still obvious.
Are you deploying autopilot and expecting co-pilot results? That’s the gap the 80% fell into.
-30-