Why Amazon Lens Live in 2025 Reminds Me of a Ball Camera in 1999
Ata Elyas shared a post this week about Amazon’s Lens Live — an AI-powered camera feature that lets you shop what you see in real time. Point your phone at a product, and Amazon identifies it, matches it to their catalog, and enables purchase.
It’s impressive. It’s also familiar.
1999: The Parent-Child SKU Nightmare
At the Department of National Defence, we faced a problem that was costing time, money, and mission readiness: Wrong Part Shipped (WPS) incidents.
The root cause wasn’t bad suppliers or poor systems. It was a semantic alignment failure.
The same physical part could carry three or four different part numbers, depending on who was looking at it:
- The OEM part number
- The service part number
- The warehouse part number
- The supplier’s part number
A technician in the field, a buyer at headquarters, and a supplier in the US were all talking about the same part — but using different languages. The result: confusion, delays, wrong shipments, and frustrated stakeholders.
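A minimal sketch in Python (with invented part numbers, since the actual DND identifiers aren't in the post) of what that alias problem looks like as data, and what resolving it to one canonical record would mean:

```python
# Minimal sketch of the parent-child SKU problem: one physical part,
# four identifiers. All part numbers below are invented for illustration.

# Each system's local name for the SAME physical part.
ALIASES = {
    "OEM-4471-A":   "PART-001",   # OEM part number
    "SVC-99-4471":  "PART-001",   # service part number
    "WH-113-208":   "PART-001",   # warehouse stock number
    "SUP-PS-4471X": "PART-001",   # supplier's catalog number
}

def resolve(part_number: str) -> str:
    """Map any agent's local part number to the canonical ID, or fail loudly."""
    try:
        return ALIASES[part_number]
    except KeyError:
        # This is where a Wrong Part Shipped incident starts: an
        # identifier nobody has reconciled to a canonical record.
        raise ValueError(f"Unreconciled part number: {part_number!r}")

# A technician, a buyer, and a supplier quote different numbers,
# yet all three refer to the same physical item:
assert resolve("SVC-99-4471") == resolve("SUP-PS-4471X") == "PART-001"
```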
The Theory Behind the Solution
The ball camera trial I ran in 1999 wasn't improvisation. It was the proof of concept for a framework I had already formalized.
In 1998, the Government of Canada’s Scientific Research and Experimental Development (SR&ED) program funded my research into a theory I called strand commonality — the idea that seemingly disparate strands of data share related attributes that collectively shape outcomes.
The resulting framework was the Relational Acquisition Model (RAM) — built for the Department of National Defence to address exactly the kind of multi-agent coordination problem we faced with service parts.
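The post doesn't describe how strand commonality was operationalized inside RAM, so the following is only an illustrative sketch, assuming two hypothetical data strands that share no identifiers but can be linked through common physical attributes:

```python
# Illustrative sketch of "strand commonality": two data strands with no
# shared identifier, linked through overlapping attributes. All records
# and field names are hypothetical.

maintenance_log = [  # strand 1: the field technicians' view
    {"id": "SVC-99-4471", "thread": "M12", "material": "steel", "length_mm": 80},
]
purchasing_catalog = [  # strand 2: the buyers' view
    {"id": "SUP-PS-4471X", "thread": "M12", "material": "steel", "length_mm": 80},
    {"id": "SUP-PS-9020",  "thread": "M10", "material": "brass", "length_mm": 60},
]

COMMON_ATTRS = ("thread", "material", "length_mm")

def candidate_matches(record, other_strand, attrs=COMMON_ATTRS):
    """Return records from another strand that agree on every common attribute."""
    return [
        other for other in other_strand
        if all(record[a] == other[a] for a in attrs)
    ]

for item in maintenance_log:
    matches = candidate_matches(item, purchasing_catalog)
    print(item["id"], "->", [m["id"] for m in matches])
    # SVC-99-4471 -> ['SUP-PS-4471X']: the strands align on shared
    # attributes even though their identifiers never do.
```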
The critical insight was that agents aren’t just human. The system I was coordinating included:
Human agents:
- Technicians diagnosing problems in the field
- Buyers matching parts to catalogs
- Warehouse staff managing inventory
- Suppliers fulfilling orders
Non-human agents:
- Databases with conflicting part numbers
- ERP systems with rigid taxonomies
- Courier networks with delivery constraints
- Customs processes with clearance requirements
Each agent — human and non-human — had its own logic, constraints, and language. Alignment wasn’t about getting people on the same page. It was about getting the entire network of agents synchronized around a shared understanding.
What was missing in 1998 wasn’t the theory. It was non-human agents capable of participating at human speed.
Now they exist.
The Solution Wasn’t Technology. It Was Alignment.
At the time, camera technology was in its infancy. But there was something called a “ball camera” — a basic webcam-style device you could plug into a computer to capture images.
I ran a trial program.
Technicians at strategic field locations were given cameras. When they needed a part, they would photograph it — including the visible part number — and send it to the buyer. The buyer would then work through four steps (sketched in code after this list):
- Match the actual part to the catalog listing
- Confirm it was the correct item
- Reconcile the different numbers in the database
- Ensure the supplier received the right specification
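A rough sketch of that loop, reusing the invented identifiers from the earlier snippet (the actual DND catalog and database schemas aren't described in the post):

```python
# Rough sketch of the buyer's reconciliation loop. The photographed part
# number is the ground truth; everything else is bookkeeping.
# All identifiers and catalog entries are invented for illustration.

catalog = {"PART-001": {"description": "hydraulic pump seal (example)"}}
aliases = {"OEM-4471-A": "PART-001"}  # known mappings, grown over time

def reconcile(photographed_number: str, requested_number: str) -> dict:
    """The four steps the buyer performed once a technician's photo arrived."""
    # 1. Match the actual (photographed) part to the catalog listing.
    canonical = aliases.get(photographed_number)
    if canonical is None:
        raise LookupError(f"No catalog match for {photographed_number!r}")

    # 2. Confirm the requested number refers to the same item.
    if aliases.get(requested_number) not in (None, canonical):
        raise ValueError("Requested and photographed numbers disagree")

    # 3. Reconcile: record the requisition's number as another alias.
    aliases[requested_number] = canonical

    # 4. Return the canonical record so the supplier ships to one
    #    shared specification.
    return catalog[canonical]

print(reconcile("OEM-4471-A", "SVC-99-4471"))
```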
The goal was simple: ensure that all agents — human and non-human — were synchronized around the same physical reality.
The camera didn’t solve the problem.
Shared understanding solved the problem. The camera was just the tool that enabled it.
The Methodology That Made It Work
Here’s what I didn’t do:
I didn’t acquire a camera and then ask, “What can we do with this?”
Here’s what I did:
- Identified the problem — WPS incidents and SKU confusion
- Mapped the agents — Technician, buyer, supplier, warehouse, databases, couriers, customs
- Understood why they were misaligned — Different numbering systems, no shared visual reference, conflicting system logic
- Selected technology that served alignment — Camera for visual confirmation
The camera succeeded because it was chosen to serve a purpose that was already understood — within a theoretical framework that had already been funded and validated.
The Inversion Most Organizations Make
Today, the pattern is reversed:
- “We have AI — what can we do with it?”
- “We have catalogs — how do we get people to use them?”
- “We have a semantic layer — what problems does it solve?”
Technology first. Problem second. Agents last.
That’s the sequence that produces the oft-cited 80% failure rate for enterprise technology initiatives.
Amazon Lens Live in 2025
Amazon’s new feature is elegant: point, identify, purchase.
But Amazon controls the entire ecosystem — the catalog, the suppliers, the fulfillment, the data. There’s no parent-child SKU nightmare because there’s only one taxonomy. There’s no agent misalignment because Amazon is the agent.
Enterprise procurement doesn’t have that luxury.
In the enterprise, the same product has different names in different systems, different owners in different departments, and different meanings to different stakeholders. Human agents and non-human agents operate with different logic, different incentives, and different constraints.
Visual identification is necessary. But it’s not sufficient.
What made the 1999 ball camera trial work wasn’t the camera.
It was the agent-based thinking — formalized in 1998, funded by SR&ED, and deployed at DND — that preceded it.
The Bottom Line
Technology doesn’t create alignment. It amplifies whatever state the organization is already in.
If agents are aligned — if technicians, buyers, suppliers, databases, and systems share understanding — technology accelerates outcomes.
If they’re not, technology accelerates confusion.
The question isn’t “What AI should we deploy?”
The question is: “Are the agents — human and non-human — ready to be on the same page?”
I asked that question in 1998 when I developed strand commonality and RAM.
I proved it in 1999 with a ball camera.
It’s the same question Phase 0 asks today.
— Procurement Insights
-30-