This week, Snowflake made a strategic investment that says more about the state of enterprise AI than any product announcement could.
By backing AtScale and elevating the semantic layer from a modeling concern to foundational infrastructure, Snowflake is making a clear statement:
AI cannot be trusted without shared meaning.
That framing matters. Because it signals a shift the market has resisted for decades — not a technology breakthrough, but a model breakthrough.
What Snowflake Is Actually Saying
Strip away the funding headlines and architecture debates, and the message is remarkably simple:
Consistent metrics matter
Shared definitions matter
Business logic and context matter
Governance is no longer optional
Without these, AI doesn’t become intelligent — it becomes confidently inconsistent.
Snowflake calls this the “layer before the layer.”
In practical terms: data alone is not enough. Meaning has to be stabilized before automation can be trusted.
That insight is not new.
What’s new is that AI has made the cost of ignoring it impossible to hide.
This Isn’t Theory. I Built It.
In 1998, with funding from Canada’s SR&ED program, I developed what we called the Relational Acquisition Model (RAM) — an agent-based procurement system designed around a principle that sounds familiar now: shared meaning across stakeholders is the prerequisite for system success.
The underlying concept was what I called Metaprise — an independently connected network of stakeholders, each with their own objectives and constraints, synchronized through shared understanding rather than forced into a single sequential chain.
The system was deployed for Canada’s Department of National Defence.
The result: 97.3% delivery accuracy.
Not 97.3% data consistency. Delivery accuracy — the measure that matters when supplies have to reach the right place at the right time.
That work led to a $12 million company acquisition in 2001.
The methodology wasn’t about aligning data definitions across platforms. It was about aligning organizational behavior — synchronizing the agents (people, departments, suppliers) whose decisions determined whether the system worked or failed.
Snowflake is investing in data semantics.
What I proved in 1998 — what Metaprise demonstrated — was organizational semantics. And it could be measured, built, and scaled.
Why I Kept Writing About It
If the methodology worked, why did I spend the next two decades documenting principles that had already been validated?
Because the market wasn’t ready to hear it.
In 2007 and 2008, I wrote extensively about the same underlying concepts — translated into the language of the emerging digital supply chain:
From “Similarity Heuristics, Iterative Methodologies and the Emergence of the Modern Supply Chain” (April 2008):
“While the attribute tags themselves can remain constant, their assigned weighted value — including those of the corresponding sub-attributes — can and do change dynamically at the transactional level.”
From “Supply Chain Confidence” (February 2008):
“Before you can quantify or measure the level of confidence within an organization, you have to first understand and quantify the individual and collective objectives of the diverse stakeholders both within and external to the enterprise itself.”
I called it strand commonality. Agent-based versus equation-based modeling. The difference between synchronized and sequential architectures.
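The 2008 passage above describes attribute tags that stay constant while their weights shift per transaction. A minimal, hypothetical sketch of that idea (the tag names and weighting scheme here are illustrative, not RAM's actual design):

```python
# Hypothetical sketch: constant attribute tags, transaction-level weights.
# The tags never change; each transaction re-weights them before scoring.

TAGS = ("price", "lead_time", "reliability")  # constant attribute tags

def score(supplier: dict, weights: dict) -> float:
    """Weighted score for one supplier under one transaction's weights."""
    return sum(weights[t] * supplier[t] for t in TAGS)

supplier = {"price": 0.9, "lead_time": 0.4, "reliability": 0.8}

# Routine restock: price dominates.
routine = {"price": 0.6, "lead_time": 0.1, "reliability": 0.3}
# Urgent field resupply: same tags, but lead time now dominates.
urgent = {"price": 0.1, "lead_time": 0.7, "reliability": 0.2}

print(round(score(supplier, routine), 2))  # 0.82
print(round(score(supplier, urgent), 2))   # 0.53
```

The same supplier, described by the same tags, ranks very differently depending on the transaction: that is the agent-based, synchronized view as opposed to a single fixed equation.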
Snowflake calls it “the layer before the layer.”
Same insight. Different decade.
The difference: I had already proven it worked. The industry just wasn’t asking the question yet.
But Here’s What a Semantic Layer Can’t Tell You

A semantic layer can tell you that “Order” means the same thing across five systems. It can govern the definition of “On-Time Delivery” so Finance and Operations stop arguing.
But it cannot tell you to ask: What time of day do orders come in?
That question doesn’t live in a data dictionary. It lives in the experience of someone who knows that a Friday 3pm order has completely different implications than a Monday 9am order — for fulfillment, for supplier behavior, for risk.
That’s not data semantics.
That’s organizational semantics — the procedural knowledge, the behavioral context, the human understanding that no metric definition can encode.
That’s what Metaprise was designed to capture. That’s what the 1998 system proved could work. That’s what Phase 0 assesses today. And that’s what’s still missing from the conversation Snowflake is starting.
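The contrast between the two kinds of semantics can be sketched in a few lines. The first function is the sort of definition a semantic layer can govern; the second encodes the kind of rule that only exists because an experienced planner asked the time-of-day question. Both the function names and the cutoff rule are illustrative assumptions, not any vendor's actual syntax:

```python
from datetime import datetime

# Data semantics: a semantic layer can govern a definition like this so
# Finance and Operations compute "on-time" identically (illustrative).
def on_time(promised: datetime, delivered: datetime) -> bool:
    return delivered <= promised

# Organizational semantics: the rule below is easy to encode once asked,
# but the *question* behind it lives in someone's operational experience.
def effective_start(ordered: datetime) -> str:
    """Hypothetical cutoff rule capturing experiential knowledge."""
    if ordered.weekday() == 4 and ordered.hour >= 15:  # Friday, 3pm or later
        return "treat as Monday order: fulfillment risk window"
    return "same-day processing"

print(effective_start(datetime(2025, 12, 19, 15, 30)))  # a Friday afternoon
```

Nothing in a governed metric catalog would ever generate `effective_start`; it only appears after a human notices the pattern.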
The Gap That Remains
Snowflake’s semantic layer aligns data.
What it cannot do — on its own — is align organizations.
You can have:
perfectly governed metrics
beautifully modeled hierarchies
pristine semantic logic
…and still deploy AI into an enterprise that lacks:
clear decision rights
ownership of outcomes
readiness for autonomous action
This is where most initiatives still fail — not because the data was wrong, but because the organization wasn’t ready to absorb what the data revealed.
That upstream validation — what I call Phase 0 — is the missing complement to today’s semantic push.
Without it, semantics risks becoming another form of integration theater: necessary, impressive, but insufficient to change outcomes.
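The gap can be made concrete with a toy readiness gate. This is a sketch in the spirit of Phase 0, with invented check names, not the actual assessment: the semantic layer satisfies the first check, and deployment still waits on the organizational ones.

```python
# Hypothetical readiness gate (illustrative criteria, not Phase 0 itself).
# Governed metrics clear the data-layer check; the organizational checks
# are what most initiatives skip.

CHECKS = {
    "metrics_governed": True,        # the semantic layer's job
    "decision_rights_assigned": False,
    "outcome_ownership_named": False,
    "autonomy_boundaries_agreed": False,
}

def ready_to_deploy(checks: dict) -> bool:
    """Deployment requires every check, not just the data-layer one."""
    return all(checks.values())

blockers = [name for name, ok in CHECKS.items() if not ok]
print(ready_to_deploy(CHECKS))  # False
print(blockers)
```

Pristine semantics with three open organizational blockers is exactly the "integration theater" failure mode: the gate stays closed no matter how good the data layer is.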
What This Moment Represents
I’m not saying Snowflake is late. They’re not. They’re right on time — for 2025.
AI made the cost of semantic incoherence visible at scale. When your LLM hallucinates revenue numbers because “margin” means six different things across six systems, you can’t ignore it anymore.
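Two of those six "margin" definitions, applied to the same row of data, show why the inconsistency is no longer ignorable. The column names and formulas are illustrative:

```python
# Same row, two legitimate-looking definitions of "margin".
row = {"revenue": 100.0, "cogs": 60.0, "freight": 5.0, "rebates": 3.0}

# Finance's definition: revenue minus cost of goods sold.
gross_margin_finance = (row["revenue"] - row["cogs"]) / row["revenue"]

# Operations' definition: also nets out freight and rebates.
gross_margin_ops = (row["revenue"] - row["cogs"] - row["freight"]
                    - row["rebates"]) / row["revenue"]

print(round(gross_margin_finance, 2))  # 0.4
print(round(gross_margin_ops, 2))      # 0.32
```

An LLM grounded on this table can faithfully report either number; without a single governed definition, both answers are "margin", and the model looks like it is hallucinating.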
The technology forced the conversation.
And that’s why this moment matters to me.
The conversation I’ve been trying to have — since 1998, since SR&ED, since DND, since the 2007-2008 documentation — is finally possible.
That’s not vindication. That’s an opening.
The question isn’t whether semantics matters. Snowflake just answered that with a $74 billion market cap behind it.
The question is: which semantics?
Data semantics alone won’t solve the readiness problem. You also need organizational semantics — shared meaning across stakeholders, not just shared definitions across databases.
The Bottom Line
Semantic infrastructure is essential.
It is also incomplete on its own.
To escape the 70–80% failure cycle that has followed every major technology wave, organizations need both:
semantic alignment at the data layer
readiness and ownership: semantic alignment at the organizational layer
I proved the second one worked in 1998. The industry is finally ready to talk about the first one in 2025.
When Semantics Becomes Strategy: The Conversation That’s Finally Possible
Posted on December 19, 2025
Now maybe we can talk about both.
— Procurement Insights
-30-