Browsing All posts tagged under »philosophy«

Is Microsoft’s 2026 Agentic ERP Architecture A Scaled Version Of The 2007 Mendocino Project — And What Has The Dynamics 365 Team Figured Out That The Public Positioning Has Not Yet Surfaced?

May 12, 2026

"In 2018, Microsoft Canada President Kevin Peesker said companies will either 'transform or be transformed.' In 2026, Sameer Verma describes where Microsoft has arrived. What sits underneath?"

Eric Kimberling’s Hidden Cost Of Agile In ERP And The Substrate Underneath It

May 11, 2026

"Eric Kimberling's piece on agile in ERP names a mechanism the Compounding Technology Shadow Wave™ work has been documenting at multi-decade scale. Agile does not eliminate shadow behavior. It can unintentionally legitimize it."

The Compounding Technology Shadow Wave™ Trilogy: Executive Summaries

May 9, 2026

Three pieces, one framework. The structural pattern AI initiatives are inheriting, the diagnosis that names it, and the financial cost of leaving it unresolved.

When the Constraint Moved: A 2004 Paper, Three Charts, and the Question the Discourse Has Been Avoiding

May 7, 2026

"Technology did not diminish because it became weaker. It diminished because the real constraint moved somewhere else." A 2004 paper meets the chart that now proves it.

Why Half of CEOs Believe Their Job Depends on Getting AI Right — And What the Discourse Keeps Missing

May 6, 2026

AI Success Requires More Than Truth. It Demands Accuracy.

You Can’t Assess What You Can’t Access: The Power of Independent Longitudinal Verification

May 5, 2026

"Incomplete assumptions don't disappear. They accumulate. And when the system scales, reality collects the debt."

Reimagining The Familiar (and Outdated) Iceberg Graphic

May 1, 2026

The familiar iceberg graphic addresses 10% of the system. Here is the other 90%.

When the Model Is the Problem: Why Strand Commonality™ and Strand Stability™ Matter More Than Ever in the Age of AI

April 16, 2026

The most dangerous moment in any AI deployment isn't when the system fails. It's when it succeeds — at optimizing a model that was never correctly defined in the first place. Here's the question no one has answered.

Oracle Says the Bottleneck Is Trust. The Archive Says It Goes One Layer Deeper.

March 25, 2026

The data layer is clean. The process layer has never been verified. That is where the AI promise breaks.

When One Model Says Yes and Five Say Wait: Why Multimodel Validation Matters

February 24, 2026

Six models. One question. Five said "probable." One said "certain." The difference is where the credibility lives.