Barilla Did Not Succeed Because of o9 Solutions. And That Changes Everything.
Posted on February 16, 2026
By Jon W. Hansen | Procurement Insights | February 2026
Barilla is one of the most impressive supply chain stories in the world. Family-owned since 1877. Thirty years of institutional learning about demand variability. A Databricks and Azure data foundation covering 90% of business functions before they ever selected a planning platform. By the time they chose o9 Solutions, they were already ready.
o9 Solutions did the job it was asked to do. But Barilla’s success had almost nothing to do with o9 Solutions — and everything to do with the conditions Barilla built over three decades.
This distinction matters. Because right now, somewhere in the world, a CPO is reading the Barilla case study and thinking: If we buy o9 Solutions, we can do what Barilla did.
They almost certainly cannot. And the reason is not that o9 Solutions is a bad platform. The reason is that the conditions that produced Barilla’s outcomes do not transfer through a case study. They never have.
The 80% Problem Nobody Talks About
The ProcureTech implementation failure rate has held at approximately 80% for three decades. Technology has improved enormously over that period. Platforms are more powerful, more integrated, more intelligent than anything available in the 1990s. The failure rate hasn’t moved.
This tells us something the industry has been unwilling to say: the variable that determines success is not the technology.
Virginia’s eVA program proved this first. The Commonwealth implemented Ariba and grew adoption from 1% to 80% of addressable spend over seven years. As I wrote in 2007: “eVA’s effectiveness had little to do with the technology and more to do with the methodology the Virginia brain trust employed.”
Then Ontario’s OECM proved it from the other direction. Same Ariba platform. Same sector. OECM had Virginia’s playbook — literally, their PowerPoint presentation. They lost $20 million. Same technology, opposite outcome. The difference was not the software. The difference was the organizational conditions.
Barilla confirms the pattern a third time. Extraordinary institutional readiness produces extraordinary outcomes — regardless of which platform is selected. And a fourth case, the County of Santa Clara, demonstrated that when readiness conditions erode, outcomes degrade even after initial success.
Introducing the Hansen Practitioner Transferability Score™
The Hansen Practitioner Transferability Score™ (PTS) is a diagnostic instrument that answers a question no vendor will ask you: Can your organization reproduce the outcomes described in this case study?
PTS does not measure how good a vendor is. That is what the Hansen Fit Score™ does. PTS measures whether the conditions that produced a vendor’s documented success exist in your organization.
It works by measuring six behavioral indicators — what we call the Hansen Strand Commonality™ — that predict implementation outcomes before a single dollar is spent on technology. These six strands have been observed, without exception, to co-exist in every documented success case and to be absent in every documented failure case across the Procurement Insights archive spanning 2007 to 2025.
The six strands are:
1. Complexity Acknowledgment. Does the organization map its internal diversity, or flatten it into uniform categories? Virginia mapped thousands of buying organizations across the Commonwealth. OECM flattened diverse member institutions into uniform tiers.
2. Process Ownership. Is the initiative owned by a business process leader — a CPO or VP Supply Chain — or by IT, a PMO, or an external consultant? Every success case in the archive is procurement-led. Every failure case has ownership elsewhere.
3. Institutional Learning Depth. Can the organization document prior failures and articulate what changed? Barilla traces institutional learning to a 1990s Harvard case study on the bullwhip effect. OECM had no documented learning cycle — they went directly from Virginia’s presentation to implementation.
4. Data Foundation Precedence. Was data governance solved before the platform arrived, or is the technology expected to fix data problems? Barilla built their Databricks/Azure foundation before selecting o9 Solutions. Organizations that expect planning tools to fix data entropy are repeating a pattern that has never worked.
5. Incentive Architecture Alignment. Is the vendor rewarded for adoption and outcomes, or for deployment and licensing? Is there a single accountable leader with multi-year budget authority? Misaligned incentives erase even good methodology.
6. Adoption Patience. Does the project plan measure adoption outcomes at 6, 12, and 24 months, or only deployment milestones? Virginia’s seven-year adoption curve from 1% to 80% was visible because they measured it. Organizations that define success as “go-live” stop measuring before the outcome is knowable.
Each strand is scored 0, 1, or 2. The maximum score is 12. Organizations scoring 10–12 can reasonably expect to achieve 80–95% of a reference case’s outcomes. Organizations scoring 0–3 are in the OECM failure pattern — case study outcomes are not transferable regardless of technology quality.
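For readers who want the arithmetic spelled out, here is a minimal sketch in Python. It assumes only what this article states (six strands, 0/1/2 scoring, and the 10–12 and 0–3 interpretation bands); the strand keys, the example scores, and the middle-band label are illustrative, not drawn from the Methodology Specification.

```python
# Minimal sketch of the Hansen Practitioner Transferability Score (PTS)
# arithmetic as described in this article: six strands, each scored 0-2,
# for a maximum of 12. Only the 10-12 and 0-3 bands are defined in the
# article, so the middle-band label below is an assumption.

STRANDS = (
    "complexity_acknowledgment",
    "process_ownership",
    "institutional_learning_depth",
    "data_foundation_precedence",
    "incentive_architecture_alignment",
    "adoption_patience",
)

def pts_score(strand_scores: dict) -> int:
    """Sum six strand scores (each 0, 1, or 2) into a PTS out of 12."""
    if set(strand_scores) != set(STRANDS):
        raise ValueError("Score exactly the six strands.")
    if any(v not in (0, 1, 2) for v in strand_scores.values()):
        raise ValueError("Each strand is scored 0, 1, or 2.")
    return sum(strand_scores.values())

def interpret(score: int) -> str:
    """Map a PTS to the interpretation bands quoted in the article."""
    if score >= 10:
        return "80-95% of the reference case's outcomes are plausible."
    if score <= 3:
        return "OECM failure pattern: outcomes are not transferable."
    return "Partial readiness (band not defined in the article)."

# Illustrative self-assessment, not a real organization's result.
example = dict.fromkeys(STRANDS, 1)    # a 1 on every strand
example["process_ownership"] = 2       # e.g., a procurement-led initiative
print(pts_score(example), "->", interpret(pts_score(example)))  # 7 -> ...
```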
What This Means for Barilla — and for You
Barilla scores 12 out of 12 on the PTS. Every strand is present, clearly evidenced, and deeply embedded in the organization’s operating culture.
But here is the critical finding: Barilla’s score tells you about Barilla, not about Solution o9.
If your organization scores 5 out of 12, the Transferability Gap is 7. That means the conditions that produced Barilla’s outcomes are largely absent in your organization. Buying o9 Solutions does not close that gap. No technology closes that gap. Only organizational readiness work — what we call Phase 0 — addresses the conditions that the PTS reveals.
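The gap calculation itself is one line. A brief illustration, using the article’s example figures rather than any real assessment results:

```python
# Transferability Gap: the reference case's PTS minus your organization's PTS.
# Values below are the article's example figures, not assessment results.
barilla_pts = 12   # Barilla's published PTS
your_pts = 5       # the article's hypothetical organization
gap = barilla_pts - your_pts
print(gap)  # 7 -> readiness work (Phase 0), not technology, closes this gap
```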
This is not a criticism of o9 Solutions, or of Ariba, or of any platform. It is a structural observation about how success actually works in ProcureTech. The technology is the instrument. The organization is the musician. A Stradivarius does not make you a violinist.
The Practitioner’s New Question
Before reading any vendor case study, practitioners can now ask a different question. Not: Does this vendor’s technology do what I need? That is the Hansen Fit Score™ question, and it matters. But alongside it: Do I have the conditions that made this case study’s outcomes possible?
The PTS Self-Assessment takes 30 minutes. Six questions, one per strand. The score tells you whether a given case study is relevant to your organization — or whether it is, as the data consistently shows, a marketing artifact dressed as a decision aid.
The full Hansen Practitioner Transferability Score™ Methodology Specification is available as a public reference document. It includes the complete scoring framework, calibration cases (Virginia eVA, Barilla, OECM, and Santa Clara), verification protocols, and a rapid self-assessment worksheet.
Case studies do not transfer success. Readiness patterns do.
The Hansen Practitioner Transferability Score™ was developed through RAM 2025™ multimodel assessment and validated independently by five AI models analyzing the same evidence base. PTS is the client-side companion to the Hansen Fit Score™. Together, they predict transformation success before money is spent — and verify it afterward.
For Phase 0 readiness assessment or HFS vendor evaluation inquiries, contact Procurement Insights.
Hansen Models™ | Hansen Method™ | Hansen Strand Commonality™ | Hansen Fit Score™ | Hansen Practitioner Transferability Score™ | RAM 2025™ are trademarks of Hansen Models™.
-30-