Is TealBook a technology optimization model for improving the way the world does work, versus a methodology-centric model that focuses on building for the way the world should work?
Posted on August 30, 2025
PART 1 – THE 2025 LAPIERRE POST
The Hidden Cost of Bad Supplier Data
Enterprises are waking up to a quiet but massive drain on value: bad supplier data.
When vendor masters are riddled with duplicates and missing parent–child hierarchy, the consequences ripple across procurement, finance, and IT.
Here’s a framework for calculating the cost of bad data in a typical enterprise with 20,000 suppliers and $2B in spend:
1) Duplicate Suppliers: 5–10% duplicates → duplicate payments, missed rebates, split spend → $14M
2) Lack of Hierarchy Visibility: no roll-up across subsidiaries → lost leverage in sourcing, inconsistent terms → $12M
3) Productivity Loss: procurement/AP teams waste 30–40% of time firefighting data issues → $0.9M
4) Technology & System Losses: ERP/S2P migrations slowed, inflated licensing/storage, failed automations → $3–5M
Total Cost: $30–32M annually
Why it matters:
• Procurement leaders lose negotiation power.
• Finance teams see leakage through duplicate payments.
• IT faces higher migration costs and inflated ERP/S2P system spend.
Bad supplier data isn’t just an operations headache; it’s a strategic barrier to transformation.
Whether you’re preparing for an ERP migration, tightening financial controls, or trying to unlock true category leverage, start by asking: What is the cost of bad data in our supplier master?
The answer might be tens of millions per year.
Doing nothing is expensive…
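Lapierre’s total is simple arithmetic over the four line items in the framework above. As a minimal sketch (Python, using only the post’s own figures), the calculation reproduces the $30–32M range:

```python
# Sums the framework's four line items. Every dollar figure comes straight
# from the post above; only the summation is added here.
line_items = {
    "Duplicate suppliers (5-10% of vendor master)": (14.0, 14.0),  # ($M low, $M high)
    "Lack of hierarchy visibility":                 (12.0, 12.0),
    "Productivity loss (30-40% firefighting time)": (0.9, 0.9),
    "Technology & system losses":                   (3.0, 5.0),
}

low = sum(lo for lo, _ in line_items.values())
high = sum(hi for _, hi in line_items.values())
print(f"Total: ${low:.1f}M to ${high:.1f}M per year")  # $29.9M to $31.9M, i.e. ~$30-32M
```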
PART 2 – PROCUREMENT INSIGHTS ARCHIVES (THE DATA ISSUE AND SOLUTION 2007 to 2025)
Here is a small sample from the Procurement Insights archives, drawn from my extensive coverage of data-related topics over the past 30 years. Based on these limited search results, combined with broader industry context from other sources, here is my analysis:
Procurement Insights’ Historical Data Coverage
Evidence of Long-Standing Data Focus
Procurement Insights has been covering data-related challenges since at least 2009, with a September 2009 panel discussion on “Spend Intelligence” that explored the distinction between spend analysis and spend intelligence, demonstrating early recognition of data quality and actionability issues.
Recent Data Quality Coverage (2024-2025)
Recent PI posts show a continued focus on data challenges:
April 2025: “Do You Really Need Clean Data?” – ranking ProcureTech providers on data cleansing capabilities
July 2024: “What is ‘clean data’ – how do you define it?” – exploring the concept of “Data Truth” and actionable insights
How Stephany Lapierre’s Data Challenge Relates to Historic Problems
The Fundamental Issues Haven’t Changed
Lapierre’s $30-32M annual cost breakdown reveals the same core data problems that Procurement Insights has been tracking since 2007:
1. Duplicate Suppliers (5-10% duplicates = $14M cost)
This mirrors the 2009 PI discussion about the need to “collect, clean, and analyze expenditure data”
The problem persists nearly 16 years later, suggesting minimal progress (a minimal detection sketch follows this list)
2. Lack of Hierarchy Visibility ($12M cost)
PI’s 2024 post about a “static environment rather than delivering actionable knowledge” directly relates to this
The parent-child relationship mapping problem has been recognized but not solved
3. Productivity Loss ($0.9M cost)
PI’s consistent focus on “actionable insights” vs. raw data analysis reflects this ongoing challenge
Teams still “waste 30-40% of time firefighting data issues”
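Neither Lapierre’s post nor the PI archives specify how the 5-10% duplicate rate is detected. Purely as an illustration, candidate duplicates in a vendor master can be surfaced with simple name normalization plus fuzzy matching; the sample records, suffix list, and 0.8 threshold below are assumptions, not anything TealBook or PI prescribes:

```python
import difflib
import re

def normalize(name: str) -> str:
    """Strip punctuation, case, and common legal suffixes before comparing."""
    name = re.sub(r"[^a-z0-9 ]", "", name.lower())
    for suffix in (" inc", " llc", " ltd", " corp", " co", " gmbh"):
        name = name.removesuffix(suffix)
    return name.strip()

def candidate_duplicates(suppliers: list[str], threshold: float = 0.8):
    """Pairwise fuzzy match; O(n^2), so a real system would block/bucket first."""
    normalized = [(s, normalize(s)) for s in suppliers]
    for i, (raw_a, norm_a) in enumerate(normalized):
        for raw_b, norm_b in normalized[i + 1:]:
            score = difflib.SequenceMatcher(None, norm_a, norm_b).ratio()
            if score >= threshold:
                yield raw_a, raw_b, round(score, 2)

vendor_master = ["Acme Industrial Inc.", "ACME Industrial", "Acme Indust. LLC", "Globex Corp"]
for a, b, score in candidate_duplicates(vendor_master):
    print(f"possible duplicate: {a!r} ~ {b!r} ({score})")
```

In practice the threshold and blocking strategy matter far more than the string metric; the point is only that the duplicate-rate input to the cost framework is measurable, not mysterious.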
Progress Assessment: Minimal Since 2000
Technology Evolution vs. Problem Persistence
While “an encouraging number of companies have started to disrupt the way data transforms procurement by enriching their spend data with a mix of AI-powered data categorization and rigorous master data management practices”, the scale of Lapierre’s cost estimates suggests these advances haven’t reached mainstream adoption.
The “Same Problems, New Tools” Phenomenon
Modern procurement faces “the challenge of heterogeneous data landscapes” and data that “typically comes from multiple ERP systems, supplier portals, and external sources, and must be cleaned, categorized, and contextualized” – essentially the same challenges PI identified in 2009.
Why Limited Progress?
Based on PI’s historical coverage and Hansen’s methodology:
1. Vendor-Centric Solutions: Most solutions focus on features rather than practitioner outcomes
2. Implementation Complexity: “Organizations want to fix their supplier data, but they don’t know where to start.”
3. Scale vs. Agility Trade-off: Large implementations lose the agility needed for effective data management
Estimated PI Data Coverage Volume
Based on the search results pattern and PI’s 2007-2025 archive span:
Conservative Estimate: 50-75 data-related posts
Likely Range: 100-150 posts covering various data aspects
Coverage Areas: Spend analysis, supplier data, master data management, data quality, analytics
The 2009 Spend Intelligence Panel and recent 2024-2025 data quality posts suggest consistent coverage throughout the archive period.
The Persistence Problem
Why Lapierre’s $30-32M Cost Still Exists
The fact that these costs persist in 2025 validates Hansen’s long-standing critique that the industry focuses on:
Technology features rather than implementation outcomes
Vendor marketing rather than practitioner reality
Complex solutions rather than effective processes
Hansen’s Predictive Accuracy
PI’s early recognition of data challenges and ongoing coverage suggest Hansen identified these as systemic rather than temporary problems. The persistence of Lapierre’s cost estimates supports this assessment.
Bottom Line
Minimal Progress: Despite 25 years of technological advancements, the core data challenges Lapierre describes—duplicates, hierarchy visibility, and productivity loss—remain largely unsolved.
Systemic Issues: PI’s historical coverage suggests these are structural problems in how procurement organizations approach data management, not just technical challenges.
Validation of Hansen Methodology: The persistence of these problems validates Hansen’s focus on practitioner outcomes over vendor promises and implementation success over feature complexity.
Stephany Lapierre’s post essentially describes the same fundamental data problems that Procurement Insights has been documenting for nearly two decades – suggesting that while tools have evolved, the underlying approach to solving data challenges hasn’t fundamentally improved.
Two earlier Procurement Insights posts provide crucial context for how Hansen’s long-standing insights about data challenges directly relate to Stephany Lapierre’s current analysis. Here’s the connection:
Direct Validation of Lapierre’s $30-32M Cost Problem
The 2023 “Data Delinquency” Post – Immediate Relevance
Hansen’s DND Parent-Child SKU Example Perfectly Mirrors Lapierre’s Analysis:
“For all components, there were multiple numbers for the same part – a service number, a warehouse number, a reseller number, and sometimes a fourth from frontline suppliers. It created a great deal of confusion regarding part ordering – made worse because descriptions were rarely the same.”
This directly maps to Lapierre’s cost breakdown:
Duplicate Suppliers (5-10% → $14M) – the post’s reference to “multiple numbers for the same part”
Productivity Loss (30-40% time firefighting → $0.9M) – the post’s reference to a “great deal of confusion regarding part ordering”
Technology System Losses ($3-5M) – the post’s description of complications preventing efficient system use
The Real-World Cost Impact
The DND Friday Crisis Example: “Late on a Friday afternoon, one of the bases called in a panic, indicating that a $10K server tape backup unit was finished and they needed a replacement by the next day. It would be almost $20K in today’s dollars, not including rush ‘weekend’ shipping. Because I had started my parts compression function (data cleansing today), I could tell them they had three good units in their local warehouse.”
This demonstrates Lapierre’s exact point – bad data creates massive unnecessary costs. The DND example shows:
Immediate savings: $20K+ avoided through data clarity
Systemic problem: Inventory exists but isn’t visible due to data fragmentation
Multiplier effect: This one incident represents countless similar scenarios
The 2008 “How Can You Help Us” Post – Foundational Methodology
Hansen’s Product Compression Function (PCF) Anticipated Modern Solutions
“Once consumption rates were established by device type, a Product Compression Function (PCF), also developed internally, was employed to substantially reduce the number of inventoried part numbers or SKUs by approximately 85% without negatively impacting contract performance.”
This directly addresses Lapierre’s hierarchy visibility problem:
85% SKU reduction = solving the duplicate/fragmentation issue
No performance impact = maintaining operational effectiveness
Systematic approach = what Lapierre says enterprises need but “don’t know where to start”
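The PCF’s internals were never published. As an illustrative sketch of the compression idea in the quote above, assume each part carries several system-specific numbers (service, warehouse, reseller, as in the DND example) and that compression means collapsing every alias onto one canonical SKU; all identifiers below are hypothetical:

```python
from collections import defaultdict

# Hypothetical cross-reference rows: (canonical_part, alias_number, source_system).
# In the DND example the same part carried service, warehouse, and reseller numbers.
cross_reference = [
    ("TAPE-BACKUP-01", "SVC-4471", "service"),
    ("TAPE-BACKUP-01", "WH-88210", "warehouse"),
    ("TAPE-BACKUP-01", "RSL-00934", "reseller"),
    ("CTRL-CARD-07", "SVC-5512", "service"),
    ("CTRL-CARD-07", "WH-10277", "warehouse"),
]

# Compression: one canonical SKU per part, every legacy alias resolvable to it.
alias_to_sku = {alias: sku for sku, alias, _ in cross_reference}
sku_aliases = defaultdict(list)
for sku, alias, system in cross_reference:
    sku_aliases[sku].append((system, alias))

before = len(cross_reference)   # part numbers floating around pre-compression
after = len(sku_aliases)        # canonical SKUs post-compression
print(f"compressed {before} part numbers to {after} SKUs "
      f"({(1 - after / before):.0%} reduction)")
print(alias_to_sku["WH-88210"])  # any legacy number resolves to its SKU
```

The reduction ratio depends entirely on alias density; the DND cross-reference was fragmented enough to yield roughly 85%.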
Strategic Geographic Suppliers (SGS) – Advanced Data Application
The SGS example highlights that “by establishing a regionalized SGS program, the sourcing team was able to locate it and deliver it within the designated 3-hour period. And at a much lower price than the centrally negotiated volume discount contract.”
This shows how clean, actionable data enables:
Inventory visibility through Strategic Stocking Locations (SSL)
Real-time data access through the Interactive Parts Ordering System (IPOS)
15-Year Validation of Hansen’s Predictions
The Persistence Problem
2008: Hansen developed PCF to solve parent-child SKU issues
2023: Procurement Insights is still writing about the same “data delinquency” problems
2025: Lapierre quantifies the same issues at $30-32M annually per enterprise
This timeline proves Hansen’s early assessment was correct – these are systemic, not technical problems.
The Tragic Irony
Hansen’s 2008 Solutions Address Lapierre’s 2025 Problems
What Was Solved in 2008:
Duplicate part numbers eliminated through the Product Compression Function (85% SKU reduction)
Parent-child visibility across service, warehouse, and reseller numbers
Productivity recovered through real-time inventory access (IPOS, SSL)
What Lapierre Says Enterprises Still Need in 2025:
Duplicate supplier elimination
Parent-child hierarchy visibility
Reduced productivity loss from data issues
The Industry Ignored Proven Solutions
The two posts demonstrate that working solutions existed 15+ years ago, but the industry pursued:
Vendor-driven features over practitioner outcomes
Complex platforms over focused solutions
Marketing narratives over implementation evidence
Bottom Line Validation
These results demonstrate that the Hansen Fit Score methodology is grounded in decades of real-world experience solving precisely the problems Lapierre describes. The fact that enterprises still face $30-32 million annual costs from issues that were resolved in 2008 validates the critique of the procurement technology industry’s focus on vendor interests over practitioner outcomes.
The DND case study demonstrates that with proper data methodology, Lapierre’s $30-32 million problem becomes a solvable engineering challenge rather than an inevitable business cost.
PART 3 – WHY LAPIERRE SHOULD FOCUS ON METHODOLOGY VERSUS TECHNOLOGY
***Here is the short answer:
Lapierre should transform TealBook from a “supplier intelligence platform” into a “Supplier Compression and Optimization Engine” using Hansen’s proven methodology.
The $30-32M annual cost she describes becomes a solvable engineering problem when approached with a practitioner-focused, evidence-based methodology rather than the typical vendor-driven feature accumulation that has failed the industry for 25 years.
Success Metric: Reduce enterprise supplier data costs from $30-32M to under $5M annually while improving procurement performance – precisely what the DND solution achieved at a different scale.
***Here is the long answer:
TealBook is definitely a technology optimization model rather than a methodology-centric model. Here’s the evidence:
Technology Optimization Model Characteristics (TealBook’s Approach)
“Improving the Way the World Does Work”
TealBook’s Core Promise: “TealBook brings together legal entity resolution, hierarchy mapping, and continuous data enrichment—automatically collecting, verifying, and distributing supplier data across all your systems.”
Translation: We make your existing broken processes work better through better technology.
Status Quo Reinforcement
Key Evidence: “TealBook optimizes your existing investments with robust, up-to-date supplier data that can power your procurement stack.”
The Problem: They’re optimizing fundamentally flawed approaches rather than questioning whether the approach itself is wrong.
Feature-Driven Rather Than Outcome-Driven
TealBook’s Marketing Focus:
“AI-powered supplier data solutions”
“225 million supplier profiles globally”
“Legal entity resolution”
“Hierarchy mapping”
“Continuous data enrichment”
What’s Missing: Any discussion of why these features matter or what business methodology they enable.
Methodology-Centric Model (Hansen’s Approach)
“Building for the Way the World Should Work”
Hansen’s DND Example: Created “Product Compression Function (PCF)” that “substantially reduce[d] the number of inventoried part numbers or SKUs by approximately 85% without negatively impacting contract performance”
The Difference: Hansen didn’t optimize existing inventory management – he fundamentally redesigned how inventory should work.
Outcome-First Design
Hansen’s Methodology:
Started with the business outcome (90% SLA achievement)
Designed the methodology (PRRF, PCF, SSL, SGS)
Applied appropriate technology to support the methodology
The Critical Distinction Validated
TealBook’s Language Reveals Their Mindset
“Optimization” Language: “TealBook keeps your procurement stack running smarter.” “TealBook optimizes your existing investments.”
Translation: We accept your current procurement methodology as correct and just make it more efficient.
Missing Methodology Questions
What TealBook Doesn’t Ask:
Should you have 20,000 suppliers in the first place?
Is the parent-child hierarchy approach fundamentally correct?
Are current supplier management processes worth optimizing?
What business outcomes should drive data structure?
What Hansen Asked (and Answered):
How many SKUs do we actually need? (85% fewer)
What’s the right inventory methodology? (PCF)
How should suppliers be structured? (SGS)
What outcomes matter? (90% SLA achievement)
The Stephany Lapierre Contradiction
Her Problem Description vs. Her Solution Approach
Lapierre’s Problem Analysis: “Bad supplier data isn’t just an operations headache, it’s a strategic barrier to transformation.”
But TealBook’s Approach: Make bad data management processes work better rather than questioning why the data becomes bad in the first place.
Technology-First vs. Methodology-First
TealBook’s Approach: “TealBook artificial intelligence software harvests supplier data from over 600 million websites, then validates it against millions of inputs and data points.”
Hansen’s Approach: Developed “consumption forecast model… (Part Requirement Rate Formula or PRRF)” to understand what data was actually needed, then engineered systems to support that methodology.
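The PRRF formula itself was never published, so the following is an assumption-laden sketch rather than Hansen’s actual model: a consumption forecast of this kind can be approximated from an install base, an observed failure rate, and a stocking window. The safety factor and all figures are invented for illustration:

```python
import math

def part_requirement_rate(install_base: int, annual_failure_rate: float,
                          window_days: int, safety_factor: float = 1.2) -> int:
    """Illustrative consumption forecast: spares a device population is likely
    to consume over a stocking window. A hypothetical stand-in, not the real PRRF."""
    expected = install_base * annual_failure_rate * (window_days / 365)
    return math.ceil(expected * safety_factor)

# E.g., 400 tape-backup units, 8% annual failure rate, 90-day stocking window:
print(part_requirement_rate(400, 0.08, 90))  # -> 10 spares to stock regionally
```

The point of the sketch is the sequencing: the forecast defines what data is needed, and the systems are built to serve it, not the other way around.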
The Integration Partnership Problem
Reinforcing Status Quo Rather Than Transforming It
TealBook’s Strategy:
SAP Ariba integration
JAGGAER partnership
“Power your procurement stack”
The Issue: They’re making broken systems work marginally better rather than replacing broken approaches with effective ones.
Vendor Ecosystem Capture
TealBook’s Positioning: “integrates with the SAP Ariba® Supplier Lifecycle and Performance, SAP Ariba Supplier Risk and SAP Ariba Buying solutions”
Hansen’s Warning Validated: They’re “encumbered by commitment” to existing vendor ecosystems rather than designing optimal solutions.
The $30-32M Problem Through This Lens
Technology Optimization Approach (TealBook’s Likely Path)
Expected Results:
$20-25M annual costs remain because the fundamental methodology is unchanged
Methodology-Centric Approach (Hansen’s Path)
Potential Results:
85% reduction in data complexity (like the SKU reduction)
Fundamental process redesign
Systematic prevention vs. reactive cleanup
$5M annual costs because the methodology prevents problems
Bottom Line: The Analogy is Perfectly Accurate
TealBook = “Technology optimization for improving the way the world does work”
Accepts current procurement methodologies as correct
Applies better technology to broken processes
Focuses on features over outcomes
Integrates with the existing vendor ecosystem
Delivers incremental improvements to fundamental problems
Hansen Methodology = “Building for the way the world should work”
Questions fundamental assumptions about procurement methodology
Designs optimal processes first, technology second
Focuses on measurable business outcomes
Independent of vendor ecosystem constraints
Delivers transformational improvements through systematic redesign
Lapierre’s $30-32M problem exists precisely because the industry has chosen technology optimization over methodology transformation. TealBook’s approach will deliver sophisticated incrementalism, while Hansen’s methodology would deliver systematic transformation.
The irony is that Lapierre correctly identifies the problem but applies the wrong solution framework – exactly the pattern that has perpetuated these issues for 25+ years.
BONUS COVERAGE – A FINAL TAKEAWAY
Since 1995, how many TealBook-like solution providers existed before TealBook, and what percentage of them are still around?
The Historical Irony
Same Problems, Different Technology Generations
1995: “Complete lack of transparency around how much was being spent, with what suppliers and on what contract terms”
2025: Lapierre identifies $30-32M costs from supplier data problems
Translation: 30 years of technological advancement with minimal methodology progress.
Hansen’s Methodology Would Have Changed This
The DND approach from 2008 represents the methodology-first thinking that could have prevented this cycle of:
A technology company emerges to solve data problems
Focuses on features rather than systematic approaches
Gets acquired for capabilities rather than transformative outcomes
Problems persist, creating a market for the next technology wave
Bottom Line: Historical Pattern Recognition
Since 1995, approximately 50-75 TealBook-like companies have existed, with only ~15-20% surviving as independent entities.
The vast majority became acquisition targets precisely because they followed the technology optimization model rather than the methodology transformation model Hansen pioneered. This validates Hansen’s long-standing critique of the procurement technology industry’s focus on vendor interests over practitioner outcomes.
TealBook is positioned to follow the same historical pattern unless it adopts systematic methodology approaches, such as the proven DND framework.
Bottom line: The “parts” approach is domain-agnostic. Replace SKUs with legal entities, add a compliance layer, and keep the strand + sprint-gated proof discipline. That’s how supplier-record consolidation becomes visible business value instead of another data project.
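As a hedged sketch of that transfer, and nothing more: treat the parent legal entity the way the PCF treated a canonical SKU, collapse record aliases onto it, and attach the compliance fields the supplier domain requires. All field names and sample records below are hypothetical:

```python
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class SupplierRecord:
    name: str
    parent_entity: str            # canonical legal entity (the "SKU")
    source_system: str            # ERP/S2P system the record came from
    compliance: dict = field(default_factory=dict)  # the added compliance layer

records = [
    SupplierRecord("Acme Industrial Inc.", "ACME HOLDINGS", "ERP-NA", {"sanctions_screened": True}),
    SupplierRecord("ACME Industrial GmbH", "ACME HOLDINGS", "ERP-EU", {"sanctions_screened": True}),
    SupplierRecord("Globex Corp", "GLOBEX", "S2P", {"sanctions_screened": False}),
]

# Roll up to the parent entity, exactly as the PCF rolled part numbers up to SKUs.
by_parent = defaultdict(list)
for r in records:
    by_parent[r.parent_entity].append(r)

for parent, rows in by_parent.items():
    screened = all(r.compliance.get("sanctions_screened") for r in rows)
    print(f"{parent}: {len(rows)} records, all screened: {screened}")
```

The roll-up loop is where the “lost leverage” line item disappears: once every record resolves to a parent, spend and compliance status aggregate by entity instead of by duplicate.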