What is continuous, self-cleaning data?

Posted on April 15, 2025



To evaluate how the Agent-based Metaprise model, leveraging self-learning algorithms in the seven-year Department of National Defence (DND) implementation case study, addresses clean data challenges, we'll extend our prior discussions (failure rates, CAGR, DND context) by integrating self-learning algorithms into Hansen's framework and contrasting it with the tech-heavy Intake and Orchestration model.

The focus remains on clean data’s role in overcoming ProcureTech’s 50-70% failure rate, ensuring adoption, and driving CAGR.

“Clean data is not a ‘once done’ and static task but requires a continuous learning loopback process for self-learning algorithms. Otherwise, formerly clean data will revert to its prior state of inaccuracy. In an overly simplified example, why must you launder your clothes or wash the dishes repeatedly?” – Jon Hansen, Procurement Insights (1998 to 2025)

Context: DND Implementation and Self-Learning Algorithms

  • DND Background: Hansen’s DND project (1998-2005) transformed procurement for Canada’s Department of National Defence, streamlining purchasing (e.g., MRO products) amid bureaucratic silos, legacy systems, and strict compliance needs. Cited as a success in Procurement Insights (2007-2014), it embodied Hansen’s agent-based approach, likely a precursor to the “Metaprise” model, prioritizing the shared relationship between human agent experience and expertise and technology-based AI agents.
  • Metaprise in DND: As discussed, Hansen’s model initially used human agents (procurement officers, program managers, and external partners such as suppliers) to manage processes and data adaptively, aligning with DND’s realities (security, diverse suppliers), before transferring that knowledge to self-learning algorithms. It thus avoided rigid “equation-based doom loops” by empowering users to curate contextually relevant data, as part of their regular daily tasks, before introducing it into the algorithm model. Hansen then used a progressive self-learning loopback process, rather than something like SAP’s Joule feedback loops, to manage human agent input at the top of the funnel, with AI agents taking over the process through to confirmed product delivery. That is how Hansen got around the rigidity issue.
  • Self-Learning Algorithms: Adding self-learning algorithms (e.g., machine learning models that improve via feedback) to Metaprise enhanced human agents’ ability to refine data and decisions before introducing them into the learning loop. For DND, the algorithms learned from historical performance and real-time priorities (e.g., supplier vetting patterns) to keep data clean and optimize desired outcomes, while human agents retained control over relevance and compliance through a weighted-importance mechanism. The accuracy of the algorithm’s output improved with each new transaction, ultimately reaching a 97.3% accuracy/delivery rate in support of the DND IT infrastructure across the country.
  • Data Challenges: The DND contract faced fragmented data (silos across branches), legacy inconsistencies, human errors, and high-stakes accuracy needs, mirroring ProcureTech’s 20+ year clean data struggle.
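The continuous learning loopback described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (the class and field names are my own, not from Hansen's actual DND system): human agents curate records at the top of the funnel, and the algorithm learns from those approvals so it can flag later records that deviate from human-validated data.

```python
from collections import defaultdict

class LoopbackCleaner:
    """Illustrative sketch of a continuous learning loopback process.
    Human agents validate records first; the cleaner learns which
    values are trusted and flags later records that deviate."""

    def __init__(self):
        # Count of human-confirmed observations per (field, value) pair.
        self.trusted = defaultdict(int)

    def learn(self, record, human_approved):
        """Top-of-funnel step: a human agent curates the record
        before it enters the learning loop."""
        if human_approved:
            for field, value in record.items():
                self.trusted[(field, value.strip().lower())] += 1

    def flag(self, record):
        """AI-agent step: flag any field whose value has never
        appeared in human-approved data."""
        return [f for f, v in record.items()
                if self.trusted[(f, v.strip().lower())] == 0]

cleaner = LoopbackCleaner()
cleaner.learn({"supplier": "Acme Corp", "currency": "CAD"}, human_approved=True)

# A later transaction: the normalized supplier name matches trusted data,
# but the unvetted currency value is flagged for human review.
issues = cleaner.flag({"supplier": "ACME CORP ", "currency": "USD"})
# issues == ["currency"]
```

The point of the sketch is the division of labor: the human agent's approval is the training signal, and each new transaction either reinforces the trusted set or surfaces an exception, so cleaning never becomes a one-time task.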

Understanding the Models

Metaprise with Self-Learning Algorithms (DND Case Study):

  • Core Idea: Hansen’s Metaprise envisions procurement as a network of agents (human practitioners and technology systems/AI agents) making localized decisions. Self-learning algorithms augment this by analyzing agent actions (e.g., data entries, supplier choices) to improve data quality and process efficiency, while agents guide outcomes to fit DND’s context.
  • Key Features:
    • Practitioner-Led: Agents (DND buyers) own workflows to teach self-learning algorithms.
    • Data Approach: Algorithms learn from agent-validated data to flag errors (e.g., duplicate suppliers), suggest standardizations, or predict needs – including end-to-end product delivery to the right destination.
    • Adaptability: Flexible to DND’s silos, legacy systems, and security rules, iterating via human-AI collaboration.
    • Goal: Boost adoption, cut failures (50-70%), and drive CAGR through retention by delivering reliable outcomes.
  • Algorithm Role: Acts as a supportive tool—e.g., learning to refine contract data from initial human agent feedback/input—keeping humans in the loop before fully transitioning the process to the self-learning algorithms.
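One concrete example of the "flag errors, suggest standardizations" role described above is duplicate-supplier detection. The sketch below is a hypothetical illustration using Python's standard-library `difflib`, not the actual DND implementation: the algorithm only *suggests* likely duplicate pairs with a similarity score, leaving the confirm/reject decision to a human agent, which keeps practitioners in the loop.

```python
from difflib import SequenceMatcher

def find_duplicate_suppliers(names, threshold=0.85):
    """Suggest likely duplicate supplier records for human review.
    Names are normalized (case, punctuation, whitespace) and compared
    pairwise; pairs scoring at or above the threshold are flagged."""
    def norm(name):
        return " ".join(name.lower().replace(".", "").replace(",", "").split())

    pairs = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            score = SequenceMatcher(None, norm(names[i]), norm(names[j])).ratio()
            if score >= threshold:
                pairs.append((names[i], names[j], round(score, 2)))
    return pairs

suggestions = find_duplicate_suppliers(
    ["Acme Corp.", "ACME Corporation", "Northern Supply Ltd", "Acme Corp"])
# "Acme Corp." and "Acme Corp" normalize identically and are flagged;
# "Northern Supply Ltd" is not flagged against anything.
```

Tuning the threshold is itself a human decision: set it high and the tool only flags near-certain duplicates; lower it and more borderline pairs reach the review queue.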

Intake and Orchestration Model:

  • Core Idea: Promoted by vendors like Zip, GEP, and ORO Labs (2023-2025), this model uses centralized intake portals (forms, AI chatbots) and orchestration engines to streamline requests and integrate systems (ERP, sourcing) via automation and AI, aiming for enterprise efficiency.
  • Key Features:
    • Centralized Tech: Standardizes inputs and automates workflows, often using AI for insights (e.g., spend analysis).
    • Data Approach: Requires clean, uniform data to feed AI and APIs, vulnerable to errors if inputs or backends are inconsistent – which they always are.
    • Automation Focus: Prioritizes speed and scale, reducing manual tasks but relying on system interoperability.
    • Goal: Drive adoption and growth, though fragile against dirty data, risking churn and capping CAGR.
  • Algorithm Role: Heavily integrated, driving decisions (e.g., supplier ranking), but errors amplify without clean data.

Why Metaprise with Algorithms Excels in the DND Case Scenario

Hansen’s model, enhanced with self-learning algorithms, outperforms Intake and Orchestration because:

  • Resilience to Messiness: Cooperating human and AI agents clean data contextually, with algorithms amplifying accuracy while navigating silos, legacy systems, and resistance (the “broken leg” of enterprise dysfunction). Orchestration’s AI crashes without pristine data, as 70% of AI initiatives fail (McKinsey, 2022).
  • Human-AI Synergy: Human agents initially guide algorithms by introducing pertinent data, ensuring compliance and trust, unlike orchestration’s GIGO risks. DND’s success (2007) reflects Hansen’s user focus, cutting 50-70% failure rates.
  • Adaptability: Fits complexity (security, scale), avoiding integration flops (60-75%, Gartner 2018), while algorithms learn iteratively, unlike orchestration’s rigid standardization.

“The critical part of the human-led agent-based model is that the continuous learning capabilities of the algorithms occur by adapting to how internal and external human agents work in the real world in the end-to-end supply chain to effectively and efficiently manage and grade performance metrics from order placement through to order delivery. In essence, the Agent-Based Metaprise model performs a self-cleaning of the data as a part of the human and technology agents’ daily routine inputs becoming a seamless part of the order to fulfillment process.”

Results over the seven-year DND case study period are as follows:

  • Improvement from 50% to 97.3% next-day national and international delivery within three months
  • Year-over-year COG savings of 23%
  • An FTE reduction across multiple supporting organizations from 23 to 3 in 18 months

Modern Context (2025)

Intake and Orchestration’s polish and scale (e.g., GEP’s AI) suit clean environments, but in most organizations today, non-agent-based chaos mirrors ProcureTech’s ongoing data woes. Metaprise’s human-AI blend better handles real-world grit, ensuring retention and steady CAGR, whereas orchestration’s riskier growth is prone to churn when data falters.

Conclusion

Hansen’s Agent-based Metaprise model with self-learning algorithms, as applied in the DND case study over seven years, addresses clean data challenges by empowering human agents to initially curate accurate, contextual data from which the self-learning AI algorithms progressively and continuously learn from normal-course-of-business multi-agent inputs throughout the end-to-end supply chain to achieve optimal results. This tackles silos, legacy systems, resistance, and compliance needs, avoiding ProcureTech’s 50-70% failure rate and driving CAGR via adoption and trust.

The Intake and Orchestration model, reliant on clean inputs and centralized AI, risks errors and churn in a messy environment struggling with inconsistent data and cultural gaps. Metaprise’s agent-guided AI offers a robust, adaptive solution for complex settings, outperforming orchestration’s fragile tech focus.

CLOSING THOUGHTS: ProcureTech’s Role vs. Enterprise Reality:

  • Vendors like Coupa or GEP can’t fully control enterprise variables—cultural resistance, budget constraints, or IT bottlenecks. Hansen’s “agent-based” model pushes practitioner ownership to bridge this gap, reducing failures by empowering users, not just leaning on tech.
  • If failures are enterprise-driven, the ProcureTech failure rate is a red herring. It’s like saying 80% of runners lose races due to injuries—true, but the injury (integration) is the real issue, not running (ProcureTech). Vendors still suffer, as failures hurt growth, but solutions lie in better onboarding and change management, not just better tools. The Agent-based Metaprise model provides better onboarding while reducing change management requirements.
  • CAGR Tie-In: Enterprise issues inflate Customer Acquisition Cost (CAC) and churn, slowing CAGR. A 2023 HBR study notes retention is 5-25x cheaper than acquisition, so fixing integration to keep clients is critical for compounding growth.
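The CAGR tie-in above can be made concrete with a toy compounding model (the numbers below are illustrative, not from the article or the HBR study): two vendors spend the same on acquisition, but the one with lower churn retains its base, so revenue compounds instead of leaking away.

```python
def projected_revenue(start, new_per_year, churn_rate, years):
    """Toy annual-recurring-revenue model: each year a churned
    fraction of existing revenue is lost, then newly acquired
    revenue is added. Retention is what lets growth compound."""
    rev = start
    for _ in range(years):
        rev = rev * (1 - churn_rate) + new_per_year
    return rev

# Same acquisition spend (20 units of new revenue per year),
# different churn rates, over five years.
high_churn = projected_revenue(100.0, 20.0, churn_rate=0.25, years=5)
low_churn = projected_revenue(100.0, 20.0, churn_rate=0.05, years=5)
# high_churn ≈ 84.7 (revenue shrinks despite new sales)
# low_churn  ≈ 167.9 (revenue roughly compounds)
```

Under these assumed numbers, the high-churn vendor actually loses ground even while acquiring new customers every year, which is the mechanism behind the "enterprise issues inflate CAC and churn, slowing CAGR" point above.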


Posted in: Commentary