By Jon W. Hansen | Procurement Insights | April 2026
In 2013, Aaron Levie posted a line that has been quoted in technology circles ever since.
“Uber is a $3.5 billion lesson in building for how the world should work instead of optimizing for how the world does work.”
It is a good line. It captures something real about the difference between optimizing existing systems and building toward systems that do not yet exist. I have been citing it in my own work since April 2014 — applied first to the Internet of Things and Humans framing, then to digitization-versus-digitalization in 2023, and most recently to the multi-agent AI alignment I documented in August 2025. Twelve years. Three distinct technology moments. One consistent architectural posture.
The line still works. But after watching how thoughtful practitioners are responding to the orchestration-transparency argument I have been developing across my recent posts, I want to take the line one step further — across the finish line that Levie’s framing leaves open.
Here is the issue.
Most organizations are not building for how the world works. They are optimizing broken systems. Faster procurement workflows on top of unchanged organizational physics. AI agents bolted onto deployment patterns that were already failing before the agents arrived. Transformation initiatives that automate the dysfunction rather than address it.
But they are not building for how the world should work either. They are scaling untested assumptions, now faster, with AI. The smart-trigger transactional model I wrote about earlier this week is exactly this pattern. The deployment looks like progress because the technology is sophisticated, but the engagement architecture underneath has not changed, which means the failure modes have not changed either. They have just been accelerated.
Levie’s framing names the right destination. It does not name what is required to reach it.
What is required is the validation layer that connects ambition to operational reality.
Why This Has Become Unavoidable in 2026
In recent days I have been watching how sophisticated practitioners — accomplished attorneys, experienced ProcureTech founders, senior procurement leaders — are responding to the orchestration-transparency argument I published on April 26.
The pattern is striking. Smart, accomplished people are reading the post seriously and arriving at conclusions that operate inside familiar frameworks rather than engaging the architectural shift the post describes.
A contracts attorney reads the orchestration-transparency argument and reasons by analogy to note-taking apps and record-keeping decisions. The reasoning is correct inside the contracts-and-evidence framework they work in daily. It does not engage the structural distinction the argument actually makes — that the transcript itself is not the validation layer; the structured record of how the decision was composed is the validation layer.
A ProcureTech founder reads the same argument and treats the model black box and the orchestration black box as parallel concerns of equal weight. Technically defensible, inside the AI-capability framework. Strategically incomplete, because the post argues the two are categorically different — one solvable today, one not, with the solvable one being the layer regulators will require first.
Both responses are intelligent. Both miss the architectural point. And both miss it for the same reason — the practitioners are operating inside the rules and frameworks they have spent careers mastering, while the architectural argument is operating one layer up, in territory their disciplines have not yet developed vocabulary for.
That is what Levie’s quote actually points at. The world should work in ways that the rules and frameworks have not yet caught up to. The practitioners working effectively inside the current rules are not wrong about the rules. They are working effectively inside a frame that the architectural shift is in the process of displacing.
The Distinction the Levie Quote Cannot Carry On Its Own
“Build for how the world should work” is the directional claim. It is correct as far as it goes. What it does not specify is the operational difference between the old world and the new one in any given category — and in 2026, the AI procurement category is where that difference is becoming most consequential.
The old world: AI tools produce outputs, users consume them, accountability is anchored on the user’s interpretation of the output.
The new world: AI tools produce outputs through documented orchestration logic, users validate the orchestration as part of the decision process, accountability is anchored on the design of the orchestration architecture before deployment.
The old world: A black box is a black box. Either you trust the output or you don’t.
The new world: There are two black boxes. The model black box (intrinsic, unsolved, research-grade) and the orchestration black box (engineering, solvable today, regulatory-grade). The legal posture that “the AI made an error” is weakening. The legal posture that “we cannot tell you how the AI reached this decision” is structurally indefensible once orchestration transparency exists as an alternative.
The old world: AI deployment is a productivity question. Capability multiplier, output volume, time saved.
The new world: AI deployment is a design-output question. Pre-incident validation, orchestration audit trail, human-agent logic in the documented record.
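The "documented orchestration logic" and "orchestration audit trail" in the new-world descriptions above can be made concrete. The sketch below is a generic illustration only, not the proprietary ARA™ or RAM 2025™ implementation; every name in it (`OrchestrationRecord`, `ModelContribution`, and so on) is a hypothetical stand-in for whatever schema a given deployment actually uses.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical structures for an orchestration audit trail.
# Field names are illustrative assumptions, not a product schema.

@dataclass
class ModelContribution:
    model_id: str        # which model contributed
    claim: str           # what it asserted
    confidence: float    # self-reported or calibrated confidence

@dataclass
class Disagreement:
    topic: str
    positions: dict      # model_id -> position taken
    resolution: str      # how the conflict was resolved
    resolved_by: str     # rule, arbiter model, or human agent

@dataclass
class OrchestrationRecord:
    output_id: str
    contributions: list
    disagreements: list
    human_agent_rationale: str   # the human-agent logic behind the final output

    def to_audit_json(self) -> str:
        """Serialize the record as a retainable pre-incident audit artifact."""
        return json.dumps(asdict(self), indent=2)

record = OrchestrationRecord(
    output_id="po-2026-0417",
    contributions=[ModelContribution("model-a", "supplier risk: low", 0.82),
                   ModelContribution("model-b", "supplier risk: medium", 0.74)],
    disagreements=[Disagreement("supplier risk",
                                {"model-a": "low", "model-b": "medium"},
                                "adopted the more conservative rating",
                                "human agent")],
    human_agent_rationale="Conservative rating chosen pending updated financials.",
)
```

The design point is the shift the old/new pairs describe: the record is produced alongside the output, before any incident, so accountability attaches to the orchestration design rather than to a user's after-the-fact interpretation.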
Each of these distinctions was logically available to anyone looking at AI deployment carefully. None of them was named publicly until the architectural shift forced the naming. That is what “build for how the world should work” requires in operational form — the willingness to articulate the distinctions the old framework hides, before the new framework has institutional vocabulary to express them.
What This Has Meant Across My Own Archive
I want to be honest that this is not a new claim from me. It is the same architectural posture I have been holding in dated public form for over a decade, applied to whatever the current technology moment requires.
In April 2014, I wrote two pieces on the same day. The first, What IBM Was to Mainframes and Coupa Was to SaaS, Nipendo Is to the P2P Cloud, used the original Levie Uber quote in full to argue that the procurement world was on the brink of a transformational surge most practitioners would not anticipate quickly enough. The key, I wrote then, was “the ability to think beyond the familiar way of doing things, and embracing the prospects of what is actually possible. Even if said possibilities have not yet been fully recognized and contemplated.” That sentence is twelve years old. It is also a precise description of what happens when sophisticated practitioners encounter the orchestration-transparency argument today.
The second piece, published the same day, established a dedicated section of the Procurement Insights blog around the Internet of Things and Humans framing. The IoTH framing came from Tim O’Reilly’s April 2014 Forbes article and made an architecturally specific move: “so many of the most interesting applications of the Internet of Things involve new ways of thinking about how humans and things cooperate differently when the things get smarter.” That sentence — written twelve years ago, about a different technology — is the early articulation of what the orchestration-transparency post calls human-agent logic. The vocabulary has evolved. The architectural claim has not. What was named conceptually in 2014 as humans and things cooperating differently is now operationalized in ARA™-driven RAM 2025™ through the Human Language Interface™ (HLI™) — the natural-language layer through which human agents direct multi-model substrates without specialized prompts or learned syntax.
In 2016, Kelly Barner and I published Procurement at a Crossroads: Career-Impacting Insights into a Rapidly Changing Industry with J. Ross Publishing, foreword by David Clevenger, with the central concern being how procurement professionals — humans — would navigate the technological transformations the industry was facing. Robert Handfield’s review for the NC State Supply Chain Resource Cooperative captured the book’s thesis: “the authors do a great job of exploring what change in procurement consists of, and how it is likely to unfold in the next five years.”
In January 2023, I returned to Levie’s framing to articulate the difference between digitization (faster automation of existing processes) and digitalization (transformation of what the processes are designed to accomplish). The argument was that most procurement organizations were investing in the first while believing they were doing the second.
In August 2025, when Levie published his multi-agent post on LinkedIn — “multiple agents in parallel to solve the same problem” — I documented the 95% alignment between his framing and the agent-based modeling work I had been publishing for over twenty years. The bonus coverage on that post read: “Your timing is impeccable, Aaron Levie — I have been waiting for this day for almost 25 years.”
Across my recent series on AI architecture — Atlan and the context layer, AI errors as design outputs, transactional smart-trigger engagement, and orchestration transparency — the underlying argument has been consistent. The distinction between optimizing what exists and building what should exist is not a directional preference. It is an operational requirement that is becoming legally consequential, faster than most organizations have realized.
That is why Levie’s quote resonates with me, but also why it requires extension. The quote names the destination. It does not name the validation architecture required to reach the destination defensibly. In 2014 the gap between the destination and the architecture was a strategic preference. In 2026, with the legal environment shifting toward treating AI outputs as design decisions, the gap is becoming a defensibility requirement.
What Happened to IoT — and What It Tells Us About AI
There is a specific cautionary pattern in the IoT history that the AI procurement conversation needs to absorb.
In 2011, Cisco IBSG forecast 50 billion connected devices by 2020. The number was repeated in keynote speeches, vendor pitches, analyst reports, and procurement-tech commentary for years. Companies built product roadmaps around it. Investment capital flowed into it. The commercial expectation was that IoT would be the defining infrastructure transformation of the 2010s.
The actual outcome was different. By 2020, the realized number was approximately 8 to 12 billion connected devices, depending on how you count. The transformation also fragmented in ways the forecast did not anticipate. The unified Internet of Things as a coherent architectural transformation never materialized. The phrase itself faded from procurement-tech commentary by approximately 2018 to 2020.
What did happen was that the underlying technologies got absorbed into other categories. The commercial machine moved on. Practitioners who had built consulting practices and career positions on IoT framing had to pivot. The institutional memory of what was lost in the transition is largely invisible because the survivors have rebranded their work into AI, automation, or digital transformation.
Here is what the history actually demonstrates. IoT had commercial sponsorship at scale. Cisco needed it. Intel needed it. IBM, GE, and Siemens needed it to justify their industrial transformation strategies. Even as the deployment numbers fell short of forecast, the institutional commitment to the framing kept it alive longer than the evidence supported.
IoTH had no equivalent commercial sponsorship. O’Reilly published the framing. I picked it up. A handful of academics engaged it. There was no Cisco-scale vendor whose business model required IoTH to succeed. The human-and-cooperation reframe was architecturally sharper than the technology-only framing, but it had no commercial machine driving its adoption. Without the machine, the framing did not propagate.
That is the structural pattern. Architectural sharpness does not produce market adoption. Commercial sponsorship produces market adoption. When the two diverge — when the architecturally sharper framing has no commercial machine and the architecturally weaker framing has billions of dollars behind it — the market follows the commercial machine, even into outcomes that the evidence does not support.
The same structural choice is being made again, in the AI procurement category, with substantially higher stakes. Smart practitioners are absorbing AI through the framing that the current commercial machine is producing — single-model wrappers, transactional engagement, opaque orchestration. The architecturally sharper framing — multi-model debate, collaborative engagement, orchestration transparency, human-agent logic — is logically available to anyone who looks carefully. The commercial machine is producing a different framing.
Twelve years ago, the cost of absorbing IoT without IoTH was a fragmented technology category and the quiet career pivots of people who had committed to the dominant framing. Twelve years from now, the cost of absorbing AI without orchestration transparency will be substantially higher, because courts and regulators are moving toward treating AI outputs as design decisions — which makes the architecturally sharper framing not just preferable but a requirement for defensibility.
Where This Lands
The issue is not whether AI can generate an answer. It is whether you can validate how that answer was constructed before it becomes a decision.
Which models contributed. Where they disagreed. How those disagreements were resolved. What human-agent logic produced the final output.
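Those four questions can be expressed as a pre-decision gate: an output is released as a decision only if each question has a documented answer. The sketch below is a minimal illustration under assumed key names; it is not any vendor's actual validation layer.

```python
# Hypothetical pre-decision validation gate. An AI output becomes a
# decision only if the four audit questions have documented answers.
# Key names are illustrative assumptions, not a real product schema.

REQUIRED_FIELDS = (
    "contributing_models",      # which models contributed
    "disagreements",            # where they disagreed (an empty list is fine)
    "resolution_logic",         # how disagreements were resolved
    "human_agent_rationale",    # what human-agent logic produced the output
)

def validation_gaps(record: dict) -> list:
    """Return the audit questions a record leaves unanswered."""
    return [f for f in REQUIRED_FIELDS if record.get(f) is None]

def release_decision(record: dict):
    """Release the output as a decision, or block it with the gaps named."""
    gaps = validation_gaps(record)
    if gaps:
        raise ValueError(f"output blocked; unanswered audit fields: {gaps}")
    return record["output"]

record = {
    "output": "award contract to Supplier B",
    "contributing_models": ["model-a", "model-b"],
    "disagreements": [],        # documented as empty; missing would block
    "resolution_logic": "no conflicts to resolve",
    "human_agent_rationale": "Price and risk profile validated against category plan.",
}
```

The gate enforces the ordering the argument turns on: validation happens before the answer becomes a decision, not after something has gone wrong.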
Without that validation architecture, you are not building the future. You are accelerating the past with better technology.
Validation is not a step in the process. It is the condition that determines whether the process works at all.
That is what Levie’s quote actually requires once it is taken to its operational conclusion. “Build for how the world should work” is the headline. The validation architecture that makes the new world defensible is the work underneath the headline. Twenty-five years ago, the validation architecture I built for Canada’s Department of National Defence improved delivery performance from 51% to 97.3%, sustained for seven years. Today, that same architectural lineage — refined through Hansen Fit Score™ (HFS™), Hansen Strand Commonality™, the Hansen Metaprise™, and now ARA™-driven RAM 2025™ — is the operational answer to a question that the AI procurement market has only recently developed the vocabulary to ask.
The smart practitioners encountering my orchestration-transparency post and reasoning from familiar frameworks are not wrong. They are early. The frameworks they have mastered are about to be displaced by the architectural shift the post describes, and the displacement is closer than most legal teams have been briefed to expect.
In 2014, I wrote that the procurement world was on the brink of a transformational surge that most practitioners would not anticipate quickly enough. The IoTH framing existed. The architectural posture was named. The human-and-technology cooperation question was already in dated public form. The market followed the IoT framing instead, and twelve years later the cost of that choice is largely invisible because the survivors rebranded their work and moved on.
We are at the same juncture again. The architecturally sharper framing is named. The validation architecture exists. Whether the procurement-AI category will follow the commercial machine into the same fragmented outcome, or whether enough practitioners will recognize the structural choice in time to make a different one, is the question this post leaves with the reader.
Levie pointed at the destination. The validation architecture is what the destination requires.
That is the line across the finish line.
Phase 0™ is the pre-commitment diagnostic that surfaces the validation architecture before AI deployment. ARA™-driven RAM 2025™ is the reasoning architecture that produces orchestration audit trails for every output it generates. Both are commercially available through Hansen Models™. Details at hansenprocurement.com.
Jon W. Hansen is founder of Hansen Models™ and the Procurement Insights archive — 3,300+ published documents, zero vendor sponsorships, in continuous operation since 2007. The foundational work began in 1998 with SR&ED-funded research for Canada’s Department of National Defence.
Hansen Models™ | Phase 0™ | Hansen Fit Score™ (HFS™) | RAM 2025™ | ARA™ (Augmented Reasoning Architecture™) | Human Language Interface™ (HLI™) | Learning Loopback Process™ | Hansen Strand Commonality™ | Hansen Metaprise™ | Implementation Physics™
hansenprocurement.com | payhip.com/hansenmodels | calendly.com/jon-toq/30min
-30-
Taking Aaron Levie’s Famous Quote One Step Further — and Across the Finish Line
Posted on April 27, 2026