An important exchange on AI with Google’s AI champion
Posted on August 11, 2024
“Thank you for the warm welcome. I’m glad to connect and be part of a community with such distinguished industry thinkers and leaders. I find your sober view on AI in supply chain, supported by real-life data, very valuable.” – Zavan (my newest community member)
The above is typical of the comments I receive daily. As I wrote back to Zavan, “We all learn individually and progress collectively when there is an open dialogue that doesn’t focus on reaching consensus but on gaining greater understanding.”
In short, we must peel back the layers of industry speak and sales and marketing jargon and question the present GenAI narrative. Real dialogue doesn’t focus on being right but on getting it right – and we need a lot more of it.
With this end goal in mind, discussions or debates with Gen AI champions like Google’s Patrick Marlow are an important part of the shared learning process that leads to better communication and collaboration – something that seems to be absent in the Gen AI market.
Jon W. Hansen
Strategic Advisor/Analyst Specializing in Emerging AI Tech, Sales and Marketing (Procurement) Thinkers360 Top 50 Global Thought Leaders & Influencers on Procurement! (April 2021)
Patrick Marlow, what are your thoughts on why these tech-based initiatives have failed over the past several decades – now including Generative AI?
Patrick Marlow
GenAI Agents | AI Incubator | LLMs | LangChain | Public Speaker
Jon W. Hansen, it’s simple, really.
People and Education.
The technologies are more than capable and almost never the true point of failure for the projects.
It’s always something like:
– “we didn’t have enough / good data”
– “we had scope creep”
– “we were told it could do X but it cannot”
It’s much easier to blame the technology for failing than to blame ourselves for not knowing or understanding it:
“I’m an expert at Supply Chain, therefore this tech must be the issue…”
How human of us!
Imagine buying a new Corvette, running it into the lake, then blaming Chevy for it not being able to float.
Replace Corvette/Chevy with LLM/any model provider, and that’s exactly what is happening across the industry.
People are attempting to deliver projects with tech they haven’t taken the time to truly understand.
And to be fair, the tech has been advancing so incredibly fast it is hard to stay on top of all the latest features.
But let me be ultra clear when I say this:
There are many clear, impactful use cases for both GenAI and traditional AI/ML today.
If you’re one of those people who says “GenAI and AI/ML technology sucks and offers no benefits…”, I say to you:
Stop driving your Corvette into the lake.
Jon W. Hansen
Strategic Advisor/Analyst Specializing in Emerging AI Tech, Sales and Marketing (Procurement) Thinkers360 Top 50 Global Thought Leaders & Influencers on Procurement! (April 2021)
Patrick Marlow, I like the ‘Vette analogy.
Having worked in high tech for more than 40 years and procurement for almost as long, I can say with certainty that tech itself has not been the problem over the decades, from ERPs to SaaS, digital transformation, and now AI.
The problem is that there is a continuing insistence on leading with technology, using an equation-based model, e.g., technology-process-people, rather than an agent-based model, people-process-technology.
When the Canadian government SR&ED program funded my research into using self-learning algorithms within a nascent AI framework in the late 1990s, I used an agent-based model in which the technology was the last piece of the puzzle. By the way, the basis for the funding was converting my theory of strand commonality into a practical web-based procurement platform to support the Department of National Defence (DND) IT infrastructure.
Suffice it to say it worked extremely well for the DND and, subsequently, the New York City Transit Authority (NYCTA).
My issue is why solution providers lead with tech and take on clients looking for a silver-bullet solution doomed to fail.
If you haven’t succeeded with AI/ML, why would you expect any different results from GenAI?
Link – https://bit.ly/3FBnFRr
Patrick Marlow, as someone who programmed in dBase II and has watched technology mature from the CP/M-based Kaypro to the powerful systems we have today, I can say with great confidence that the tech itself has advanced considerably and is more stable than ever.
By the way, and I say this a lot: 15 years from now, we will look back at today’s tech with the same dismissive attitude we now have for floppy drives, 2,800 bps modems, and 10 GB HDDs.
Within the above context, I wrote a paper in the early 2000s stating that technology is irrelevant to initiative success if you use an equation-based model.
So, here is the question I posed to Stephany Lapierre in this discussion stream: How would GenAI deliver equal or better results than what I achieved in the late 1990s – early 2000s? That is the true measurement of progress and initiative success.
Link – https://bit.ly/3oe5Vql
What we did in the late 1990s using an agent-based model and self-learning algorithms would succeed with almost any solution provider’s platform today.
To use a car analogy, while a VW Beetle and a Lamborghini are technically both cars, if I cannot drive the Beetle, I sure as hell won’t be able to drive the Lamborghini.
Patrick Marlow
GenAI Agents | AI Incubator | LLMs | LangChain | Public Speaker
> My issue is why solution providers lead with tech and take on clients…
> If you haven’t succeeded with AI/ML, why would you expect any different results…
Easy again.
People are lazy.
We live in the instant information and gratification era. People think software projects work the same.
They found out the hard way with AI/ML that you actually need a lot of data and expertise to make it worth your while.
But with GenAI, the technology became much more accessible and democratized.
What used to take a team of expert ML engineers months to accomplish can now be done with the same quality / fidelity by a novice over the weekend. (Not in all cases of course…generalizing here…)
That’s progress!
So when clients and consultants see that, they think, “I want that shiny thing with little to no effort too! They did it, so can I!”
But that goes back to my original comment about education. You gotta know when and where you can leverage the tech to get the gains you want to achieve.
Re: your question to Stephany, as the SME that put in the work for that project, you should really be asking yourself where you could make optimizations with newer tech.
But also, maybe it’s not a relevant question at all.
You know “if it ain’t broken, don’t fix it!”
> That is the true measurement of progress and initiative success.
Nah, that is a measure of success through your lens and for your use case.
Companies measure success and progress in a myriad of ways.
My use case success doesn’t have to be what you view as success and vice versa.
Idk anything about your work honestly, but I bet a few minutes chatting about it with Gemini or Claude would give you some great insights.
Jon W. Hansen
Strategic Advisor/Analyst Specializing in Emerging AI Tech, Sales and Marketing (Procurement) Thinkers360 Top 50 Global Thought Leaders & Influencers on Procurement! (April 2021)
Patrick Marlow, I am always open to talking more.
I completely agree with the differences in use cases. What made the DND and the NYCTA successful was not the technology but the use of an agent-based model.
Yes, some of the tech’s core foundations were carried over to the NYCTA – e.g., strand commonality, and self-learning algorithms extended to incorporate capabilities such as polling supplier SSL sites across time zones and factoring in the geographic distance from the intended delivery point to meet SLA contract terms.
I also added simultaneous engagement with couriers like UPS and priority customs clearance at the border. Along with the PO, the necessary customs clearance paperwork was electronically sent to the supplier while the dispatch request was forwarded to the selected courier. All the supplier had to do was pick and pack.
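To make the logic above concrete, here is a minimal, hypothetical sketch. The data structures, field names, weights, and helper functions are illustrative assumptions for this post, not the original DND/NYCTA code: each supplier is checked for site availability in its time zone and for distance against the SLA window, and the best match receives the PO, customs paperwork, and courier dispatch request in one pass.

```python
# Hypothetical sketch of SLA-aware supplier selection and simultaneous dispatch.
# All names and values are illustrative, not the original platform's code.
from dataclasses import dataclass

@dataclass
class Supplier:
    name: str
    site_online: bool           # result of polling the supplier's site in its time zone
    distance_km: float          # distance from the intended delivery point
    max_sla_distance_km: float  # farthest distance that still meets the SLA window

def score(s: Supplier) -> float:
    """Lower is better; unreachable or out-of-range suppliers are excluded."""
    if not s.site_online or s.distance_km > s.max_sla_distance_km:
        return float("inf")
    return s.distance_km / s.max_sla_distance_km  # fraction of the SLA margin consumed

def send_po(s: Supplier) -> None:
    print(f"PO sent to {s.name}")

def send_customs_paperwork(s: Supplier) -> None:
    print(f"Customs clearance paperwork sent for {s.name}")

def book_courier(s: Supplier) -> None:
    print(f"Courier dispatch request sent for {s.name}")

def dispatch(suppliers: list[Supplier]) -> str:
    best = min(suppliers, key=score)
    if score(best) == float("inf"):
        return "No supplier can meet the SLA window -- escalate to a buyer."
    # In the workflow described above, these steps were triggered together:
    send_po(best)
    send_customs_paperwork(best)
    book_courier(best)
    return f"Dispatched via {best.name}"

if __name__ == "__main__":
    print(dispatch([
        Supplier("Supplier A", True, 120.0, 400.0),
        Supplier("Supplier B", True, 900.0, 400.0),
    ]))
```

The point of the sketch is the decision flow, not the scoring formula itself, which in practice would weigh far more variables than distance and availability.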
I am not even touching on using the Parts Compression Function to reduce inventory levels substantially.
My point is that within a Metaprise framework – now called orchestration – the agent-based approach provides a certain development modularity without disturbing the central platform core. If I remember correctly, 25 years ago, IBM called it the “building block” approach.
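As a rough illustration of that kind of modularity – again, a hypothetical sketch with invented names, not the Metaprise implementation – capability blocks register with a stable core and can be added, swapped, or tailored per client without touching the core itself:

```python
# Hypothetical sketch of the "building block" idea: a stable core that
# orchestrates pluggable capability blocks. Names are illustrative only.
from typing import Callable, Dict

class Core:
    def __init__(self) -> None:
        self._blocks: Dict[str, Callable[[dict], dict]] = {}

    def register(self, name: str, block: Callable[[dict], dict]) -> None:
        """Add a capability without modifying the core itself."""
        self._blocks[name] = block

    def run(self, order: dict) -> dict:
        # The core only sequences whatever blocks have been registered.
        for name, block in self._blocks.items():
            order = block(order)
        return order

# Two example blocks; a customs block or courier block could be registered
# later for a different client (e.g., different SLA terms) without changing Core.
def sourcing_block(order: dict) -> dict:
    return {**order, "supplier": "Supplier A"}

def sla_check_block(order: dict) -> dict:
    return {**order, "sla_ok": True}

core = Core()
core.register("sourcing", sourcing_block)
core.register("sla_check", sla_check_block)
print(core.run({"part": "WIDGET-123"}))
```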
How does GenAI improve on the above?
Patrick Marlow, to be clear, agent-based modeling is not a “build once – use many” approach. The solution’s application has a certain fluidity that adapts seamlessly to different environments. In short, the SLA for the DND was in many ways different from the NYCTA’s. That, of course, is where the Metaprise model comes into play.
As I have frequently said, I am sorry I sold my company and the patent for $12 million in 2001. I would love to get back in the game.
Anyway, going back to my original question: knowing what I know about Generative AI, coupled with the high rate of initiative failures, I don’t see how it could improve on the results achieved in the late 1990s and early 2000s.
For example – and these are hard numbers – how could it improve on the following:
– SLA performance improved from 51% to 97.3%
– Cost of goods (COG) reduction maintained at 23% for several consecutive years
– FTE reduction from 23 to 3
You surely have some hard numbers like the above.
As it stands right now, I am starting to see Generative AI in the same light as Python for Excel spreadsheets. However, I am open to receiving hard number results and greater clarity.
Closing Note: Patrick Marlow, lead engineer at Google, and Stephany Lapierre have scheduled a webinar to discuss the need for quality data to achieve results and ROI from using Gen AI. It should be an interesting discussion, and hopefully one that delivers more than the usual industry narrative.
Sign up here! https://hubs.la/Q02KzfqW0