The Real Problem With GenAI – Too Many Arrows!

Posted on August 16, 2024



To start, thank you, Dr. Tony Bridger. Your original arrow graphic above illustrates why technology implemented under an equation-based model doesn’t work. It also made me wonder how GenAI could improve the results of my self-learning algorithm platform, which I will share later in this post. I have asked solution providers and GenAI advocates this question many times, but I have yet to receive an answer beyond conceptual generalities and vague percentages.

The first arrow is the key. When I developed my platform to support the acquisition of MRO parts for the DND, I utilized self-learning algorithms within the framework of an agent-based model using a Metaprise infrastructure. The terminology has been jazzed up today to “intake” and “orchestration.” I am still not sure why – maybe Metaprise wasn’t exciting enough. 😉

Based on my theory of strand commonality, which recognizes that related attributes exist within seemingly disparate data streams and that these attributes collectively impact the desired outcome, the self-learning algorithms would incorporate these attributes to select, engage, and manage the S2P process.

In short, this is where human experience and expertise come into play. You can incorporate an unlimited number of attributes into the algorithm through an agent-based model. For example, when I analyzed the DND, I considered agents within and outside the buying group, including the time of day an order was received, internal and external stakeholder policies, processes, user capabilities, geographic location, etc. I then connected the internal and external stakeholders, including suppliers and shippers, within the Metaprise framework on a real-time basis. I worked with UPS to create a coordinated dispatch capability in which the supplier would receive the PO, pre-filled waybill, and customs documentation while UPS was simultaneously dispatched to pick up at the designated site.

However, the really exciting part is that I created a historical track and a real-time track within the algorithms, which would update and learn with each transaction. These produced a weighted score covering historical factors such as past delivery performance and quality, alongside real-time factors such as pricing, time of day, geographic location of the supplier, and delivery requirements.
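To make the two-track idea concrete, here is a minimal sketch of how a historical track might update with each transaction while a real-time track scores the current quote. This is purely illustrative, not the patented implementation; the attribute names, the learning rate, and the 0-to-1 scoring are my assumptions.

```python
from dataclasses import dataclass

ALPHA = 0.2  # assumed learning rate: how quickly history adapts to new transactions


@dataclass
class SupplierHistory:
    """Historical track: scores learned from past transactions (0..1)."""
    on_time_rate: float = 0.5
    quality_rate: float = 0.5

    def update(self, delivered_on_time: bool, passed_inspection: bool) -> None:
        # Exponentially weighted update: each completed transaction nudges
        # the score, so the more transactions processed, the better
        # calibrated the historical track becomes.
        self.on_time_rate += ALPHA * (float(delivered_on_time) - self.on_time_rate)
        self.quality_rate += ALPHA * (float(passed_inspection) - self.quality_rate)


def real_time_score(price: float, best_price: float,
                    distance_km: float, max_km: float) -> float:
    """Real-time track: cheaper and closer suppliers score higher (0..1)."""
    price_score = best_price / price if price > 0 else 0.0
    distance_score = 1.0 - min(distance_km / max_km, 1.0)
    return 0.5 * price_score + 0.5 * distance_score
```

A combined ranking would then blend `SupplierHistory` scores with `real_time_score` for each incoming quote; the blend weights are exactly the lever described next.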

The best part was that the authorized buyer—not an analyst, just a regular buyer—could change the supplier response ranking by changing the weighted importance of either SLA delivery time or product cost.
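The buyer-facing control above can be sketched as a single adjustable weight: sliding it toward SLA delivery time or toward product cost re-ranks the supplier responses. The function, supplier names, and scores below are hypothetical, assuming each response has already been normalized to a 0-to-1 score on each dimension.

```python
def rank_suppliers(quotes, delivery_weight: float):
    """Rank supplier responses by a buyer-adjustable weighted score.

    quotes: list of (name, delivery_score, cost_score), each score in 0..1.
    delivery_weight: importance of SLA delivery time (0..1);
                     product cost receives the remainder.
    """
    cost_weight = 1.0 - delivery_weight
    scored = [(name, delivery_weight * d + cost_weight * c)
              for name, d, c in quotes]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


quotes = [("FastCo", 0.9, 0.4),   # strong on delivery, weak on cost
          ("CheapCo", 0.4, 0.9)]  # weak on delivery, strong on cost

rank_suppliers(quotes, delivery_weight=0.8)  # delivery prioritized: FastCo ranks first
rank_suppliers(quotes, delivery_weight=0.2)  # cost prioritized: CheapCo ranks first
```

The point of the design is that the re-ranking requires no analyst: the buyer moves one weight and the ordering follows.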

The end result was as follows:

  • Increased next-day delivery performance from 51% to 97.3%
  • Delivered a 23% cost of goods savings year-over-year for several consecutive years
  • Reduced combined FTE buyer headcount from 23 to 3 within 18 months

The more transactions processed, the smarter the system became.

I will have to go through my archives to find the patent documents in which the process was mapped in detail.

My point is that we didn’t require the subsequent three arrows to produce a real-world outcome for the end client. Nor did the client require data analysts or special training beyond the user-friendly dashboard to realize practical, real-world benefits that could be measured.

One day, I will tell you how I created the time and distance zone polling for the NYCTA SSL Sites.


Posted in: Commentary