Is the AI Craze Over?
Why using LLMs to cut costs in SaaS may not have a future.
Take Note: This newsletter article is more of a preamble. It is intended primarily for other software developers, regardless of their industry or specific vertical. In other words, people from B2C and gaming communities are welcome to peruse its content. It will not address the fiscal problems inherent in deploying AI for any type of operations work at mid-sized or enterprise-grade SaaS companies. That post is coming next week.
On Thursday, July 10 of this year, I was eating dinner at a local restaurant here on the Oregon Coast. I was sitting near the bar and noticed that the restaurant’s name and what was presumably the year it opened, 1992, had been written in concrete.
It made me think about how much has changed since the year 1992, back when I was a sophomore in high school. I am guessing that fish, pasta, and potatoes for french fries are still bought and sold in the restaurant industry much the same way today as they were more than 30 years ago.
But back then, the sphere I have spent my entire career in, the world of the Internet and Software as a Service (SaaS), did not yet exist. We had America Online and Usenet. That was all.
I am going to make a safe bet * and say that 24 months from now, the restaurant pictured above (Kyllo’s, located in Lincoln City) will still be open. What supports this prediction? Their track record. That particular establishment has been around longer than most of the best-known technology brands owned by publicly traded companies today. Apple (AAPL) predates them, it is true. But not Facebook (META) or Google (GOOG/GOOGL). Note that I’m not making any statements about whether any of the five companies mentioned in this paragraph are profitable, or whether they will need to take on debt in order to cover their expenses. I’m certainly not saying that layoffs or occasional firings with cause could not happen. Microsoft (MSFT), you know what I’m talking about. Plus, you know what they say about chefs and other service employees! They can be temperamental.
“The World of AI”
Keep in mind that I’m not familiar with every LLM toolset currently available. I maintain accounts with Anthropic and OpenAI. I also tried out DeepSeek once, at the request of a former collaborator with expertise in cybersecurity. And because I battle with severe hand pain from Repetitive Strain Injury, I use Google’s AI on an almost daily basis, mostly from my phone. Are they calling it Gemini these days? I forget.
1. The Perils of AI Agents
What exactly is an AI “agent”? That is a question for the linguists.
Colloquially, I take the term to mean any AI system that acts similarly to a human being, whether by executing software commands or by interacting with humans through voice, text chat, and video. META’s Code Llama, a Large Language Model (LLM) series tuned specifically for code generation, with smaller variants that can run on some personal computers, can probably be deployed in this capacity. I haven’t yet tried it myself.
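To make the “executing software commands” half of that definition concrete, here is a minimal sketch of an agent loop in Python. The call_model function, the message format, and the single whitelisted tool are placeholders I made up for illustration, not any vendor’s actual API, and a real deployment would need far stricter guardrails.

```python
# Minimal agent-loop sketch. call_model is a stand-in for whatever LLM
# client you actually use (Anthropic, OpenAI, a local Code Llama, etc.);
# it is assumed to return either a final answer or a request to run a tool.
import subprocess

def run_shell(command: str) -> str:
    """Run a shell command and return its combined output (not production-safe)."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

TOOLS = {"run_shell": run_shell}

def call_model(history: list[dict]) -> dict:
    """Placeholder: plug in a real model client here."""
    raise NotImplementedError

def agent_loop(task: str, max_steps: int = 5) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model(history)            # model decides: answer, or use a tool
        if reply.get("tool") in TOOLS:
            output = TOOLS[reply["tool"]](reply["arguments"])
            history.append({"role": "tool", "content": output})
        else:
            return reply["content"]            # final answer, loop ends
    return "Step limit reached without a final answer."
```

In this framing, the whole “agent” label is just a loop that lets a model’s output trigger real side effects, which is exactly where the unpredictability discussed below starts to matter.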
I’ve had the most experience interacting with this class of agents in a customer service capacity. Uh-huh, yeah. You can talk to them on the phone, or even over a videoconference. The odds seem to be getting lower each day that you will chat with an actual human being in a remote customer service setting. Companies like Salesforce (CRM) have pushed AI as a replacement for humans in these types of jobs with a vengeance.
That’s old news. I’m not sure how many people really mind. Or care. Having to call technical support or ask for a refund or help navigating a difficult process is always a pain. I’ve thought about setting up a poll to ask people how they feel about the use of AI agents for customer service. Sort of a sequel to the survey I ran last May.
The problem with probabilistic/stochastic AI agents (this doesn’t include completely scripted chatbots, which often are not even considered NLP, but it certainly does include many recent entrants to the field, even the “AI Scheduling Assistants” that operate by telephone) is that they fail in unpredictable ways.
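A toy contrast may help show what I mean. The intents, canned replies, and weights below are entirely made up for illustration; the point is only that a scripted bot maps the same input to the same output every time, while a sampling-based agent will occasionally produce a reply nobody approved.

```python
# Contrast: a fully scripted bot (deterministic) versus a stochastic agent
# (samples from a weighted distribution, so repeated runs can differ).
import random

SCRIPT = {
    "refund": "Please reply with your order number to start a refund.",
    "hours": "We are open 9am to 5pm Pacific, Monday through Friday.",
}

def scripted_bot(intent: str) -> str:
    # Same input, same output, every time; failures are at least repeatable.
    return SCRIPT.get(intent, "Sorry, I can only help with refunds or hours.")

def stochastic_agent(intent: str, temperature: float = 1.0) -> str:
    # Stand-in for an LLM: candidate replies are weighted, then sampled.
    # (intent is ignored in this toy; a real agent would condition on it.)
    candidates = {
        "Please reply with your order number to start a refund.": 0.95,
        "I have gone ahead and issued the refund to your card.": 0.05,  # never approved
    }
    weights = [w ** (1.0 / temperature) for w in candidates.values()]
    return random.choices(list(candidates), weights=weights, k=1)[0]

if __name__ == "__main__":
    print(scripted_bot("refund"))
    for _ in range(3):
        print(stochastic_agent("refund"))  # usually fine; occasionally alarming
```

Scale the second pattern up to millions of customer calls and those small percentages stop being hypothetical.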
You can probably chalk the CrowdStrike (CRWD) fiasco of July 2024 up to some related projects. I don’t work there, and I don’t have any recent formal training in Windows cybersecurity, so I could not tell you for sure what went wrong internally. What I can say is that the global outage was triggered by a faulty content update to their Falcon platform, the flagship security product they market as AI-powered.
2. Use of AI Tools in Generative Coding
I don’t have time to express my feelings on this subject in depth. Suffice it to say that there are probably many actual human, employable programmers in our industry named “Devin” and I might accidentally hurt their feelings!
But a set of mistaken assumptions about AI generative coding that I encountered on the final day of a recent open source conference in Portland, Oregon, bears correction. I want to be completely clear that these views were expressed during a set of open discussion sessions referred to as the “Unconference.” I don’t know the identity of the individual who wasted valuable time during our AI Safety panel, but he is certainly welcome to take me out to lunch in two years, provided he remembers my name from the badge I was wearing.
The main point I need to express is that code that is not modular will nearly always be bad code. That is to say, programs need to be broken down into manageable, “bite-sized” chunks in order to be tested properly (no pun or reference to the Lotus Petal Architecture codebase intended).
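To illustrate what I mean by “bite-sized,” here is a deliberately trivial sketch in Python; the function names and the receipt example are my own and do not come from any codebase mentioned in this article. Each step is small enough to be tested on its own, which is exactly what an LLM, or a human reviewer, needs in order to verify the pieces.

```python
# Modular version: each step is a small, single-purpose function,
# so each one can be unit-tested without touching the others.
def parse_line(line: str) -> tuple[str, float]:
    name, amount = line.split(",")
    return name.strip(), float(amount)

def apply_discount(amount: float, rate: float = 0.1) -> float:
    return round(amount * (1 - rate), 2)

def format_receipt(name: str, amount: float) -> str:
    return f"{name}: ${amount:.2f}"

def process(line: str) -> str:
    name, amount = parse_line(line)
    return format_receipt(name, apply_discount(amount))

# Each behavior can be pinned down in isolation:
assert apply_discount(100.0) == 90.0
assert parse_line("Alice, 100") == ("Alice", 100.0)
assert process("Alice, 100") == "Alice: $90.00"
```

The tangled alternative, one long routine that parses, discounts, and formats in a single pass, can only be tested end to end, and that is where both humans and models get into trouble.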
If you write twisty, labyrinthine code and it fills up pages and pages of printouts…
Well, what can I say?
Maybe you are using a dot matrix printer from a bygone era.
Maybe that’s because you live and work in a foreign country with infrastructure and equipment shortages. But you are not part of the Internet economy as I know it.
I worked extensively with Anthropic’s Claude Sonnet last summer, and it was capable of generating testable, modular output in multiple programming languages within the context window provided. That is not to say it did not make mistakes.
On the whole, its output was comparable to that of most of the human programmers I have worked with as a product/project manager and while I was CEO of Yes Exactly, Inc. They also made mistakes, but not many of them.
What can I say? I hired and chose talented people as cofounders.
—
* What are the stakes of this bet? Anyone who gets it right and is a software developer can take me out to lunch at Kyllo’s. That’s right. Lunch, in broad daylight. Two years from now. August 19, 2027. Provided we both feel like it, the winner has sufficient means to cover the bill, and we both agree that it’s logistically feasible. Other Williams alumni are welcome to participate, but I do not give them preferential treatment. The dudes who witnessed the signing of my contract with my angel investor this past month at a different Lincoln City establishment (not waterfront) would be welcome to participate, but I don’t know if they are programmers. Offer not valid in the state of New Jersey, or any other state with a large and well-organized OTB industry.
- - -
DISCLAIMER: Please note that if you are a subscriber to this free Substack newsletter you may see the information that follows twice. Lotus Rose is not a registered investment, legal or tax advisor or a broker/dealer. I do not accept paid advertising or in-kind gifts from businesses, organizations, or service providers in exchange for coverage, endorsements, or positive reviews. All investment and financial opinions expressed in this newsletter are from the personal research and experience of the owner of this newsletter and are intended for educational and informational purposes only. Although every effort is made to ensure that all information is accurate and up-to-date, occasional unintentional errors may occur.



