Prompts Must Go
We've mistaken generative AI prompts for an innovative miracle. The reality is they are neither innovative nor miraculous.
Less than fifteen seconds into this article, and you’ve probably pegged me as a nut job. But there may be good reason to ponder what I am about to clarify.
This is one of those slow-burning ideas that will take time to digest, but I assure you, the notion of a promptless future will hit you subtly with flashes of insight as you go about your daily work.
Many thanks to all the paying subscribers who support Impertinent. I’ve met some really interesting people on this journey—a path that wouldn’t exist without your encouragement. Most 70+ year-olds would rather be fishing. I hate fishing, so thank you for helping to deflect that retirement misery.
Prompts Must Go?
Generative AI is a very new idea. It seems like a decade of progress has passed in the last 18 months.
ChatGPT (framed as a “product”) is a dumb idea. Even Sam Altman has intimated from time to time that it is the demo that went viral. It was never intended to be a product.
Demos-gone-viral are not uncommon. Accidental business tactics have created some of the best products we use every day.
The impermanent glue that holds 3M’s Post-it® Notes was originally thought to be a failure until it was used to demonstrate the concept of temporary adhesives.
Danish carpenter Ole Kirk Kristiansen bought the Billund Woodworking and Carpentry shop in 1916. After World War I, Ole found it much harder to source the birchwood necessary for his wooden toys. Luckily, he was introduced to a British plastic injection-molding machine in 1946, which he would purchase to produce plastic automatic binding bricks — the precursor for the LEGO® bricks we know and love (and obsess over) today.
ChatGPT is one of those fortunate demos that opened a lot of eyes. Unlike a Post-it® Note, it stuck seemingly for good. But it’s a dead end for many reasons that may become obvious as we explore a possible future without prompts. One might even ask—how is that even possible? We’ll get to that shortly.
Tired and Wired
Let’s list some of the pros and cons associated with the underlying nature of prompts.
Wired: Prompts make work magical.
Undoubtedly, if you get into a groove writing good prompts, it’s delightful and rewarding to command a bunch of GPUs to do your work. You are finally the master of your domain. You feel the power and control over information.
Tired: Prompts require an intellectual effort for an outcome with, as of now, variable quality. Humans don’t like to play this kind of game. (Alexandre Kantjas)
True. It’s not easy to create useful prompts. Vast energy is being expended by everyone using generative AI, and little (if any) of it is stored and reusable. Show of hands: how many of you enjoy the multi-hour tussle of dialing in a prompt?
Wired: Prompts require an intellectual effort and that is a good thing because it forces humans to think (again). (Alexandre Kantjas)
True. But should humans spend their limited time thinking about how to generate things to think more deeply about? I’m on the fence with this advantage.
Tired: Indirectly, prompts give gen AI a bad name because new users experience terrible outcomes, give up, and then tell their colleagues to not waste their time with generative AI.
No debate. This is happening a lot. Nichole Leffer recently said it best -
“Most people quickly write off AI as "all hype" and "not ready for prime time" after a few bad experiences - whether because they used a garbage tool, tried to use a good tool with zero idea how to prompt, or listened to advice on prompting proliferated by self-proclaimed "AI Gurus" who know nothing about using AI beyond thinking jumping on the bandwagon can make them rich.” — Nichole Leffer
Tired: Prompts are a hidden tax levied on productivity without accountability. We cannot calculate net productivity gains, if there are gains at all.
Very little work has been done to determine how productive generative AI truly is. Most hypesters on LinkedIn claim 7X, 10X, and even 100X productivity gains. This is all bullshit, of course. Productivity is central to generative AI, but like all ROI calculations, we leave out many contra-economic components and herald nothing but net. Swoosh! You’re a 10X developer in the snap of your fingers. Please.
Wired and Tired: Prompts are ideally suited to expand the gap that no-code attempts to close. They serve as an anti-democratic wedge for knowledge.
This is a boon for consultants and hucksters struggling to carve out a niche in the shadow of no-code platforms that democratize data management. It also opens a gap between those who can “program” natural language prompts better than everyday workers.
Tired: Prompts are the central cause for security, privacy, and accuracy risks in gen AI. The closer humans get to the core of any technology, the greater the risks.
This seems to be true. Micheal Silva at Emely AI will quickly provide the harsh reality of prompting risks in ChatGPT. It’s worth examining the deeper cause of these risks.
Tools that wrap generative AI in a cloak of better usability are often hurried to market. They lack forethought and design choices that blend well with existing identity and security platforms.
Start-ups race applications to market with one purpose - to acquire as many users as possible. We saw this with Web 1.0 and 2.0. SaaS platforms needed many years to retrace their steps and establish formidable security. Many still lack privacy safeguards.
In the early 80s, I was learning assembly programming. I was poking values into registers. I was the closest to the hardware that a developer could possibly get. One wrong move, and it’s over.
Modern languages make it possible to abstract developers away from the hardware. Libraries make this possible. Modern software applications provide additional abstractions that defend not only the machinery but also the data. Security abstractions add yet another layer of safety. As we pull back, we can observe that no-code platforms add still more layers that make democratized solution building possible.
The closer humans get to the core of any technology, the greater the risks.
These layers—built entirely with software—are designed to make computing safer, more productive, and resilient. These are the engineering tenets that make highly accurate business transactions possible.
And yet… we hand everyone the keys to an LLM.
As you sit in front of your ChatGPT screen pondering that next prompt, know that you are talking directly to the LLM. You and everyone else using ChatGPT are assembler-level generative AI programmers who are trying to conjure sensical outputs from a big blob of numbers.
Perhaps I’m over-sensationalizing this relationship between prompts and LLMs, but let’s be clear - you are fundamentally as close to the LLM as you can get without building the LLM itself, and even that is within reach of nearly anyone in these early days of generative AI.
You’ve been given the ear of the LLM, and you can command it to do anything you want. Prompts are a direct and immediate pathway to the machinery known as the large language model.
And that’s the problem. It leads to:
Failures that are mistaken for hallucinations
Misleading conclusions
Poor fitness-for-purpose in solutions
Compromised security and privacy
Compromised IP protection
As experienced ChatGPT users, you could surely add to this list.
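The "assembler-level prompting" argument can be made concrete with a small sketch. Everything here is hypothetical: `call_llm` is a stand-in for any chat-completion endpoint, and `summarize_ticket` is an invented example of the kind of narrow, software-mediated function the article is pointing toward - it is not a real API.

```python
# Hypothetical stand-in for any LLM endpoint; returns a canned reply
# so the sketch is self-contained and runnable.
def call_llm(prompt: str) -> str:
    return f"[model output for: {prompt[:40]}...]"

# "Assembler-level" use: the user's raw text goes straight to the model.
# No guardrails, no structure - every risk in the list above is in play.
def raw_chat(user_text: str) -> str:
    return call_llm(user_text)

# Software-mediated use: the prompt is an internal implementation detail.
def summarize_ticket(ticket_id: str, body: str) -> str:
    # Input policy lives in code, not in the user's head.
    if len(body) > 4000:
        body = body[:4000]
    prompt = (
        "Summarize the following support ticket in two sentences. "
        f"Ticket {ticket_id}:\n{body}"
    )
    # The user never sees - or types - this string.
    return call_llm(prompt)
```

The second function is what a layer of software buys you: the scope of what the model is asked to do, and what it is given, is decided once by an engineer rather than improvised by every user.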
What’s the Remedy?
Software. History shows us why.
When Vulcan arrived in 1980, what came next? dBASE II, the first no-code database platform, became hyper-successful by wrapping those early personal-database capabilities in software that was resilient and easier to use. Left in its natural state, Vulcan would never have powered a near-billion-dollar business in the early 80s, a feat that, in today’s dollars, makes it one of the first software unicorns.
This pattern has existed since the late 70s. There’s no indication the pattern is changing, or that generative AI is exempt from this pattern.
Name any significant technological advance in computing over the past fifty years, and one thing transformed these advancements into pervasively useful products—software.
We will need more engineers doing what engineers have always done—build architectures that make this new generative AI capability thrive.
As I suggested early last year - generative AI has made possible apps that were inconceivable just 12 months ago.
It has unearthed visions of future solutions that were financially impractical until now. The explosion in software development will be fueled by a demand that is invisible today, but obvious in a few years.
As Kevin Xu said -
There are way more technologies that ought to be built that aren’t.
You can’t end prompting today. But you can stop assuming prompts have a bright future in the hands of users. They don’t.
AI itself will kill the need for direct LLM prompting.
Prepare for the Promptless Future
You might think you cannot do much to brace for this inevitable impact. I think otherwise. You can find ways to abstract yourself and your users from LLMs in many places.
Think copilot in everything you build.
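What "copilot in everything you build" can look like in practice is a feature that uses the LLM without exposing it. A minimal sketch, under stated assumptions: `call_llm` is a hypothetical placeholder returning a canned answer, and `suggest_filename` is an invented example feature, not a real product API.

```python
# Hypothetical LLM endpoint; canned reply keeps the sketch runnable.
def call_llm(prompt: str) -> str:
    return "meeting-notes-2024"

# Output policy: the app, not the model, decides what is acceptable.
ALLOWED = {"meeting-notes-2024", "weekly-report", "untitled"}

def suggest_filename(document_text: str) -> str:
    """Copilot-style helper: the user clicks 'Suggest a name'.
    Prompting, validation, and fallback are all software's job."""
    prompt = (
        "Suggest a short, lowercase file name for this document:\n"
        f"{document_text[:500]}"
    )
    candidate = call_llm(prompt).strip().lower()
    # Validate the model's output before it ever reaches the user.
    return candidate if candidate in ALLOWED else "untitled"
```

The user experiences a button, not a chat box; the prompt, the truncation, and the output validation are engineering decisions made once, behind the interface.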
Here’s a comprehensive list of possible pathways that may help: