We Still Don't Get It: Prompts are Requirements, Not Queries
Almost 100 million ChatGPT users believe prompts are like search queries. Perhaps 10 million or so believe otherwise.
Nearly 100 million ChatGPT users treat prompts as if they were search queries. The misconception is rooted in the essence of Internet search: since the advent of the Internet, we have leaned on one primary pathway connecting us to information, and that pathway is search.
Search has always been like a bucket holding millions of shades of paint: biases, inaccuracies, and latent indexing choices make it ever harder to find that perfect hue. Somehow, most of us learned to deal with it, and Internet search shaped the muscle memory behind our queries.
This dependency on search has led many users to perceive ChatGPT as simply a smarter search engine. That perspective is fundamentally flawed, and it is perpetuated by ill-informed media coverage and AI pundits.
Comparing and contrasting two entirely different technologies is deeply misleading. This comparison should not exist. Here’s why…
Search indices are designed to be a reflection of the actual data. LLMs are intended to be a reflection of the relationships within that data.
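To make that distinction concrete, here is an illustrative Python sketch. The documents, the inverted index, and the co-occurrence counts are all invented for this example: an index maps terms back to the data that contains them, while even a toy relationship model records how terms relate to one another, which is the kind of signal an LLM learns at vastly greater scale and depth.

```python
from collections import defaultdict
from itertools import combinations

# Tiny invented corpus, purely for illustration.
docs = {
    "d1": "prompts are requirements not queries",
    "d2": "search queries retrieve documents",
    "d3": "requirements describe intent and constraints",
}

# A search index reflects the data itself: term -> documents containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

# A toy "relationship" model reflects how terms relate to one another:
# here, simple co-occurrence counts within each document.
cooccur = defaultdict(int)
for text in docs.values():
    for a, b in combinations(sorted(set(text.split())), 2):
        cooccur[(a, b)] += 1

print(sorted(index["queries"]))        # → ['d1', 'd2']: documents containing the term
print(cooccur[("queries", "search")])  # → 1: strength of a term-to-term relationship
```

Looking up `index["queries"]` returns the documents that literally contain the term; `cooccur` knows nothing about any single document, only about how terms travel together.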
Further complicating this morass of poor reporting and consumer misunderstanding is that queries and prompts are very different animals.
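To illustrate how different those animals are (both strings below are invented for the example), a query compresses intent into a few keywords and trusts ranking to do the rest, while a prompt reads like a requirements statement: it names the deliverable, the constraints, and the acceptance criteria, because the model builds an answer rather than retrieving one.

```python
# A query: a handful of keywords aimed at a ranking algorithm.
query = "python csv dedupe"

# A prompt: a requirements spec aimed at a system that constructs its answer.
prompt = """\
Write a Python function that removes duplicate rows from a CSV file.
Requirements:
- Preserve the original row order.
- Treat rows as duplicates only when every column matches exactly.
- Take the input path as an argument and return the unique rows as lists.
"""

print(len(query.split()))       # → 3: a few tokens
print(len(prompt.splitlines())) # → 5: a structured spec, line by line
```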