We [Apparently] Fear Opaque AI
The Rack-and-Pinion Steering of Artificial Intelligence
ChatGPT was arguably the first openly visible and directly accessible artificial intelligence app to close the gap between your fingers and AI.
Remember rack-and-pinion steering? ChatGPT is metaphorically very similar to this automotive engineering marvel. It transforms the steering wheel into a sensor that provides tactile feedback and shortens the time your brain needs to make rapid adjustments.
The result is a very pleasurable drive because the vehicle goes precisely where you want it to. It lets you feel the road. You have greater control, and with more control comes the confidence to go faster, turn sharper, and move closer to the apex of the corner.
AGI apps like Bard and ChatGPT have created a straight(er) line between what is known and your context and interests, and they do it expediently and with far greater control. To be clear, the control I’m referring to is the avoidance of all the ads and distractions of a search engine. What we fear most are hallucinations, along with the possibility that AI could run off and destroy mankind.
I’ve often argued that hallucinations are not a thing; crappy prompts are, and crappy prompts tend to cause hallucinations. If you give an LLM the latitude to be creative, it will jump at the chance to show you how good it is at fabricating a reality. That’s what these models were designed to do. Hallucinations are not a flaw.
But what if your rack-and-pinion steering, without warning, sends you off a cliff? This is where the metaphor leaves the road, but with good reason. AGI apps like ChatGPT lack the precise equivalent of a steering wheel. More on that below.
We generally don’t realize that AI already exists everywhere. It’s been around for decades, and we use it all the time. The hidden side of AI is vast.
This post wouldn’t have been possible without AI running somewhere. The salad you ate yesterday made it to your plate with no fewer than fifty AI inferences. Your inbox was teeming with important messages this morning because AI running on a thousand servers defended it from a few hundred million attacks every hour. The last time you had a medical procedure or a test, dozens of AI processes spun up to provide life-sustaining data and automations.
Life on this planet profoundly depends on millions of AI inferences every minute of every day.
We’ve been using AI for the last decade, but we don’t call it AI or think about it. It’s invisible and works in the background, like this robot that magically maintains its balance while we watch in disbelief.
Many AI processes are needed to make this robotics prototype possible. We look at the robot and admire it for what it can do. We don’t think about the underlying AI or the possibility that it could turn on us and be used for nefarious purposes. Robots, in general, already have a PR problem, and Hollywood created it.
Your life depends on many equally hidden but often more primitive AI processes. Some have been helping you for more than a few decades: the one that reminds you when your seatbelt isn’t fastened while the car is moving, or the Amazon recommendation that helped you discover your air fryer. The MRI your grandmother needed to save her life relied on a ton of AI. Did you question the efficacy of the data? Probably not. Neither did the doctors. They used the MRI data without a second thought because it’s a thoroughly tested copilot designed with specific objectives.
Microsoft Office will soon perform dozens of inferences for every aspect of the product. Its various copilots will help you do your work better and faster, and you’ll be thrilled because you might get home for dinner occasionally.
We’ve gone decades without fear of AI.
Then one AI chatbot demo arrives that is overtly visible to users, intended only to show how all humans can use AGI, and we panic.