Impertinent
All Things AI

Truth in AI

Another Round of Gemini Drama Proves We Expect a Modicum of Truth

Bill French
Feb 28, 2024

Just uttering this title makes me puke a little in my mouth. Generative AI is not intended to provide truthful results. To the contrary, its real intent is to be generative, which is to say it should lean into whatever context we use to coax it.

Mistakenly, the masses believe ChatGPT is the next generation of search. Having digested thousands of LinkedIn and X posts on this topic, I can understand how they got this false impression.

Large language models (LLMs) are not big balls of factual information. You might surface some factual nuggets depending on your prompt style, but an LLM is an entirely different beast from a search index or a database of known artifacts.
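To make the distinction concrete, here's a minimal sketch in Python. The lookup half is how a search index behaves; the sampling half is roughly what an LLM does at every step. The tokens and probabilities are invented for illustration, not drawn from any real model.

```python
import random

# A search index or database does a lookup; the answer is stored verbatim.
facts = {"capital of France": "Paris"}
print(facts["capital of France"])  # always "Paris"

# An LLM instead samples the next token from a learned probability
# distribution. These probabilities are made up for illustration.
next_token_probs = {"Paris": 0.92, "Lyon": 0.05, "Nice": 0.03}
tokens = list(next_token_probs)
weights = list(next_token_probs.values())
print(random.choices(tokens, weights=weights)[0])  # usually "Paris", but not always
```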

Nevertheless, we expect generative AI systems to give us truth to some degree.

If you asked a scientist what color the sky is and she wandered off into a debate about the nature of color without ever addressing the question, you would think that person incompetent. The same standard applies to Gemini Advanced and ChatGPT.

In generative AI, we expect a level of truthfulness, especially concerning basic ideas and concepts that possess well-known and accepted truths. If it can’t base its answers on universal truths, how are we to trust it with more complex tasks that may be fundamentally dependent on other known facts?

An LLM, while designed to be creative, must do so with a modicum of truth.

Google stepped in some real doggie doo last week with a flurry of examples demonstrating that Gemini had been trained to utterly disregard certain truths.

X - “Founding Fathers”

Speculation varies about this misstep, but it was obviously intentional: it wasn't tied to any single image task, nor was it limited to images at all. The problem triggered a massive backlash across the Interwebs.

X - Backlash

For the better part of the 2000s, Google accused SEO practitioners of gaming its search algorithms. To this day, practitioners keep attempting this mostly futile exercise, and the consequences of trying to game search can be expensive.

I've always maintained, since before 2002, that the best way to play the SEO game is to spend all your SEO dollars on better content.

This week, the shoe is on the other foot. Google has openly gamed Gemini in ways that read as annoying at a minimum and as outright lying at the extreme.

Will there be consequences?

Gemini's inability to face facts was evident across every aspect of the product. Many, perhaps most, saw it as incompetence, and investors reacted negatively to the news.

Forbes - Google’s Gemini Headaches Spur $90 Billion Selloff

Let’s set this Google calamity aside and discuss the generative AI practicalities that probably matter most.

All AI systems are based on probabilities.

We are more satisfied as the probability behind a generative AI's output gets closer to 0.99999, or whatever we interpret as near-perfect.
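As a toy illustration of why that last decimal place matters, here's a short Python sketch comparing a near-perfect distribution with a middling one. The token names and numbers are hypothetical, invented purely to show the effect.

```python
import random

def share_of(token, probs, trials=100_000):
    """Sample `trials` draws from `probs` and return the fraction equal to `token`."""
    draws = random.choices(list(probs), weights=list(probs.values()), k=trials)
    return draws.count(token) / trials

# Hypothetical next-token distributions; the numbers are invented.
near_perfect = {"Paris": 0.99999, "Lyon": 0.00001}
middling = {"Paris": 0.60, "Lyon": 0.25, "Nice": 0.15}

print(f"near-perfect model says 'Paris' {share_of('Paris', near_perfect):.3%} of the time")
print(f"middling model says 'Paris' {share_of('Paris', middling):.3%} of the time")
```

At 0.99999, the wrong answer surfaces roughly once in a hundred thousand tries; at 0.60, it surfaces four times out of ten.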
