Coda, Gemini, No-code, and GPT Store: Some Thoughts
In the aftermath of an AI project I recently completed using Gemini-Pro, I have some additional thoughts about apps being locked out of your best data for use with AI, and about Coda's dependency on OpenAI.
A while ago - 18 AI years to be exact - I penned this article warning Coda that dependency on a single LLM or generative AI provider was risky. That was July 2023. One AI year is about 22 days, and even that relative equivalency is compressing at an astonishing rate. The pace of change suggests LLM agnosticism is on the horizon.
We’ve officially entered the era of LLM agnosticism, which suggests a significant shape-shift will occur for all digital work platforms. But there’s more…
Gemini comes in three flavors, and one (ironically named Nano) will be HUGE. Nano will make it possible to perform zero-latency generative AI inferencing on your watch or phone. OpenAI hasn’t leaned into mobile or inferencing at the edge. Google has consistently demonstrated a proclivity for tools and products that enable apps to run better off-piste.
What’s Coda’s mobile strategy for generative AI?
There isn’t one. Then there’s the deep integration between Vertex AI and Firestore, which will soon allow server-side Ultra inferencing to be synchronized with your watch in less than 100 ms. These and many more scenarios, some of which will stun us, will begin to play out in the spring of 2024.
The Google Gemini release last month is one of many predicted events that triggered me to write about the economics of LLM agnosticism. The gist…
When there was only one viable LLM for commercial use, this [OpenAI] was a reasonable choice. That ship has sailed and it’s not coming back. The landscape of LLMs with domain-specific advantages will only fracture more. Soon, there will be a vast landscape of options to choose from that help us accomplish our best AI work.
I’m not insensitive to the complexities and configuration issues that come along with this requirement, but I don’t see any other pathway for the future of Coda AI. It must allow no-code developers the choice of LLMs at a product, document, page, and component level.
A few AI years later, I urged all my clients and colleagues, including Coda itself, to pump the brakes on OpenAI dependencies. That was August 2023.
I’ve also suggested Coda AI become LLM-agnostic, again, mostly driven by performance and a little by inferencing costs. Now there may be another reason.
Sadly, I was right.
Last week, talks between the NY Times and OpenAI reached an impasse. Now, the lawyers are driving the bus. That’s a shame because I’m pretty sure OpenAI wants to compensate the Times.
In the aftermath of the Gemini [Pro] release, which is [reportedly] slightly less capable than GPT-3.5 Turbo, anyone who followed an all-in strategy with OpenAI is busy defending that decision by attacking Gemini as it exists today in Bard. They should instead be looking ahead to the imminent availability of Gemini Ultra, which is likely to be somewhat, and perhaps much, better than GPT-4. This video is mind-blowing (heavily edited with many jump-cuts). Despite the obvious sleight-of-hand in the demo, Wired said…
Google’s Gemini Is the Real Start of the Generative AI Boom
I just finished an AI project that assessed Gemini-Pro, and while it was far better than PaLM-2, it fell just short of being a reliable GPT-3.5 Turbo equivalent. It was well-behaved for the most part but a little slower. This is a beta release and is likely to improve quickly.
Back to the point of this post:
Microsoft and Google are sitting on our most valuable data and we desperately want to use it in our tools.
My Gemini-Ultra Release Prediction