AI Must Think Before It Speaks, But Sometimes It Shouldn't Speak at All
Have you ever wished your LLM would share its thoughts in real time so you could also guide it in real time?
I’ve repeatedly observed that large language models, while getting good at thinking before they speak, should also be able to think without saying anything at all. This is a key nuance in agent design.
There’s a preponderance of evidence to suggest that today’s crop of agentic frameworks¹, such as Deepgram’s Voice-to-Voice API and OpenAI’s Real-time Voice API, must su…