The Context Paradox: Why Generative AI Needs More Than Just Data
Stop Treating Context as an Afterthought: Design for it. Invest in it. Test it.
Let’s be clear: generative AI isn’t failing because the models are too simple, or because the code is buggy, or even because the underlying math is misunderstood. It’s failing—again and again—because we hand it a jigsaw puzzle with half the pieces missing and then marvel at the nonsense it produces. This isn’t a technical shortcoming; it’s a context crisis.
The Mirage of “Smarts” in Agents
We’ve entered an era where prompt engineering is the new black, but as Philipp Schmid (Google DeepMind) recently noted,
“The main thing that determines whether an agent succeeds or fails is the quality of the context you give it. Most agent failures are not model failures anymore; they are context failures.”
This is not merely a semantic distinction. It’s a fundamental shift in how we should approach AI: from coding logic to engineering context.
Think about it—when you ask an LLM to generate a customer service script, summarize a legal document, or design a workflow, what you’re really doing is giving it a sliver of the information it needs. The rest—intent, history, nuance, the “why” behind the “what”—is left to the model’s imagination. And that’s where things go off the rails.
Fragmented Memory: The Silent Killer
As I wrote in Memory, providers of large language models are embracing memory, but the result is often a fragmented puzzle with missing pieces. You can’t copy and paste your way to pervasive intelligence. When your context is scattered across Slack threads, emails, and docs, no amount of LLM horsepower will make the machine “understand” your world. The illusion of comprehensive memory is just that: an illusion. It’s why I use Pieces, and why I use Flowith: both are context-building frameworks that lean into sustainable backplanes of relevant knowledge.
Contexting: The Real Work of AI
In I Don’t Code, I Context, I argued that the future isn’t about writing code—it’s about constructing the proper context. “Contexting” is the act of actively shaping the circumstances, background, and meaning for an AI agent to operate intelligently. It’s not enough to give an agent a task; you must give it the story, the rules, the exceptions, and—critically—the boundaries. I spend hours, sometimes days, building narratives and unit-testing contexts, because a well-structured context is the difference between an agent that’s merely functional and one that’s transformative. Most AI users spend minutes, perhaps less, setting the table for an LLM.
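To make “contexting” concrete, here is a minimal sketch of what a structured, testable context might look like. The AgentContext class and its field names are hypothetical, not from any particular framework; the point is that the story, rules, exceptions, and boundaries become explicit, inspectable inputs rather than whatever happens to be in the prompt.

```python
from dataclasses import dataclass, field

@dataclass
class AgentContext:
    """Hypothetical container that makes the 'story' behind a task explicit."""
    story: str                                           # the narrative: who, why, what came before
    rules: list[str] = field(default_factory=list)       # hard requirements
    exceptions: list[str] = field(default_factory=list)  # known edge cases
    boundaries: list[str] = field(default_factory=list)  # what the agent must never do

    def render(self) -> str:
        """Flatten the structured context into a prompt preamble."""
        sections = [
            ("Background", [self.story]),
            ("Rules", self.rules),
            ("Known exceptions", self.exceptions),
            ("Boundaries", self.boundaries),
        ]
        parts: list[str] = []
        for title, items in sections:
            if items:
                parts.append(f"## {title}")
                parts.extend(f"- {item}" for item in items)
        return "\n".join(parts)

# Made-up example values for a support-agent scenario.
ctx = AgentContext(
    story="Tier-1 support agent for Acme's billing portal; customers are often mid-dispute.",
    rules=["Quote the refund policy verbatim", "Escalate anything involving chargebacks"],
    exceptions=["Enterprise accounts have a 90-day refund window, not 30"],
    boundaries=["Never promise a refund amount", "Never reveal internal ticket notes"],
)
print(ctx.render())
```

Because the context is data rather than loose prose, it can be versioned, diffed, and, as argued below, unit-tested.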
The Cost of Context Failure
When context is thin or misaligned, you get hallucinations, false positives, and unreliable results. This isn’t just an inconvenience—it’s a systemic risk. Businesses deploying generative AI without robust context grounding are courting disaster, whether it’s a chatbot gone rogue or a workflow that derails compliance. The remedy? Retrieval-augmented generation (RAG), comprehensive long-term memory, and ruthless attention to context engineering.
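For a rough feel of the RAG half of that remedy, here is a toy sketch. A word-overlap score stands in for a real embedding model (an assumption made for brevity), but the mechanics are the same: score stored fragments against the query, then prepend only the most relevant ones to the prompt.

```python
def similarity(a: str, b: str) -> float:
    """Toy stand-in for an embedding model: Jaccard overlap of word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def retrieve(query: str, fragments: list[str], k: int = 2) -> list[str]:
    """Return the k fragments most relevant to the query."""
    return sorted(fragments, key=lambda f: similarity(query, f), reverse=True)[:k]

# Made-up knowledge fragments that would normally live in a real store.
knowledge = [
    "Refunds on annual plans are prorated after the first 30 days.",
    "The on-call rotation changes every Monday at 09:00 UTC.",
    "Chargebacks must be escalated to the billing team within 24 hours.",
]

query = "A customer on an annual plan is asking about a refund."
grounding = retrieve(query, knowledge)

# Assemble the grounded prompt: retrieved fragments first, then the task.
prompt = (
    "Context:\n"
    + "\n".join(f"- {frag}" for frag in grounding)
    + f"\n\nTask: {query}"
)
print(prompt)
```

The model only ever sees what retrieval hands it, which is exactly why thin or misaligned fragments translate directly into hallucinated answers.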
Tips for Your AI-Driven Future
Stop Treating Context as an Afterthought: Design for it. Invest in it. Test it.
Engineer for Comprehensive Memory: Aggregate, unify, and validate the fragments before you ever hit “generate.”
Unit-Test Your Contexts: If you wouldn’t ship untested code, why ship untested context? (A sketch follows this list.)
Recognize Context as the Competitive Edge: The difference between a cheap demo and a magical product isn’t the model—it’s the context.
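On the unit-testing tip, the checks can be as plain as assertions over the rendered context, run before anything reaches the model. A minimal sketch, assuming a hypothetical check_context helper and made-up requirements for a support-agent context:

```python
def check_context(rendered: str) -> list[str]:
    """Pre-flight checks on a rendered context; returns a list of failures."""
    failures = []
    lowered = rendered.lower()
    required = ["refund policy", "boundaries"]  # facts/sections that must appear
    forbidden = ["internal ticket notes"]       # fragments that must never leak in
    for needle in required:
        if needle not in lowered:
            failures.append(f"missing required element: {needle!r}")
    for needle in forbidden:
        if needle in lowered:
            failures.append(f"forbidden fragment present: {needle!r}")
    if len(rendered) > 8_000:                   # crude guard on context-window budget
        failures.append("context exceeds size budget")
    return failures

# Treat a non-empty failure list like a failing test: the context never ships.
assert not check_context(
    "## Boundaries\n- Quote the refund policy verbatim; never promise amounts."
)
```

The specific checks will differ per agent; the discipline is the same as with code: a context that fails its tests never reaches production.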
Philipp Schmid is right: “context engineering” is the new skill. The next wave of generative AI breakthroughs won’t be powered by bigger models or cleverer code, but by those who master the art and science of context.
If you think otherwise, you’re missing the most critical piece of the puzzle.
Does this mean functional specs are back?