Hello Curious Coders,
Before we talk about how to use AI well, it helps to step back and name a problem most people run into almost immediately.
You ask a model a question and get a surprisingly good answer. A few messages later, it forgets something that seemed important. Sometimes it even contradicts itself with full confidence.
The first time this happened to me, I caught myself reacting as if the model just wasn’t paying attention.
But that’s not what’s going on.
In this post, I want to explain why large language models behave this way, introduce a mental model Bruce uses in class to make the behavior predictable, and show you one practical habit you can start using right away.
The Confusion Most People Run Into
You explain your system carefully. You give context. You describe constraints. The model appears to follow along.
Then, ten messages later, it behaves as if none of that ever happened.
In practice, it’s often something simple. You start by saying you’re building a Phoenix application using LiveView. Later, you ask about authentication — and the model responds with a generic answer that ignores LiveView entirely.
At that point, most people ask the same question:
“If it understood me before, why doesn’t it remember now?”
Instead of blaming the model, it helps to name what’s actually happening.
The Conveyor Belt Model
Bruce uses a metaphor in class that makes this behavior predictable without oversimplifying what the model can do.
Imagine a conveyor belt.
Text goes in one end, a piece at a time — broken into tokens, roughly chunks of words or punctuation. The model processes whatever is currently on the belt. As new text arrives, older pieces fall off the other end. Every model has a limit to how much it can see at once. Some handle a few thousand tokens, others far more, but they all hit a ceiling.
The model never looks backward. It only works with whatever is still visible.
There’s no hidden long-term memory. No internal notebook. No persistent mental state like a human has. There’s only the current context window. Once something leaves that window, it’s no longer part of what the model can reason about.
“The model never looks backward. It only works with whatever is still visible on the belt.”
— Bruce Tate
That’s why, in the LiveView example, the model gave you a generic Plug-based answer. By the time you asked about authentication, the fact that this was a LiveView application had already fallen off the belt.
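If it helps to see the belt mechanic concretely, here is a toy sketch in Elixir. It is only an illustration, not how tokenization actually works: real tokens are subword pieces, real windows hold thousands of them, and the Belt module, its window size, and the word splitting are all made up for this example.

```elixir
defmodule Belt do
  @window 8

  # Append the new words, then keep only the newest @window of them,
  # the way older tokens fall off the far end of the belt.
  def push(belt, text) do
    Enum.take(belt ++ String.split(text), -@window)
  end
end

belt =
  []
  |> Belt.push("This is a Phoenix LiveView application.")
  |> Belt.push("Here are the modules, schemas, and constraints...")
  |> Belt.push("Now, how should I handle authentication?")

IO.inspect(belt)
# Only the newest words survive; "Phoenix LiveView" has already fallen off.
```

By the time the authentication question arrives, the LiveView detail is no longer on the belt, so nothing the model produces can depend on it.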
Why This Fits Elixir Surprisingly Well
This limitation lines up neatly with Elixir’s philosophy.
Elixir pushes you toward explicit structure and away from hidden state. It rewards systems where decisions are encoded in modules, names, and documentation instead of living in someone’s head.
Large language models behave the same way. They struggle when structure is implicit. They do much better when systems are legible: meaningful names, clear responsibilities, and written-down invariants.
The model isn’t becoming an Elixir expert. It’s responding to structure.
There’s also a quieter advantage here. A lot of high-quality, production-grade Elixir code is available in the open. When a model generates Elixir, it’s often drawing from real systems with strong conventions, not just toy examples. That shows up in the output — especially if you keep the model anchored to your architecture.
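Legibility is easier to picture with a concrete example. The module below is a hypothetical sketch (Shop.Orders and its fields are invented): the name states the responsibility, the @moduledoc writes down the invariant, and the function heads make the rule explicit rather than leaving it in someone's head.

```elixir
defmodule Shop.Orders do
  @moduledoc """
  Order lifecycle for the shop.
  Invariant: an order may only ship after its payment has been confirmed.
  """

  @doc "Ships a paid order; anything else is rejected."
  def ship(%{status: :paid} = order), do: {:ok, %{order | status: :shipped}}
  def ship(_order), do: {:error, :unpaid}
end
```

Code written like this gives a model, and a new teammate, the structure it needs without relying on context that may have already scrolled away.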
What You Can Do Right Now
Understanding the conveyor belt model is useful. But here’s the practical part.
If you’re about to ask the model a question that depends on an earlier decision, restate that decision explicitly. Don’t assume it’s still in view.
Instead of asking:
How should I handle authentication?
Ask:
In this Phoenix LiveView application, how should I handle authentication
while preserving the connection lifecycle we've been working with?
That small restatement brings the right context back onto the belt.
This can feel repetitive at first. But once you understand how the model works, repetition stops feeling like waste. It starts feeling like good engineering.
A simple rule of thumb:
When asking a follow-up question after several exchanges, restate your framework and your key constraint. Then ask the question.
Two extra sentences. A much better answer.
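If you find yourself typing the same restatement over and over, you can make it mechanical. Here is a minimal sketch, assuming you assemble prompts as plain strings; ProjectContext and its wording are invented for illustration, so edit them to match your own project.

```elixir
defmodule ProjectContext do
  # These two lines are the restatement. The wording is just an example.
  @context """
  This is a Phoenix LiveView application.
  Key constraint: preserve the LiveView connection lifecycle.
  """

  @doc "Prefixes every question with the framework and the key constraint."
  def prompt(question), do: @context <> "\n" <> question
end

IO.puts(ProjectContext.prompt("How should I handle authentication?"))
```

The helper itself isn't the point. The point is that the context gets restated every single time, so it is always on the belt when the question arrives.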
Closing Thought
AI doesn’t forget because it’s careless. It forgets because it’s honest about its limits.
Once you understand those limits, you stop fighting the tool and start designing with it. That shift — from frustration to leverage — is where these systems become genuinely useful.
In the next post, we’ll look at the question that naturally follows: which AI agent should you actually use? Claude, ChatGPT, Cursor — not in theory, but in real workflows with real costs.
See you in the next chapter.
— Paulo Valim & Bruce Tate
Go Deeper with the Groxio Yearly Subscription
This article is part of Groxio’s ongoing work on AI-augmented Elixir development. The Groxio Yearly Subscription gives you access to our full Elixir curriculum — including Elixir, LiveView, OTP, and our new AI course. The first AI module is already available, with more releasing this winter.