Stop Prompting. Start Anchoring.
Why Decisions Should Live in Code, Not Conversations

Hello, Curious Coders

If you’ve been working with AI tools for a while, you’ve probably noticed something frustrating.

You make a clear decision. You explain it carefully. The model agrees.

Then, a few steps later, it quietly violates that decision.

At first, this feels like the same “forgetting” problem we talked about earlier. And partly, it is. But there’s another layer here that matters more once you start using AI every day.

The real issue isn’t that the model forgot. It’s that the decision never lived anywhere durable to begin with.

In this post, I want to name that problem clearly, show you what durable decisions actually look like in code, and give you one habit you can start using immediately.

Where Things Quietly Go Wrong

You explain the architecture in chat. The model responds intelligently, mirrors your language, seems aligned. So you move on.

A few prompts later, it suggests a change that quietly bends the system out of shape. Nothing breaks, but the code becomes harder to reason about.

From the model’s point of view, nothing went wrong. The earlier explanation simply isn’t part of context anymore. And even when it is, chat explanations are fragile. They’re easy to agree with and just as easy to drift away from.

This is where most people try to fix the problem with better prompts.

That usually makes things worse.


The Wrong Fix: More Prompting

When drift shows up, the instinctive response is to explain more.

Longer prompts. More careful wording. Repeated reminders.

That can help temporarily. But it doesn’t scale, and it doesn’t hold up over time.

Why? Because prompts are conversations. Conversations disappear. Even with large context windows, they’re not a stable place to store decisions that matter.

If your system only works when the model remembers what you said earlier, it’s already fragile.

So instead of asking, “How do I prompt better?”, ask a more useful question.

The Better Question

Where does this decision live if the chat disappears?

If the answer is “nowhere,” that’s the problem.

AI collaboration becomes reliable when decisions live in artifacts, not conversations.

That’s the shift.

What “Anchoring” Actually Means

Anchoring doesn’t mean writing documentation for the sake of documentation. And it doesn’t mean adding comments everywhere.

Anchoring means encoding intent in places the model will see at the moment it generates code.

In Elixir, that usually means module names that carry responsibility, function names that encode behavior, and directory structure that reflects ownership. It means supervision trees that make process boundaries explicit, type specs that constrain behavior, and tests that assert invariants.

These aren’t just for humans. They’re anchors.

They reduce the number of decisions the model can accidentally make for you.
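
Even an ordinary supervision tree is an anchor. Here is a minimal sketch, with placeholder names (MyApp.Application, MyApp.SessionRegistry, MyAppWeb.Endpoint are assumptions, not code from this post), of process boundaries made explicit where the model will see them:

defmodule MyApp.Application do
  use Application

  @impl true
  def start(_type, _args) do
    children = [
      # Ownership is declared here, in one place the model will read,
      # rather than wherever a later change finds convenient
      {Registry, keys: :unique, name: MyApp.SessionRegistry},
      MyAppWeb.Endpoint
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end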

A Concrete Example

Let’s look at what this actually means in code.

Weak Anchoring (Prompt-Dependent):

defmodule MyApp.Helpers do
  # Process order data and calculate totals
  def process(data) do
    # ...
  end
end

You explain in chat: “This validates orders and calculates pricing.”

The model generates something reasonable. But a few prompts later, when you ask it to “add discount logic,” it might put the code anywhere because nothing in the structure constrains where discount logic belongs.

Strong Anchoring (Structure-Dependent):

defmodule MyApp.OrderProcessing do
  @moduledoc """
  Handles order validation and pricing calculations.
  Pricing logic is centralized here to maintain consistency.
  """

  @doc "Validates order data and calculates totals"
  def validate_and_price(order_params) when is_map(order_params) do
    # ...
  end
end

Same task. But now the model sees:

  • Module name signals domain
  • Function name signals behavior
  • Documentation anchors constraints
  • Guard clause anchors types

When you ask to “add discount logic,” the model knows it belongs in MyApp.OrderProcessing, not scattered across helpers.

You didn’t write a longer prompt. You anchored the decision in structure.
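
To see the effect, here is a hedged sketch of where that discount change would plausibly land. The apply_discount/2 name and the :total field are illustrative assumptions, not guaranteed model output:

defmodule MyApp.OrderProcessing do
  @doc "Applies a percentage discount. Pricing stays in this module."
  def apply_discount(priced_order, discount_percent)
      when is_map(priced_order) and discount_percent >= 0 and discount_percent <= 100 do
    discount = priced_order.total * discount_percent / 100
    Map.put(priced_order, :total, priced_order.total - discount)
  end
end

The structure, not a longer prompt, is what keeps pricing concerns in one place.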

An Architectural Example

Here’s where this gets more important.

Imagine a LiveView that owns real-time state for a session.

In chat, you explain: “This LiveView owns the session state. Other processes should not mutate it directly.”

The model agrees.

Later, you ask it to “simplify” something, and it suggests moving that state into a shared GenServer for convenience.

Nothing in the code told the model that this would violate an important constraint. The constraint lived only in your explanation.

Now imagine the same system, but with:

  • A LiveView module named SessionLive
  • Functions like update_session/2 scoped to that module
  • No public API for external mutation
  • A test that asserts state ownership behavior

You can give the same instruction, and the model is far less likely to drift. Not because it’s smarter, but because the structure makes the wrong move harder to justify.

You didn’t remind the model. You constrained it.
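
A minimal sketch of that structure, assuming hypothetical module, event, and assign names (MyAppWeb.SessionLive, "update_session", :session_state), might look like this:

defmodule MyAppWeb.SessionLive do
  @moduledoc """
  Owns all real-time session state for one connected user.
  Other processes must not mutate it directly; every change
  goes through this module's callbacks.
  """
  use Phoenix.LiveView

  @impl true
  def mount(_params, _session, socket) do
    {:ok, assign(socket, :session_state, %{})}
  end

  @impl true
  def handle_event("update_session", %{"changes" => changes}, socket) do
    {:noreply, update_session(socket, changes)}
  end

  @impl true
  def render(assigns) do
    ~H"<pre><%= inspect(@session_state) %></pre>"
  end

  # Private: the only place session state changes. There is no public
  # API for external mutation, so "move this into a shared GenServer"
  # is visibly a boundary change, not a simplification.
  defp update_session(socket, changes) when is_map(changes) do
    assign(socket, :session_state, Map.merge(socket.assigns.session_state, changes))
  end
end

With update_session/2 private and no public mutation API, the wrong move has to announce itself as a structural change before anyone can make it.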

“Don’t ask the model to remember. Make it impossible to forget.”
— Bruce Tate

Why This Fits Elixir Perfectly

Elixir already trains you to write code this way.

Small, focused modules tell the model what kind of work belongs where. Meaningful names guide completions. Explicit boundaries show what’s public versus private. Type specs constrain what’s valid, and documentation encodes intent the model can read.

When you follow Elixir’s conventions, you’re not just writing clear code for humans. You’re creating a navigation system for AI.

Good structure doesn’t just make code readable. It makes it completable.
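
Type specs are a concrete instance of that: an anchor the model reads at generation time. A hedged sketch, returning to the earlier module and assuming orders are plain maps with an :items list:

defmodule MyApp.OrderProcessing do
  @type order :: %{required(:items) => [map()], optional(:total) => float()}

  @doc "Validates order data and calculates totals"
  @spec validate_and_price(order()) :: {:ok, order()} | {:error, String.t()}
  def validate_and_price(%{items: items} = order) when is_list(items) do
    # Sum per-item prices; :price is an assumed field for this sketch
    total = Enum.reduce(items, 0.0, fn item, acc -> acc + Map.get(item, :price, 0.0) end)
    {:ok, Map.put(order, :total, total)}
  end

  def validate_and_price(_other), do: {:error, "expected a map with an :items list"}
end

Anything generated against this module now has a declared shape to violate, which is far easier to catch than a forgotten sentence in chat.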

One Practical Habit You Can Use Today

Before asking an AI agent to change code, pause and ask:

“If this decision matters, where does it live in the code?”

If the answer is “only in this chat,” stop.

Anchor it first:

  • Check the module name. Does it signal the kind of work you want?
  • Check the function signature. Does it show types, constraints, or patterns?
  • Check nearby functions. Do they demonstrate the boundaries?

If any of those are vague or generic, fix them first. Rename something. Add a boundary. Write the test. Make the constraint visible.
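
“Write the test” can be as small as one assertion that pins the invariant down. A hedged sketch, reusing the hypothetical apply_discount/2 from earlier:

defmodule MyApp.OrderProcessingTest do
  use ExUnit.Case, async: true

  # This turns "discounts never push a total below zero" from a chat
  # remark into an artifact that survives every future conversation.
  test "a full discount never produces a negative total" do
    order = %{total: 50.0}

    assert %{total: total} = MyApp.OrderProcessing.apply_discount(order, 100)
    assert total >= 0
  end
end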

Then ask the model to proceed.

Two extra minutes up front saves hours of cleanup later.

Why This Matters as Systems Grow

As your system grows, so does the surface area for drift. More files, more agents, more changes over time.

Anchoring isn’t about controlling the model. It’s about controlling where decisions are allowed to happen.

Once intent lives in code, you can switch models, restart conversations, or bring in new tools without losing coherence.

That’s when AI starts to feel like leverage instead of risk.

Closing Thought

Prompting is ephemeral. Structure is durable.

When you move decisions out of chat and into artifacts, you stop babysitting the model and start collaborating with it.

The shift from “how do I prompt this?” to “how do I structure this?” is where AI stops feeling like a fight and starts feeling like a tool.

In the next post, we’ll build on this and look at how to introduce checkpoints and summaries so you can detect drift early, before it compounds across files and sessions.

See you in the next chapter.

— Paulo Valim & Bruce Tate


Want to Go Deeper into AI-Augmented Elixir?

This article is part of our series on building reliable systems with AI without losing control of your architecture. Our AI course walks through the workflows, guardrails, and architectural patterns that keep AI productive instead of chaotic.

Bruce Tate
System architecture expert and author of 10+ Elixir books.
Paulo Valim
Full-stack Elixir developer and educator teaching modern Elixir and AI-assisted development.