The Morning After: What That 30-Minute Phoenix App Actually Costs
Why Working Code and Maintainable Architecture Are Not the Same Thing
Part of the AI Agents Series (Part 7)

Hello Curious Coders,

In the last post of this series, I showed how Bruce built a complete Phoenix LiveView application in 30 minutes. Conway’s Game of Life, pattern library, speed controls, working interface. And I promised that next I’d open the code and show what really came out of those thirty minutes.

So here it is. =)

This isn’t criticism for its own sake. The app works, and that’s genuine. Few people could build something this functional in that time frame. But when Bruce opened the editor and started reading, what he found was a useful reminder: working code and maintainable code aren’t the same thing.

Generation speed and architectural quality are separate variables. You can get one without getting the other.

“What you’re going to wind up with is a massive system that is going to be too difficult to maintain long term, even for the computers.”

– Bruce Tate

The important point is not that the model failed. In several places, it did very well.

What Claude Got Right

The first module Bruce opened was the functional core. The data structure choice was smart: a MapSet of {row, column} tuples for live cells. If a coordinate is in the set, the cell is alive. If it’s absent, the cell is dead. That’s a clean representation for a sparse grid where most cells are empty at any given time.

The structure followed CRC – Construct, Reduce, Convert – the pattern Groxio developers use to keep functional modules predictable. A new/1 that builds state. A live?/2 that queries it. A toggle_cell/2 that transforms it. A next_generation/1 that steps it forward. Type specs on every function. Pure functions you could test without a browser, without a socket, without any LiveView machinery involved.
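To make that shape concrete, here’s a minimal sketch of a CRC-style core for this representation. The module and function names come from the description above; the bodies are my reconstruction, not the generated code itself:

```elixir
defmodule ConwayGame.Game do
  @moduledoc "Functional core: a MapSet of {row, col} tuples holds the live cells."

  @type cell :: {integer, integer}
  @type t :: MapSet.t(cell)

  # Construct
  @spec new(Enumerable.t()) :: t
  def new(cells \\ []), do: MapSet.new(cells)

  # Reduce: query
  @spec live?(t, cell) :: boolean
  def live?(grid, cell), do: MapSet.member?(grid, cell)

  # Reduce: transform one cell
  @spec toggle_cell(t, cell) :: t
  def toggle_cell(grid, cell) do
    if live?(grid, cell), do: MapSet.delete(grid, cell), else: MapSet.put(grid, cell)
  end

  # Reduce: step the whole board forward.
  # Only neighbors of live cells can change, so the sparse set is all we need.
  @spec next_generation(t) :: t
  def next_generation(grid) do
    grid
    |> Enum.flat_map(&neighbors/1)
    |> Enum.frequencies()
    |> Enum.filter(fn {cell, n} -> n == 3 or (n == 2 and live?(grid, cell)) end)
    |> Enum.map(&elem(&1, 0))
    |> new()
  end

  defp neighbors({r, c}) do
    for dr <- -1..1, dc <- -1..1, {dr, dc} != {0, 0}, do: {r + dr, c + dc}
  end
end
```

Every function here is pure: grid in, grid out. That’s what makes the core testable without a browser or a socket.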

“We would expect this,” Bruce said, “because this is literally a functional core. And it turns out that the large language models are very good at core code.”

That’s true. The trouble starts when you leave the core.


What the LiveView Revealed

The GameLive module is where things get messy.

Start with formatting. The generated code would fail mix format immediately. A small thing, but it signals that the code was written to run, not to fit into an Elixir codebase.

The mount section mixed concerns with no clear organization: UI assignments, grid state, timer behavior, and generation details all tangled together. The module was also missing @impl Phoenix.LiveView annotations on its callbacks, which matters because consistency is what lets multiple developers and multiple agents collaborate safely.

Then came the real cost center: a 277-line render function with no meaningful decomposition. The game view, the forms, and the controls were all candidates for function components. Instead, the generated code folded everything into one surface. That is not a style nit. It directly affects the next change request. Small UI adjustments now require scrolling through a massive template and hoping you don’t regress unrelated behavior.
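One possible decomposition looks like the sketch below. The component and assign names are illustrative, not taken from the generated code; only one component body is shown:

```elixir
# A skinny render that delegates each region of the page to a function component.
def render(assigns) do
  ~H"""
  <.game_grid alive_cells={@alive_cells} grid_size={@grid_size} />
  <.controls running={@running} speed={@speed} />
  <.pattern_picker pattern={@pattern} />
  """
end

attr :running, :boolean, required: true
attr :speed, :integer, required: true

# One extracted component; game_grid and pattern_picker would follow the same shape.
defp controls(assigns) do
  ~H"""
  <div class="controls">
    <button phx-click="toggle_running">
      <%= if @running, do: "Pause", else: "Start" %>
    </button>
    <span><%= @speed %>ms per generation</span>
  </div>
  """
end
```

Now a change to the speed controls touches one small function, and attr declarations document what each component needs.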

Event handlers had the same problem: too much logic inline. One private helper (reschedule_if_running) already pointed in the right direction. The rest of the handlers didn’t follow.

The Practical Fix: Skinny Reducers

The temptation when you see these problems is to fix them with bigger prompts. But there’s a pattern that stops the rot immediately: skinny reducers.

Instead of cramming logic directly into event handlers, extract private functions that take a socket and return a socket.

# Before: fat handler
def handle_event("toggle", %{"x" => x, "y" => y}, socket) do
  grid = socket.assigns.grid
  new_grid = ConwayGame.Game.toggle_cell(grid, {String.to_integer(x), String.to_integer(y)})
  {:noreply, assign(socket, grid: new_grid)}
end

# After: skinny handler + reducer
def handle_event("toggle", params, socket) do
  {:noreply, toggle_cell(socket, params)}
end

defp toggle_cell(socket, %{"x" => x, "y" => y}) do
  new_grid = ConwayGame.Game.toggle_cell(
    socket.assigns.grid,
    {String.to_integer(x), String.to_integer(y)}
  )
  assign(socket, grid: new_grid)
end

The same principle applies to mount. Instead of one flat list of assigns, split by concern:

def mount(_params, _session, socket) do
  {:ok,
   socket
   |> assign_grid_state()
   |> assign_timer_state()
   |> assign_ui_state()}
end

defp assign_grid_state(socket),
  do: assign(socket, alive_cells: MapSet.new(), generation: 0, grid_size: 30)

defp assign_timer_state(socket),
  do: assign(socket, running: false, timer_ref: nil, speed: 200)

defp assign_ui_state(socket),
  do: assign(socket, pattern: :blinker)

Once the seams are visible in mount, the same pattern applies to the render function and to event handlers.

What This Changes

If you’re fixing these problems by writing longer prompts each session, you’re not solving the real issue:

“If my prompts become as long as my programs, what have I really gained?”

– Bruce Tate

The answer is to encode conventions once at the project level, not repeat them in every session. Formatter config, @impl requirements, LiveView structure rules – these belong in a .claude/conventions.md file at your project root. The next time the model opens your codebase, those aren’t preferences it has to guess. They’re facts it can read.
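For this project, a hypothetical .claude/conventions.md might start with entries like these, each one drawn from the problems above:

```markdown
# Project Conventions

- Run `mix format` on every file you touch; code that fails the formatter is not done.
- Annotate every LiveView and component callback with `@impl true`.
- Event handlers are skinny: delegate to private reducers that take a socket
  and return a socket.
- Group `mount/3` assigns by concern: grid state, timer state, UI state.
- `render/1` delegates to function components; no single template balloons
  into hundreds of lines.
```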

Vibing gives you working code in minutes. But working and maintainable are not the same thing. If you can’t explain your architecture to the AI, it can’t build one for you.

In the next post, we’ll look at how to build a layered prompt architecture that encodes your preferences once: project-wide rules, specialized roles, and repeatable workflows. Defined once, available everywhere.

If you want to see how we teach these workflows inside a real Elixir project, the Groxio AI course covers this in depth at grox.io/courses.


🤖 Learn Structured Oversight, Not Autopilot Acceptance

This comes from Bruce's AI Agents course — the anti-vibe-coding curriculum. Learn when to let AI build fast and when to say no, with the Ask → Plan → Agent framework and checkpoints that keep progress without losing architecture decisions or becoming a crutch. Available via monthly subscription — try it for one month.

– Paulo & Bruce

Bruce Tate
System architecture expert and author of 10+ Elixir books.
Paulo Valim
Full-stack Elixir developer and educator teaching modern Elixir and AI-assisted development.