Alignment before autonomy
The more autonomy you give an AI agent, the more alignment you need upfront. Without shared understanding, the agent fills every gap with training-data defaults — and builds the wrong thing confidently.
I recently watched Matt Pocock’s workshop on AI coding workflows and his companion talk “Software Fundamentals Matter More Than Ever”. Both crystallized something I’d been experiencing but hadn’t articulated well: the more autonomy you grant an AI agent, the more critical upfront alignment becomes. Skip it, and the agent doesn’t ask — it decides. And it decides based on whatever the mainstream representation in its training data suggests.
The prevailing narrative is that AI changes the rules. That you can write specs and let agents do the rest. That fundamentals matter less when the machine handles implementation. I think the opposite is true. Bad code is more expensive now than it has ever been, because AI amplifies entropy. A messy codebase doesn’t just slow you down — it degrades every future agent interaction. The model makes worse decisions in a tangled context. The feedback loops get longer. The drift compounds.
The root failure mode
The first thing that goes wrong in agentic coding isn’t bad architecture or missing tests. It’s misalignment. You have a design concept in your head — an invisible, ephemeral theory of what you’re building. The agent doesn’t share it. When you hand it autonomy without externalizing that concept, it fills every gap with defaults. Not bad defaults, necessarily — but its defaults, not yours. The constraints and trade-offs that matter to your project get silently overridden by whatever pattern the training data favors.
I start almost every workflow now with a “Grill Me” session — an approach where the AI relentlessly questions me about every aspect of what I’m building. Not to be adversarial, but to force the design out into the open. What are the constraints? What trade-offs am I making? What have I already decided? The session unpacks intent from both sides. I surface decisions I hadn’t consciously made. The AI surfaces approaches I wouldn’t have considered. By the time we’re done, the shared understanding exists as a transcript that can feed directly into a spec.
This isn’t just for me. The AI also has a design to unpack. Left unprompted, it carries implicit assumptions about how things should be built. The grill session forces those into the open too — I can accept them, reject them, or redirect them. That bidirectional unpacking is what makes the technique effective.
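For concreteness, a kickoff prompt for such a session might look something like this. The wording is illustrative, not a canonical template:

```text
Before writing any code, grill me about this feature. Ask one
question at a time about constraints, trade-offs, and decisions
I've already made. Challenge assumptions -- mine and yours. State
any defaults you'd otherwise apply silently so I can accept or
reject them. Stop when you could write the spec yourself.
```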
What alignment enables
Once you have shared understanding, the downstream techniques actually work.
Vertical slicing is one I’ve tested directly. When my CLAUDE.md includes explicit instructions about building thin vertical slices — touching the database, API, and UI in one pass — the agent produces something testable at every step. Without those instructions, it defaults to horizontal layers. It builds an elaborate schema, constructs the backend logic, and by the time we reach the UI, the schema turns out to be too complex for what the interface actually needs. Testing gets pushed too far down the road.
The fix is to build a miniature version of the application that is functional at every increment: small vertical slices that connect all the layers immediately. The payoff is that you catch integration problems early, not after three layers of code exist with no user-facing validation.
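As a sketch of the kind of CLAUDE.md instruction I mean (the wording here is illustrative, not copied from a real file):

```markdown
## Workflow

- Build in thin vertical slices: every increment must touch schema,
  API, and UI, and end in something I can run and click through.
- Never build a layer ahead of the slice that needs it: no schema
  fields, endpoints, or components without a current consumer.
- After each slice, stop and wait for review before starting the next.
```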
Test-driven development works the same way. TDD gives the agent a tight feedback loop — write a failing test, make it pass, refactor. Without alignment on what you’re testing and why, the agent writes tests that validate its own assumptions rather than your requirements. With alignment, each test becomes a checkpoint that confirms you’re still building the right thing.
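Here is the shape of that checkpoint in TypeScript, as a minimal sketch. It assumes a Vitest setup and a hypothetical formatPrice helper; the names and requirements are invented for illustration:

```typescript
import { describe, it, expect } from "vitest";

// Hypothetical helper under test. In a real session this starts out
// unimplemented, the tests below fail, and the agent iterates to green.
function formatPrice(cents: number, currency: string = "USD"): string {
  return new Intl.NumberFormat("en-US", {
    style: "currency",
    currency,
  }).format(cents / 100);
}

describe("formatPrice", () => {
  // Each assertion encodes a requirement I stated up front,
  // not an assumption the agent inferred on its own.
  it("formats whole-dollar-and-cents amounts", () => {
    expect(formatPrice(1999)).toBe("$19.99");
  });

  it("keeps two decimal places for round amounts", () => {
    expect(formatPrice(500)).toBe("$5.00");
  });
});
```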
The pattern is consistent: alignment first, then structure. Vertical slicing and TDD are structure. They’re how you maintain alignment over time, across many small steps. But they can’t create alignment — they can only preserve it.
The scaling law
The cost of misalignment scales with the autonomy you grant. When you’re pair-programming with an agent — reviewing every output, course-correcting in real time — misalignment costs you a few wasted minutes. You catch it and redirect. But when you hand the agent a task list and walk away (what Matt calls “AFK loops”), a misalignment in the initial understanding propagates through every subsequent decision. The agent builds confidently in the wrong direction for hours.
The more autonomy you grant, the more alignment you need upfront. That’s the discipline. Not prompting better, not writing longer specs — making sure the shared understanding actually exists before the agent starts running.