The prompt stack
Prompting is the foundation, not the ceiling. Four cumulative disciplines — prompt craft, context engineering, intent engineering, and specification engineering — define how humans communicate with autonomous AI agents.
AI agents don’t wait for you to hit enter after every step. They run for minutes, hours, sometimes days — chaining tool calls, reading files, making decisions. The prompt-and-iterate loop that defined 2024 doesn’t hold when the agent is 14 steps deep and you’re not watching.
That shift created a problem: how do you control something that operates autonomously? The answer isn’t one skill but four, stacked on top of each other. Nate B Jones makes the point well: the word “prompting” now hides four completely different skill sets, and most people practice only one.
Prompt craft
The original skill. You sit at a chat window, type an instruction, evaluate the output, and iterate. Good prompt craft means clear instructions, relevant examples, explicit output format, and resolving ambiguity upfront so the model doesn’t guess.
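Those four elements can be made mechanical. A minimal sketch, assuming nothing about any particular SDK (the function and its parameters are illustrative):

```python
def build_prompt(task: str, examples: list[tuple[str, str]], output_format: str) -> str:
    """Assemble a prompt with explicit instructions, examples, and output format,
    so the model doesn't have to guess at any of the three."""
    parts = [f"Task: {task}", ""]
    if examples:
        parts.append("Examples:")
        for inp, out in examples:
            parts.append(f"  Input: {inp}")
            parts.append(f"  Output: {out}")
        parts.append("")
    parts.append(f"Respond only in this format: {output_format}")
    return "\n".join(parts)

prompt = build_prompt(
    task="Classify the sentiment of the review as positive or negative.",
    examples=[("Great battery life.", "positive"), ("Broke in a week.", "negative")],
    output_format="a single word, lowercase",
)
```

The point isn’t the helper; it’s that each argument forces you to resolve an ambiguity before the model sees the prompt.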
This was the differentiator in 2024. It’s table stakes now — the equivalent of touch typing. You need it, but it won’t set you apart. The real limitation: prompt craft assumes you’re there to course-correct. Autonomous agents break that assumption.
Context engineering
Where prompt craft focuses on instructions, context engineering focuses on information. It’s the discipline of curating what the agent sees at each step — system prompts, tool definitions, retrieved documents, memory, project files.
The goal is to make the task “plausibly solvable” by the model without it needing to ask follow-up questions or guess. In practice, this means building infrastructure — CLAUDE.md files, RAG pipelines, .ai/ context directories — so agents start every session with the right tokens loaded.
The hard part isn’t adding information. It’s keeping bad tokens out. LLMs degrade when overwhelmed with irrelevant context, so curation matters as much as retrieval.
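A curation pass might look like this sketch, where a naive keyword-overlap score stands in for a real embedding-based retriever (all names are hypothetical): rank candidate snippets by relevance, drop anything irrelevant outright, and keep only what fits the token budget.

```python
def curate_context(task: str, snippets: list[str], budget_tokens: int) -> list[str]:
    """Keep only relevant snippets that fit the budget; bad tokens never load."""
    task_words = set(task.lower().split())

    def score(snippet: str) -> float:
        words = set(snippet.lower().split())
        return len(task_words & words) / max(len(words), 1)

    chosen, used = [], 0
    for snippet in sorted(snippets, key=score, reverse=True):
        if score(snippet) == 0:
            continue  # irrelevant context is excluded, not just deprioritized
        cost = len(snippet.split())  # crude token estimate: whitespace words
        if used + cost <= budget_tokens:
            chosen.append(snippet)
            used += cost
    return chosen

context = curate_context(
    "fix the login timeout bug",
    ["The login handler times out after 30s.", "Office holiday schedule for Q3."],
    budget_tokens=50,
)
```

The budget cap and the zero-score filter are the two halves of the discipline: retrieval gets candidates in, curation keeps the wrong ones out.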
Intent engineering
Context handles what the agent knows. Intent handles what the agent wants.
This is where organizations fail at scale. Klarna’s AI assistant handled 2.3 million conversations in its first month, doing the work of 700 agents. Resolution time dropped from 11 minutes to under 2. Then they started hiring humans back — customer satisfaction was declining. The agent optimized for speed because nobody encoded that satisfaction mattered more.
Intent engineering means writing down goals, values, trade-off hierarchies, and decision boundaries. What can the agent decide on its own? What gets escalated? A bad prompt wastes your morning. Bad intent engineering compromises a team.
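Encoded intent can be as plain as a data structure the agent consults before acting. A sketch, with illustrative field names not drawn from any real framework:

```python
from dataclasses import dataclass

@dataclass
class IntentPolicy:
    # Ordered: earlier goals win when they conflict. This is the line
    # Klarna's deployment was missing.
    goal_priority: list[str]
    autonomous_actions: set[str]   # the agent may decide these alone
    escalate_actions: set[str]     # these always go to a human

    def may_act(self, action: str) -> bool:
        """Escalation boundaries override autonomy grants."""
        if action in self.escalate_actions:
            return False
        return action in self.autonomous_actions

policy = IntentPolicy(
    goal_priority=["customer_satisfaction", "resolution_speed", "cost"],
    autonomous_actions={"answer_faq", "issue_small_refund"},
    escalate_actions={"close_account"},
)
```

Anything not explicitly granted is denied, which is the safer default when the agent is 14 steps deep and nobody is watching.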
Specification engineering
The highest layer. Your organization’s documents — strategy, OKRs, acceptance criteria, runbooks, decision records — become machine-readable infrastructure that agents execute against over days or weeks without checking in.
The skill shifts from verbal fluency to completeness of thought. Can you decompose a project into independently executable parts? Can you anticipate edge cases and write acceptance criteria that an agent can verify without asking clarifying questions? Specification engineering rewards people who think in complete problem statements.
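“Acceptance criteria an agent can verify” means criteria expressed as executable checks, not prose. A minimal sketch with hypothetical spec contents:

```python
from typing import Callable

def verify(spec: dict[str, Callable[[str], bool]], artifact: str) -> dict[str, bool]:
    """Run every acceptance criterion against the produced artifact."""
    return {name: check(artifact) for name, check in spec.items()}

# Each criterion is a predicate: the agent can tell when it is done
# without asking a clarifying question.
release_notes_spec = {
    "mentions version": lambda text: "v2.1" in text,
    "under 200 words": lambda text: len(text.split()) < 200,
    "no TODO markers": lambda text: "TODO" not in text,
}

report = verify(release_notes_spec, "v2.1 fixes the login timeout and ships dark mode.")
```

If a criterion can’t be written as a predicate, that is a signal the spec is incomplete, which is exactly the completeness of thought the layer rewards.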
If you maintain a CLAUDE.md in your repos, you’re already doing this at the project level. The organizational version applies the same discipline to business processes.
The stack is cumulative
Each layer depends on the ones below. You can’t write specifications without understanding how to curate context. You can’t encode intent without knowing how to structure a prompt. Skip a layer and the layers above it break.
This progression forces clarity. Stating a problem completely, defining what “done” means, making constraints explicit — these are prerequisites for delegating to an AI agent. They also happen to be what effective delegation to humans requires. The difference: a human asks a clarifying question when the spec is ambiguous. An agent guesses.