Building a system for AI-assisted engineering
How a .ai/ directory and project constitution turned ad-hoc AI coding sessions into a repeatable engineering workflow.
Every session started the same way — explaining the tech stack, the API design, the coding conventions. The AI had no memory of the last conversation, and the repetition was a drag on productivity. The problem was never capability. It was context.
This pushed me away from conversational “vibe coding” — the term Andrej Karpathy coined for unstructured AI-assisted development — and toward a repeatable system. The goal: a single source of truth that onboards the AI to any project instantly.
Vibe coding works fine for brainstorming, exploring ideas, or writing throwaway scripts. But for long-lived systems, you need something that keeps the codebase coherent across sessions. The emerging discipline of context engineering — systematically managing what goes into an AI’s context window — is how you get there.
The .ai/ directory
The core of the system is a .ai/ directory at the project root. It acts as the project’s external brain — a set of markdown files that define its DNA:
```
.ai/
├── PRD.md            # Product requirements and personas
├── SCHEMA.md         # Data structures and API contracts
├── DESIGN_SYSTEM.md  # UI tokens and component patterns
├── VIEWS.md          # UI layouts and route structure
├── ROADMAP.md        # Prioritized task backlog
└── DECISIONS.md      # Architecture Decision Records
```
Each file serves a specific purpose. SCHEMA.md defines API contracts so the AI doesn’t invent incompatible data structures. DESIGN_SYSTEM.md enforces visual consistency. DECISIONS.md records the why behind architectural choices — borrowing from Michael Nygard’s ADR pattern — preventing the AI from relitigating settled debates.
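As a sketch of what DECISIONS.md can hold, here is one entry in Nygard's ADR format. The decision itself is hypothetical, invented for illustration:

```markdown
## ADR-003: Use PostgreSQL for primary storage

**Status:** Accepted
**Context:** The app needs relational queries and transactional writes.
**Decision:** Use PostgreSQL rather than a document store.
**Consequences:** Schema migrations are required, and SCHEMA.md must be
kept in sync with every migration.
```

With entries like this in context, the AI can cite the existing decision instead of re-proposing a document store three sessions later.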
Keep context files focused and modular — one concern per file. Overloading a single file bloats the context window and dilutes the signal.
The project constitution ties it together. A CLAUDE.md at the root defines the tech stack, coding standards, and behavioral rules. Everything the AI needs before writing a single line of code. Anthropic’s Claude Code loads this file automatically into context, and other tools support similar patterns (.cursorrules, .windsurfrules).
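To make that concrete, a minimal CLAUDE.md might look like the following. The stack and rules here are illustrative placeholders, not the contents of the original system:

```markdown
# Project constitution

## Tech stack
- Next.js, TypeScript, Tailwind CSS (example stack — substitute your own)

## Rules
- Read .ai/SCHEMA.md before writing or changing any API code.
- Never add a dependency without recording the rationale in .ai/DECISIONS.md.
- Follow the component patterns in .ai/DESIGN_SYSTEM.md; do not invent new ones.
- Run lint, build, and tests before declaring a task done.
```

Because the tool loads this file at the start of every session, these constraints apply before the first line of code is written.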
Structuring work with skills and commands
With context in place, the next step was defining the workflow. It breaks into three concepts:
- Skills — reusable modules encoding expertise (how to approach UI design, structure a REST API)
- Rules — the project constitution defining constraints and conventions
- Commands — executable workflows combining skills and rules for specific actions
The commands cover the full development lifecycle:
| Command | Purpose |
|---|---|
| `/x-feature` | Plan new features from a product perspective |
| `/x-ask-architect` | Discuss architecture with diagrams and trade-offs |
| `/x-plan` | Read project context and identify the next task |
| `/x-build` | Implement code according to established patterns |
| `/x-verify` | Run linters, builds, and tests to validate work |
| `/x-docs` | Update the roadmap and record decisions |
| `/x-discover` | Reverse-engineer an existing codebase into `.ai/` structure |
The key distinction is between planning and execution. Planning commands (/x-feature, /x-ask-architect) are conversational — they simulate discussions with domain experts, complete with diagrams and trade-off analysis. Execution commands (/x-plan, /x-build, /x-verify, /x-docs) are deterministic — they take context and planning outputs, then perform concrete actions.
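As a sketch of what an execution command looks like on disk: Claude Code reads custom slash commands from markdown files under `.claude/commands/`, so `/x-verify` could be defined roughly like this (the file contents below are hypothetical, not the original command):

```markdown
<!-- .claude/commands/x-verify.md -->
Run the project's quality gates in order and report any failures:

1. Lint:  `npm run lint`
2. Build: `npm run build`
3. Test:  `npm run test`

If any step fails, fix the issue and re-run from step 1.
Do not proceed to /x-docs until all three steps pass.
```

The command is deterministic by design: same context in, same checklist out, which is exactly what distinguishes it from the conversational planning commands.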
This structured cycle ensures no steps get skipped. Plan, build, verify, document — then iterate.
Why this works
The system addresses the AI’s primary limitation: its context window is its only working memory. By externalizing project knowledge into files, you get:
What you gain
- Persistent memory — context survives across sessions
- Shared understanding — the `.ai/` directory is a single source of truth
- Guardrails — schemas and design systems prevent architectural drift
What to watch for
- Stale context files that haven’t been updated as the project evolves
- Treating context as a one-time setup rather than an iterative process
- Over-stuffing files with detail that exceeds the context window
The system is also self-improving. When you find a gap in a workflow, you update the corresponding skill or command. It evolves with the project, not apart from it. Version control your .ai/ directory alongside your code — it’s as much a part of the project as your source files.
Build your own
This is one approach, not the only one. Systems like GitHub’s spec-kit take a similar idea in a different direction. The specifics matter less than the principle: externalize your project’s knowledge into structured files that any AI can consume.
My version is on GitHub (xcke/x-ai-system) if you want a starting point. But tools like Claude Code's skill creator mean you can design a system tailored to how you work: your stack, your conventions, your workflow. Start there. The best system is the one you build for your own projects.