Why Your AGENTS.md Is Actually Hurting You
Research shows that stuffing your AI instruction files with rules increases cost and barely improves results. Here's a better approach.
You start a new project and the first thing you do is open the instruction file to fill it with every rule you can think of: coding standards, architecture decisions, file structure, testing preferences. It feels like you're setting the AI up for success, but you're actually making it slower, more expensive, and no better at its job.
More Instructions, Worse Results
The common advice is to pack your instruction file with as much context as possible. CLAUDE.md, .cursorrules, AGENTS.md, whatever your tool calls it. Project structure, naming conventions, preferred libraries, commit message format. The assumption is simple: more context means better output.
Research says otherwise.
In February 2026, researchers from ETH Zurich published "Evaluating AGENTS.md: Are Repository-Level Context Files Helpful for Coding Agents?", testing four AI coding agents across hundreds of real-world tasks. Here's what they found:
- LLM-generated instruction files reduced success rates by ~3% and increased costs by over 20%.
- Human-written files barely helped: a ~4% improvement in success rate on average, while costs still went up ~19%.
- Instruction files only made a difference when repositories had zero other documentation.
That last point matters. If your codebase has a README, doc comments, and a reasonable structure, the instruction file is noise. The AI already reads your code, your config files, your folder names. It picks up on most of what you're spelling out.
Here's the other problem. Every token in that file loads into every conversation. You're fixing a one-line CSS bug, but your 200-line instruction file about database migrations and API conventions is sitting in context, burning money and competing for attention.
Make Your Codebase Speak for Itself
Before writing a single rule, ask: can I fix the codebase instead?
AI tools read your code, your linter config, your folder structure. When those are consistent, the AI follows conventions without being told. A well-configured ESLint setup teaches more than a paragraph about formatting preferences. Clear folder names say more than a file tree pasted into an instruction file.
The AI keeps using `var` instead of `const`? Don't write an instruction. Add an ESLint rule. It puts files in the wrong place? Your folder structure might be unclear.
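For instance, the `var` problem disappears with two lines of linter config instead of a paragraph of prose. A minimal sketch using ESLint's flat config format and two core rules:

```javascript
// eslint.config.js — minimal sketch, flat config format (ESLint 9+)
export default [
  {
    files: ["**/*.{js,ts,tsx}"],
    rules: {
      "no-var": "error",       // forbid `var` entirely
      "prefer-const": "error", // require `const` when a binding is never reassigned
    },
  },
];
```

Once this is in place, the AI sees the violation the moment the linter runs, with no instruction file involved.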
Fix the root cause. The instruction file is the last resort, not the first.
Use Hooks to Enforce What Matters
Most AI coding tools support hooks: scripts that run automatically before or after specific actions. Think of them as guardrails that catch problems in real time instead of hoping the AI remembers a rule you wrote.
A linting hook is the obvious example. Instead of writing "always follow our ESLint config" in your instruction file, set up a hook that runs the linter after every code change. The AI gets instant feedback when it breaks a rule, and it fixes the issue on the spot. No instruction needed.
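The wiring is tool-specific. As one concrete example, Claude Code reads hooks from `.claude/settings.json` and passes details about the edit (including the file path) to the command as JSON on stdin; other tools use different formats, so treat this as a sketch:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "npx eslint --fix \"$(jq -r '.tool_input.file_path')\""
          }
        ]
      }
    ]
  }
}
```

Here `jq` extracts the edited file's path from the JSON payload on stdin, and the linter runs on just that file after every edit or write.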
But hooks go further than linting. You can run type checking after edits so the AI never leaves behind a TypeScript error. You can validate that test files exist for new modules, enforcing coverage standards without spelling them out. You can check that imports follow your dependency boundaries, preventing the AI from pulling in packages from the wrong layer. You can even run security scanners to catch vulnerabilities before they make it into a commit.
The pattern is the same every time: instead of describing the rule in prose and hoping the AI follows it, write a script that checks the rule and let the tool run it automatically. Hooks turn "please remember this" into "this will fail if you don't." That's a much stronger guarantee than any instruction file can offer.
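As a sketch of that pattern, here's a hook-style guard script in plain sh. The `@/lib/legacy` path and the `src/` default are illustrative, not real conventions; most tools treat a non-zero exit code from a hook as feedback the AI must address before moving on.

```shell
# Sketch of a hook-style guard: fail when a forbidden import appears.
# The "@/lib/legacy" path and the src/ default are illustrative.

check_forbidden_imports() {
  # Search the given directory (default: src) for the banned import.
  # grep -r: recurse into the tree; -q: quiet, we only need the exit status.
  if grep -rq "@/lib/legacy" "${1:-src}" 2>/dev/null; then
    echo "blocked: import from @/lib/legacy detected" >&2
    return 2   # non-zero status tells the agent to fix it now
  fi
  return 0
}
```

Wired up as a post-edit hook, this runs after every change the AI makes, so the rule never has to be written in prose at all.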
The Self-Learning Loop
Let your instruction file grow from real problems instead of front-loading it with everything you can imagine.
1. Start Empty
No rules. Let the AI work with what the codebase gives it. Most of the time, it figures things out on its own.
2. Something Goes Wrong? Fix the Codebase First
The AI formats code differently than you expect? Check your linter config. Tests end up in the wrong folder? Look at whether your test structure is actually consistent. More often than not, the AI is reflecting an inconsistency that already exists in your project.
3. Still Happening? Now Add a Rule
The codebase is consistent and the AI still gets it wrong. That's a genuine lesson learned, and it belongs in the instruction file. Don't write it from scratch though. Ask the AI to draft a concise rule based on the mistake it just made. It has the context and can articulate the fix clearly.
4. Keep It Clean
Review the file every time you add something. Look for duplicates, contradictions, and rules the codebase now handles on its own. Build a skill to automate this review if you want (more on skills below). The file should get better over time, not just longer.
That's the self-learning loop. Problems surface naturally. You fix the root cause first. Only the lessons that can't live in the codebase itself earn a spot in the instruction file.
Not Everything Belongs in the Root File
When you do add instructions, think about where they belong. Most AI coding tools support a hierarchy, and each level has a different cost profile.
The root instruction file applies everywhere and loads on every interaction. Keep it to validation commands and a handful of hard-won lessons. Nothing else.
Subfolder instruction files scope rules to part of your codebase. If you have a frontend folder and an API folder, they probably follow different conventions. Put each set of rules where it belongs. There's no reason to load React component guidelines when the AI is writing a database query.
Skills are on-demand instruction sets. They only load when you invoke them. Use them for specific workflows like creating a new API endpoint, writing a database migration, or drafting a blog post. When they're not invoked, they cost nothing. Most major AI coding tools support them in some form.
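In Claude Code, for example, a skill is a folder under `.claude/skills/` containing a `SKILL.md` with a short frontmatter. The skill name and steps below are hypothetical, sketching the instruction-file review from the loop above:

```markdown
---
name: review-instructions
description: Audit the instruction file for duplicates, contradictions,
  and rules the codebase now enforces on its own
---

1. Read the root instruction file.
2. For each rule, check whether a linter rule, type check, or hook
   already enforces it. If so, propose deleting it.
3. Flag any pair of rules that duplicate or contradict each other.
4. Output a cleaned-up draft for the user to review.
```

Because the frontmatter description is all that loads by default, the full instructions cost nothing until the skill is actually invoked.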
The root file is always loaded. Subfolder files load only when the AI works in that part of the tree. Skills load on demand. Push instructions as far down that hierarchy as they can go.
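Laid out on disk, the hierarchy might look like this (the filenames follow the AGENTS.md and Claude Code skill conventions for illustration; the folder names are hypothetical):

```
.
├── AGENTS.md            # always loaded: validation commands + hard-won lessons
├── frontend/
│   └── AGENTS.md        # loaded only for work inside frontend/
├── api/
│   └── AGENTS.md        # loaded only for work inside api/
└── .claude/
    └── skills/
        └── new-endpoint/
            └── SKILL.md # loaded only when the skill is invoked
```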
What a Lean Instruction File Looks Like
Here's what most instruction files look like:

```markdown
## Project Overview
This is a Next.js 15 app with TypeScript, Tailwind CSS, and Prisma.
We use the app router. Pages go in app/. Components go in app/components/.
API routes go in app/api/.

## Code Style
- Use TypeScript strict mode
- Prefer const over let, never use var
- Use arrow functions for components
- Use named exports, not default exports
- Use Tailwind for styling, no CSS modules
- Follow ESLint and Prettier configs

## Testing
- Use Vitest for unit tests
- Use Playwright for e2e tests
- Run npm run test before committing
- Test files go next to the files they test

## Git
- Use conventional commits (feat:, fix:, chore:)
- Keep commits small and focused
- Always write descriptive commit messages

## API Conventions
- All endpoints return JSON
- Use proper HTTP status codes
- Include error messages in response body
- Use zod for request validation
```

Most of this already lives in the codebase: the ESLint config, tsconfig, folder structure. The rest is obvious enough that the AI would do it without being told. All those tokens load on every interaction for almost no benefit.
Here's the same project after applying the self-learning loop:

```markdown
## Validation
Run `npm run lint && npm run typecheck && npm test` before considering
any task complete.

## Lessons Learned
- Never use `fs.existsSync` in API routes. Use `fs.access` with
  try/catch. The sync version blocks the event loop and caused
  a production incident.
- When adding a new database migration, always check if the previous
  migration has been deployed first. We run migrations manually
  in staging.
- Do not import from `@/lib/legacy`. Those modules are being phased
  out. Use `@/lib/core` instead.
- All API responses must include a `requestId` field. Our logging
  pipeline depends on it for tracing.
```

Every line earned its place by solving a real problem. The AI could not have figured out any of these from the code alone. That's the bar.
Your Instruction File Is a Journal, Not a Manual
Stop treating it like a manual that needs to cover everything. It's a journal. Hard-won lessons go here so you don't learn them twice.
If you haven't hit the problem yet, the rule doesn't belong there. If the codebase can teach it, let the codebase teach it. If it only matters sometimes, make it a skill.
Start empty. Let problems guide you. Keep it lean. Your AI will be faster, cheaper, and just as effective. The research backs it up.