Using Linters to Direct Agents

Agents write the code; linters write the law. As we move from “developers write code with AI assistants” to “developers orchestrate agents that build software”, the old guardrails—code reviews, conventions, tribal memory—aren’t enough. Agents need crisp, machine‑verifiable instructions that represent organizational standards. They need lint rules, not suggestions.
Linters don’t just catch style nits. They encode your architecture, boundaries, and ergonomics directly into the code generation loop—exactly where LLM agents operate. When you do this, Agent‑Native Development (AND) switches from “back‑and‑forth with a robot intern” to “fast, deterministic collaboration with a compiler‑like partner.” Agents learn to self‑heal by obeying lint rules. You get consistent code, fewer human iterations, and a codebase that scales with your team and your agents.
New categories of lint rules for agents
Think beyond style. The most effective agent-focused linter setups cover these categories (a configuration sketch follows the list):
- Grep-ability (Consistent formatting)
  - Named exports over default; consistent error types; explicit DTOs
- Glob-ability (Code organization)
  - Make file structure predictable so agents can place, find, and refactor code deterministically
- Architectural Boundaries
  - No cross‑layer imports; domain-specific allowlists/denylists. Enforce module boundaries (e.g., feature folders can’t reach “app” internals)
- Security and Privacy
  - Forbid plain‑text secrets; validate input schemas; no eval/new Function
- Testability and Coverage
  - Require colocated tests; forbid network calls in unit tests; enforce async patterns
- Observability
  - Require structured logging; attach error metadata; standardized telemetry naming
- Documentation Signals
  - Require module docstrings or TSDoc on public APIs; ADR links for rule exceptions
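To make this concrete, here is a minimal ESLint flat-config sketch covering two of these categories, security and architectural boundaries; the `@app/internal/*` path is a hypothetical alias, not a prescribed layout:

```ts
// eslint.config.mjs: a sketch; severities and the "@app/internal/*" alias are illustrative
import js from "@eslint/js";

export default [
  js.configs.recommended,
  {
    rules: {
      // Security and Privacy: forbid eval/new Function
      "no-eval": "error",
      "no-new-func": "error",
      // Architectural Boundaries: feature code can't reach "app" internals
      "no-restricted-imports": [
        "error",
        {
          patterns: [
            {
              group: ["@app/internal/*"],
              message: "Import the public module surface, not app internals.",
            },
          ],
        },
      ],
    },
  },
];
```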
How linters empower Agent‑Native Development

Linters turn human intent into machine‑enforced guarantees so agents can plan, generate, and self‑correct without waiting on humans. The flow is simple: humans define standards in Agents.md (the “why” and examples); those guidelines are encoded as lint rules with clear severity, autofix, and waiver policy; the same rules run on save, pre‑commit, CI, PR bots, and inside agent toolchains; “lint green” becomes the Definition of Done.
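On the pre‑commit leg of that path, the wiring can be as small as a lint-staged entry; a sketch (the tool choice and glob are assumptions, not a prescription):

```ts
// lint-staged.config.mjs: a sketch of keeping rules on the hot path at commit time
export default {
  // Autofix what's fixable, and fail the commit on anything left over
  "*.{ts,tsx}": "eslint --fix --max-warnings=0",
};
```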
What linters enforce:
- Searchability: code is easy to find for agents and humans (deterministic names, surfaces, and paths).
- Glob‑ability / code structure: predictable file organization that supports safe, scripted change.
- Grep‑ability / consistent formatting: reliable text search and indexing across the repo.
- Testing discipline: presence of colocated unit tests and patterns that make tests meaningful.
Supporting pieces:
- Formatters (e.g., Prettier) provide the baseline consistency that makes grep easier.
- Agents.md guides agent behavior; linters provide precise, automatic feedback that agents use to self‑heal until clean.
Net effect: Agents generate code, get automatic feedback from linters, and self‑learn/iterate until clean. You treat “lint passing” as a proxy for “conforms to architecture and best practices.” Linting is the executable spec that ties human intent to agent output, ensuring consistent, navigable code at scale.
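One way to wire that feedback loop, sketched with ESLint's Node API (the function name and how its output is routed back to the agent are assumptions about your toolchain):

```ts
// lint-feedback.ts: apply autofixes, then report what the agent still has to fix
import { ESLint } from "eslint";

export async function lintFeedback(paths: string[]): Promise<string[]> {
  const eslint = new ESLint({ fix: true });
  const results = await eslint.lintFiles(paths);
  await ESLint.outputFixes(results); // write autofixes to disk

  // With fix enabled, the remaining messages are the violations autofix couldn't
  // handle; feed these back into the agent loop until the list is empty.
  return results.flatMap((r) =>
    r.messages.map(
      (m) => `${r.filePath}:${m.line}:${m.column} [${m.ruleId ?? "parse"}] ${m.message}`
    )
  );
}
```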
Human feedback is the new bottleneck
A decade ago, the bottleneck was typing speed and library knowledge. Today, LLMs scaffold features in minutes. The constraint is how quickly we can turn human conventions into machine‑checkable rules so agents can run without waiting on feedback. Pre‑specify your standards as lint rules and wire them into the loop—agents get automatic feedback, self‑correct, and need fewer human interventions.
Consistency is the test:
- Can the agent place files deterministically—with the right names, layers, and error semantics?
- Can it wire imports the way your org expects?
- Does it follow your error‑handling conventions?
- Will it include required wrappers (auth, tracing, other middleware)?
- Can it refactor at scale without breaking contracts?
Quality now hinges on how well your standards are codified—and how reliably agents can obey them. Linters, once “style cops,” become the authoritative, executable spec for “how we build here,” bridging human intent and agent execution.
Linters to drive mass refactor migrations
Linters are a migration engine: encode the “new way” as failing rules and the “old way” as detectable patterns with autofix, and you get a repo‑wide detector, an execution plan, and a guardrail that keeps the change done. One‑off rewrites become a continuous, agent‑native process that finds every instance, fixes it safely, and prevents regressions.
Examples:
- Upgrade React to hooks, forbidding class component patterns and enforcing functional components with hooks.
- Migrate from Moment.js to date‑fns, replacing legacy date formatting/parsing and removing the heavy dependency.
- Prepare for Node 22 by banning deprecated Node 18 syntax/APIs and requiring modern equivalents.
How to do it:
- Define rules that forbid legacy patterns and require target APIs/structures; start as warnings to tune, then promote to errors (see the config sketch after this list).
- Run the rules to surface all violations and prioritize by risk/owners.
- Use agents with autofixes/codemods to batch the changes and open PRs verified by lint/tests.
- Keep rules on the hot path (pre‑commit, CI, PR bots, agent toolchains) and iterate until false positives are gone.
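For the Moment.js example above, the detector can be a single `no-restricted-imports` entry; a sketch (the ADR pointer is illustrative):

```ts
// A sketch: surface every Moment.js usage as a violation that names the target API.
// Start at "warn" while tuning, then promote to "error".
export default [
  {
    rules: {
      "no-restricted-imports": [
        "warn",
        {
          paths: [
            { name: "moment", message: "Migrate to date-fns (see the migration ADR)." },
          ],
        },
      ],
    },
  },
];
```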
How linters play with Agents.md guidelines
Agents.md is the on‑ramp. It explains intent, patterns, and examples in human language so agents know what to aim for.
But guidance alone is brittle:
- Ambiguity: natural language has edge cases; agents “think they complied” when they didn’t.
- No guarantees: advice doesn’t fail builds; drift accumulates quietly.
- Limited reach: prose can’t verify cross‑file imports, architectural boundaries, or error semantics.
Linters turn that intent into a compiler‑like contract:
- Deep validation: AST/type‑aware checks catch structural and architectural violations (module boundaries, import policies, error taxonomy, file placement).
- On‑path enforcement: runs in local dev, pre‑commit, CI, PR bots, and inside the agent toolchain.
- Automatic feedback: precise messages and autofixes let agents self‑correct until green.
- Policy you can prove: “lint green” becomes the Definition of Done; waivers are explicit and time‑boxed.
Bottom line: Use both, but treat them differently:
- Agents.md = the “why” and the examples. It maps each guideline to a RuleID and links to rule docs/ADRs.
- Linting = the “how” and the guarantee. It encodes the rule, blocks violations, and provides machine feedback (a rule skeleton sketch follows).
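A sketch of what that contract can look like as a custom rule; the rule name, message, and docs URL are hypothetical, but the shape is standard ESLint:

```ts
// no-default-export.ts: a hypothetical rule whose docs URL links back to the Agents.md guideline/ADR
import type { Rule } from "eslint";

export const noDefaultExport: Rule.RuleModule = {
  meta: {
    type: "problem",
    docs: {
      description: "Prefer named exports so definitions stay grep-able.",
      url: "https://example.com/adr/no-default-export", // illustrative ADR link
    },
    messages: { noDefault: "Use a named export instead of a default export." },
    schema: [],
  },
  create(context) {
    return {
      // Flag every `export default ...` declaration
      ExportDefaultDeclaration(node) {
        context.report({ node, messageId: "noDefault" });
      },
    };
  },
};
```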
Code search-ability: make code easy to find, index, and refactor
Grep‑friendly code turns your repo into a reliable database for both humans and agents. It enables precise search, safe scripted refactors, and better retrieval for LLM context windows.
- Deterministic placement: Agents can predict file locations from names.
- Precise search: Named exports and absolute imports let agents and scripts locate definitions and usages without ambiguity.
- Safer refactors: Consistent filenames and exports enable codemods and large‑scale rewrites with low blast radius.
- Better retrieval: Vector and keyword search both improve when code has predictable shapes and identifiers.
If you adopt only one category, adopt this one.
Example practices for a TypeScript codebase (an enforcement sketch follows the list):
- Named exports and imports
  - Why: ripgrep can precisely locate `export const Foo` and all `import { Foo } from ...`
  - Rule: ban default exports; enforce named imports
- Absolute import paths
  - Why: tooling and agents can reason about provenance; fewer brittle `../../..` hops
  - Rule: ban relative imports across package boundaries; enforce `@app/feature/...` aliases
- Filename and file‑organization conventions
  - Why: predictable locations let agents place, find, and refactor files deterministically
  - Rules:
    - `enums` live in `enums.ts` with only exports
    - `types` live in `types.ts` and can import from `enums`
    - `index.ts` re‑exports a stable module surface
    - Unit tests colocated as `.test.ts`
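The first two practices are enforceable off the shelf; a sketch using `eslint-plugin-import` plus core `no-restricted-imports` (the `@app/...` alias and the `../*` approximation of package boundaries are assumptions):

```ts
// A sketch: named exports only, and no climbing out of a module with relative paths
import importPlugin from "eslint-plugin-import";

export default [
  {
    plugins: { import: importPlugin },
    rules: {
      "import/no-default-export": "error", // named exports keep definitions grep-able
      "no-restricted-imports": [
        "error",
        { patterns: [{ group: ["../*"], message: "Use an absolute @app/... alias instead." }] },
      ],
    },
  },
];
```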
These constraints make the codebase scriptable. Agents can combine ripgrep‑style queries with deterministic write locations to execute large, safe refactors—exactly how senior engineers batch‑edit at scale.
Code organization example:
```ts
// src/users/enums.ts
export enum UserRole {
  Admin = 'admin',
  Manager = 'manager',
  Member = 'member',
}

// src/users/types.ts
import { UserRole } from '@/users/enums';

export type User = {
  id: string;
  role: UserRole;
  email: string;
};

// src/users/helper.ts
import { UserRole } from '@/users/enums';
import { User } from '@/users/types';

export function canManage(user: User): boolean {
  return user.role === UserRole.Admin || user.role === UserRole.Manager;
}
```
Because it uses named exports, absolute imports, and deterministic file organization (enums.ts, types.ts, helper.ts), tools like ripgrep—and agents—can precisely find definitions/usages and perform safe, large‑scale refactors.
The lint development cycle

A tight, repeatable cycle that turns human insight into machine‑enforced policy—and uses agents to erase drift at scale.
- Observe drift: Spot a recurring anti‑pattern in review, logs, or metrics.
- Codify the rule: Prompt an LLM to draft an ESLint rule (severity, autofix, tests, docs) that encodes the standard (see the sketch after this list).
- Surface violations: Run the rule across the repo to list precise locations and counts; triage by risk.
- Remediate at scale: Spawn parallel agents to apply autofixes/codemods, batch PRs, and verify with tests and lint.
- Prevent regressions: Put the rule on the hot path (pre‑commit, CI, PR bots, agent toolchains). Time‑box waivers and track compliance.
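For step two, ESLint's `RuleTester` makes the drafted rule executable and testable in one pass; a sketch reusing the hypothetical `noDefaultExport` rule from earlier (ESLint v9 flat-config style):

```ts
import { RuleTester } from "eslint";
import { noDefaultExport } from "./no-default-export";

const tester = new RuleTester({
  languageOptions: { ecmaVersion: 2022, sourceType: "module" },
});

// Valid code passes untouched; invalid code must raise the expected messageId
tester.run("and/no-default-export", noDefaultExport, {
  valid: [{ code: "export const foo = 1;" }],
  invalid: [{ code: "export default 1;", errors: [{ messageId: "noDefault" }] }],
});
```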
Result: every observed issue becomes an executable constraint, agents clean up today’s debt, and the codebase self‑heals against future drift.
How we lint at Factory
At Factory, linting is our first response, not an afterthought: when a bug or drift shows up in a review, test, or incident, we immediately codify it as a rule, wire it into local dev, pre‑commit, CI, PR bots, and our agent toolchains, and treat “lint green” as the merge gate. This turns every lesson into an executable constraint that agents obey by default and humans can’t accidentally bypass. Concretely, we maintain dozens of rules such as:
- enforce a 1:1 mapping between logic files and colocated unit tests (sketched after this list);
- require proper error handling and a consistent error taxonomy;
- impose aggressive syntax and organization standards (named exports, absolute imports, deterministic file placement, module boundaries, no cycles);
- verify middleware is correctly bootstrapped (auth, tracing, logging) wherever it’s required;
- and much more.
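A sketch of the first rule as a standalone check (the `src` root, the exemption list, and the `.test.ts` naming are assumptions about the layout described earlier):

```ts
// check-colocated-tests.ts: fail CI when a logic file has no colocated unit test
import { existsSync, readdirSync } from "node:fs";
import { join } from "node:path";

function logicFiles(dir: string): string[] {
  return readdirSync(dir, { withFileTypes: true }).flatMap((entry) => {
    const full = join(dir, entry.name);
    if (entry.isDirectory()) return logicFiles(full);
    // Tests, declarations, and barrel/type/enum files are exempt from the 1:1 rule
    const exempt =
      /\.(test|d)\.ts$/.test(entry.name) ||
      ["index.ts", "types.ts", "enums.ts"].includes(entry.name);
    return entry.name.endsWith(".ts") && !exempt ? [full] : [];
  });
}

const missing = logicFiles("src").filter(
  (file) => !existsSync(file.replace(/\.ts$/, ".test.ts"))
);
if (missing.length > 0) {
  console.error("Missing colocated tests:\n" + missing.join("\n"));
  process.exit(1);
}
```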
The result is a codebase that self‑heals: new issues become rules, agents mass‑fix violations, and the guardrails prevent the same problem from landing twice.
The payoff
When guidelines become lint‑enforced law, agent‑native development stops being a promise and starts compounding: each rule you codify shrinks review overhead, eliminates a class of regressions, and turns drift into an auto‑fixed diff; each lint‑green merge makes the repo more searchable, refactorable, and teachable; each iteration converts tribal knowledge into an executable spec that agents obey by default. Over weeks this feels like smoother PRs; over quarters it becomes faster lead time, safer large‑scale changes, and fewer outages; over years it locks in architectural integrity while your teams redirect attention to design, domain, and product. The end state is a self‑healing codebase where consistency scales with headcount and agent horsepower, and every new standard you encode pays dividends forever.