
The Rules Architecture

How WSD distributes guardrails across specialized locations instead of cramming everything into a single rules file.

The Monolithic Rules Problem

The prevailing approach to guiding AI assistants is to put every steering directive into one location. Behavioral preferences, coding style, project architecture, naming conventions, tool-specific quirks, framework definitions. All of it packed into a file that the AI reads at the start of every session. The file goes by different names depending on the platform (CLAUDE.md,[1] the proposed AGENTS.md standard[2]), and some platforms are beginning to move from single files to rule directories (Cursor’s .cursor/rules/[3] replacing the older .cursorrules).

But the underlying approach is largely the same: write your rules, dump them into context, hope the AI follows along.

This approach has two problems, and they compound each other.

The first problem is attention dilution. As the rules file grows, every rule competes with every other rule for the AI’s attention within its context window. Guidelines about Python test isolation sit next to guidelines about commit message formatting sit next to architectural principles about API design. When the agent is writing Python tests, the build naming convention rules are noise. When the agent is designing an API, the test isolation rules are noise. The AI cannot distinguish which rules are relevant to the current task because they are all presented with equal weight in an undifferentiated list. The result is inconsistent adherence: the AI follows some rules, ignores others, and the developer cannot predict which will be which.[4]

The second problem is maintenance collapse. A single file that governs everything becomes difficult to maintain as the project evolves. Adding a new coding convention means editing the same document where you defined the team’s communication protocols. Updating a framework definition risks accidentally breaking an unrelated project-specific rule. The file has no internal lifecycle management. Everything in it is treated as equally permanent, equally relevant, and equally important, even though different rules have fundamentally different characteristics. Some rules are stable across every project. Some emerge gradually during development. Some apply only to specific types of work or should be checked at certain times during the work. Cramming them all into one file erases these distinctions and makes the file progressively harder to manage.

The online discussions about AI rule adherence,[5] the frustration of watching an AI assistant cheerfully violate the rules you spent hours crafting, are largely symptoms of these two problems. The rules exist. They are written clearly. But they are delivered in a way that undermines their own effectiveness.

Distribution Over Monolith

WSD takes the opposite approach. Instead of one file containing everything, it distributes guardrails across multiple specialized locations, each with a distinct purpose, lifecycle, and relevance scope. The right rule lives in the right place, loaded at the right time, for the right task.

Distribution is more than organizational tidiness. It solves the two problems that monolithic rules files create. Rules with different lifecycles (stable behavioral constraints versus evolving architectural decisions) can live in separate documents and be maintained independently. Rules with different relevance scopes can be loaded selectively rather than dumped wholesale into every session. And rules defined by different authorities are cleanly separated, so that updating WSD does not overwrite your design decisions, and adding a design decision does not require editing a system file.

WSD uses five locations for distributing guidance. Each is described below with its purpose, what belongs there, and how it fits into the overall architecture.

Agent Rules: The Behavioral Constitution

docs/read-only/Agent-Rules.md contains behavioral guidelines, what agents should and should not do regardless of the specific task at hand. These are the behavioral constitution of the project: stable, broadly applicable, and always loaded.

Agent Rules include several categories of guidance. There are foundational software engineering principles (DRY,[6] SOLID,[7] YAGNI,[8] KISS[9]) that provide a shared baseline of engineering discipline. There are WSD-specific workflow rules governing how agents interact with the workscope system, how they handle file placement, what git commands they may use, and how they should communicate with Special Agents. There are mitigations for known LLM behavioral quirks, patterns that specific models tend to exhibit (creating temporary files in the wrong location, using terminal commands instead of editing tools, failing to read entire files) that are addressed with explicit countermeasures. And there are project-specific behavioral constraints that apply broadly to your project.

The quirks section deserves special mention. It is organized by model, and the mitigations are designed to be swapped out when models change. If you switch from one AI model to another, you update the quirks section rather than rewriting your entire rules infrastructure. This separation of model-specific workarounds from project-level principles keeps the rules maintainable across the inevitable cycle of model releases and migrations.
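As a sketch, a quirks section organized by model might look like the following. The model name, the quirks, and the mitigation wording are all illustrative assumptions, not WSD's actual defaults:

```markdown
## Model Quirks

### Model: Claude Sonnet 4 (hypothetical example)

- Quirk: creates scratch files in the repository root.
  Mitigation: all temporary files go in `tmp/`, never the project root.
- Quirk: writes files via terminal heredocs (`cat <<EOF`).
  Mitigation: always use the editing tools to create and modify files.
```

When the project migrates to a new model, only the section under that model heading needs to be replaced; the project-level rules elsewhere in the file stay untouched.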

Agent Rules also contain a section for pre-release rules, constraints that apply while a project is under active development and has not yet shipped. These address the tendency of AI assistants to defensively maintain backward compatibility and write migration code even when the product has no existing users. When the project ships, these rules can be disabled.

Agent Rules start a project fully fleshed out with WSD’s defaults. You customize them through WORKSCOPE-DEV tags — marked sections where your project-specific content is preserved during WSD updates. The rest of the file can be updated by WSD as the platform improves its defaults, without disturbing your customizations.
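For example, a preserved customization might be marked like this. The delimiter syntax shown is an assumption for illustration; only the WORKSCOPE-DEV name comes from WSD itself:

```markdown
<!-- WORKSCOPE-DEV: BEGIN project-specific rules -->
- All database access goes through the repository layer; agents must
  not issue raw SQL from request handlers.
<!-- WORKSCOPE-DEV: END -->
```

Content inside the tags survives a WSD update; content outside them may be replaced as the platform's defaults improve.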

Design Decisions: The Evolving Record

docs/core/Design-Decisions.md captures project-specific architectural choices, naming conventions, and design philosophies that emerge over the course of development.

Unlike Agent Rules, Design Decisions always starts blank in a new project. It grows over time as the project’s identity crystallizes. You discover, through repeated implementation, what patterns work for this specific codebase and what patterns do not. When an agent uses an if/else chain where a registry pattern would be more extensible, and you realize this is a project-wide preference rather than a one-time correction, you run /add-dd to record the decision. Future agents follow it as a constraint.

Each entry has a consistent structure: context (what situation prompted the decision), the decision itself, the rationale (why this choice serves the project), an example showing correct and incorrect approaches, and a note about where it applies. This structure ensures that agents can understand not just what to do but why, which helps them apply the principle correctly in novel situations rather than pattern-matching narrowly against the examples.
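A minimal entry following that structure might look like this. The labels, numbering, and the decision itself are illustrative, not taken from a real Design-Decisions.md:

```markdown
## DD-012: Prefer registries over if/else chains for extensible dispatch

**Context:** An agent implemented language-specific formatting as a
growing if/else chain.
**Decision:** Use a registry (a mapping of key to handler) whenever
behavior varies across a known, growing set of cases.
**Rationale:** New cases become one-line registrations instead of
edits to shared control flow.
**Example:** Correct: `FORMATTERS["python"] = format_python`.
Incorrect: `if lang == "python": ... elif lang == "ts": ...`
**Applies to:** Any module that dispatches on language, file type,
or provider.
```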

Design Decisions follow a different lifecycle than Agent Rules. Agent Rules are relatively constant; they are updated when WSD improves its defaults or when a new model quirk is discovered. Design Decisions are in constant flux during active development, stabilizing as the project matures. A project in its first month might add two or three design decisions per week. A project in its sixth month might add one per month. The document is a living record of the project’s architectural evolution.

The separation between Agent Rules and Design Decisions reflects a real distinction. “Do not use terminal commands to write files” is a behavioral rule: it applies to every project and every task. “Use data-driven designs for language-specific behavior” is an architectural decision: it is specific to this project and emerged from this project’s experience. Mixing them in one file conflates two different kinds of guidance with two different lifecycles, making both harder to maintain.

Standards: On-Demand Expertise

docs/read-only/standards/ contains focused, task-specific guidance distributed across multiple files. Python coding conventions, test isolation requirements, TypeScript standards, and process integrity standards each live in their own file, addressing a narrow domain and loadable independently.

This directory embodies the context engineering principle in its most direct form. The standards are not loaded into every session wholesale. They are loaded on demand, selected for relevance to the current workscope. An agent writing Python code gets the Python standards. An agent writing tests gets the test isolation standards. An agent writing documentation gets neither — those standards would be noise for a documentation task. The preparation agents handle this selective loading as part of the /wsd:prepare phase.

The standards directory is designed to grow over time. As a project encounters new types of work (API design, integration testing, deployment pipelines), new standards files can be created to capture the conventions that should govern that type of work. The directory’s structure accommodates unlimited growth without any single file becoming unwieldy, because each file stays focused on its domain.

This is the opportunity that monolithic rules files fundamentally miss. When everything lives in one file, every rule is loaded into every session. There is no mechanism for relevance-based selection. The developer who adds detailed TypeScript testing standards to their rules file has just added noise to every Python workscope. The developer who adds detailed Python formatting preferences has just diluted every TypeScript session. With a standards directory, each type of guidance lives in its own file and is surfaced only when relevant. Less noise means better adherence.[4]

System Definitions: The Shared Framework

The docs/read-only/*-System.md files define the WSD platform itself. Agent-System.md establishes the agent coordination model: who the agents are, how they communicate, what authority each holds. Workscope-System.md defines the workscope lifecycle and selection algorithm. Checkboxlist-System.md specifies the task tracking conventions. Documentation-System.md explains the directory hierarchy and document lifecycle.

These files are a different kind of guardrail. Rather than telling agents what to do or not do, they provide a shared conceptual framework, a vocabulary and set of procedures that all agents understand. When the Task-Master creates a workscope file, it follows the format defined in Workscope-System.md. When the Context-Librarian searches for relevant documents, it navigates the hierarchy defined in Documentation-System.md. When a User Agent maintains a work journal, it follows the conventions established in Agent-System.md.

System definitions are loaded during the /wsd:init initialization phase and provide the foundation on which everything else operates. They are WSD’s contribution, the platform infrastructure that you benefit from without needing to author or maintain. As WSD evolves, these files are updated to reflect platform improvements, and your project automatically inherits the new capabilities.

The AI Harness: Platform Independence

Every AI coding tool provides its own mechanism for configuration — CLAUDE.md for Claude Code, Cursor’s .cursor/rules/ directory, and whatever the next tool will introduce. WSD does not interfere with these platform-native files. You configure your AI harness however it recommends, and WSD’s own guidelines live in separate, portable locations.

This separation is deliberate. WSD’s guardrails are platform-agnostic, designed to survive a transition from one AI tool to another without rewriting your rules infrastructure. Your Agent Rules, Design Decisions, standards, and system definitions are all plain Markdown files that any AI assistant can read. If you switch tools tomorrow, you bring your entire rules architecture with you. Only the harness-specific configuration needs to change.

The platform-native file also serves a different function than WSD’s guardrails. It typically handles tool-specific configuration (build commands, test commands, project structure hints, workspace settings) that is relevant to the AI harness’s own operations. WSD’s guidelines handle the higher-level concerns: behavioral rules, architectural decisions, quality standards, and workflow coordination. The two layers complement each other without overlapping.
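Putting the five locations together, a project tree might look roughly like this. The paths are the ones named above; the specific standards filenames and comments are assumptions for illustration:

```text
docs/
  core/
    Design-Decisions.md        # evolving, project-authored
  read-only/
    Agent-Rules.md             # behavioral constitution
    standards/
      Python-Standards.md      # loaded only for Python work (name assumed)
      Test-Isolation.md        # loaded only for test work (name assumed)
    Agent-System.md            # WSD platform definitions
    Workscope-System.md
    Checkboxlist-System.md
    Documentation-System.md
CLAUDE.md                      # harness-native config, outside WSD
```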

How Enforcement Applies

A distributed rules architecture would be meaningless without enforcement. The Agent Team guide describes the three-layer enforcement model — pre-loading expectations, post-execution compliance checking, and the threat of rejection — that gives the architecture its teeth. Every rule, regardless of where it lives in the architecture, is a rule that will be taught, checked, and enforced.

The distributed architecture actually makes enforcement more effective. Because standards are loaded selectively, the Rule-Enforcer’s audit is focused on the guidelines relevant to the current work, not diluted by checking compliance against rules that do not apply. An agent that just wrote Python code is audited against Python standards, not TypeScript standards. The selective loading that reduces noise during execution also reduces noise during enforcement, making both more accurate.

Growing Your Rules Over Time

The rules architecture is not a system you configure once and forget. It evolves with your project, and the evolution follows a natural pattern.

In a new project, Agent Rules provide the foundation — you start with WSD’s defaults and may customize the project-specific sections. Design Decisions is empty. The standards directory contains WSD’s starting set. Over the first weeks of development, a feedback loop emerges: agents produce work, you notice patterns you want to correct, and you encode those corrections in the appropriate location. An architectural preference goes into Design Decisions via /add-dd. A behavioral constraint goes into the project-specific section of the Agent Rules. A language-specific convention might warrant a new standards file.

This feedback loop is the mechanism by which your project’s rules infrastructure matures. Each correction you encode is read by every future agent session. The agents do not remember your corrections from previous sessions — they start fresh each time, but the documented rules persist. Over time, the gap between what agents produce and what you expect narrows, and the corrections become less frequent. The rules architecture becomes a codified record of your project’s conventions, maintained through incremental refinement rather than upfront specification.

Writing effective rules requires a specific discipline. A good rule is complete enough to follow without ambiguity but concise enough to respect the agent’s context window. Project-level philosophies (“propagate optional parameters through call chains”) are more valuable than coding nit-picks (“use camelCase for variables”). The best rules emerge from real failures: when an agent produces work that misses a convention, debrief to identify the root cause, then encode the fix as a rule or decision that prevents recurrence.

Beyond rules and decisions, you can customize the agent definitions themselves. The files in .claude/agents/ define each Special Agent’s personality, responsibilities, and instructions. Adjusting these definitions changes how agents approach their work: a particularly sticky rule can be emphasized in both the Project-Bootstrapper and the Rule-Enforcer definitions, for example. These customizations are also preserved through WSD updates via the tag system.

The rules architecture defines the static infrastructure that shapes agent behavior. But your daily experience with WSD involves something more dynamic: two fundamentally different modes of engaging with the system that demand different things from you. The specifications that populate your rules, features, and task lists emerge from one mode of working; the automated execution that turns them into code operates in another. The next guide names this distinction and explores what it means for how you allocate your attention.

Footnotes

  1. Anthropic, “Memory — Claude Code,” Anthropic Documentation. [Online]. Available: https://docs.anthropic.com/en/docs/claude-code/memory

  2. “agents.md,” GitHub. [Online]. Available: https://github.com/agentsmd/agents.md

  3. Anysphere, “Rules — Cursor,” Cursor Documentation. [Online]. Available: https://cursor.com/docs/rules

  4. N. F. Liu et al., “Lost in the Middle: How Language Models Use Long Contexts,” Trans. Assoc. Comput. Linguist., vol. 12, pp. 157–173, 2024. Models attend unevenly to information based on position within a long context, with significant degradation for content in the middle.

  5. N. Shapira et al., “Agents of Chaos,” arXiv:2602.20021, Feb. 2026. [Online]. Available: https://arxiv.org/abs/2602.20021

  6. A. Hunt and D. Thomas, The Pragmatic Programmer, 20th Anniversary ed. Boston, MA: Addison-Wesley, 2019.

  7. R. C. Martin, Agile Software Development: Principles, Patterns, and Practices. Upper Saddle River, NJ: Prentice Hall, 2002.

  8. K. Beck, Extreme Programming Explained: Embrace Change, 2nd ed. Boston, MA: Addison-Wesley, 2004.

  9. Attributed to Kelly Johnson, Lockheed Skunk Works, ca. 1960s. See “KISS Principle,” Wikipedia. [Online]. Available: https://en.wikipedia.org/wiki/KISS_principle