Architectural Constraints for AI Agents: Enforcing Structural Patterns in Generated Code
Tell agents what your architecture actually is, not as docs they'll ignore, but as constraints they can't violate. Layer boundaries, no circular deps, module isolation—make the rules executable and agents follow them automatically.
The Core Problem
Your AI agent just generated code that looks syntactically perfect. It compiles. Tests pass. But it's reached directly into your database layer from a controller. Or created a dependency loop between two modules. Or mixed domain logic with infrastructure concerns.
AI code generators optimize for immediate task completion, not long-term architectural coherence. They can't "see" your system's structure the way a developer with 5 years on the codebase can. Without explicit architectural constraints, your AI-generated code degrades your architecture incrementally, sometimes in ways you don't notice until it's hard to untangle.
Architectural constraints are machine-readable rules that enforce the structural patterns you've decided your system must follow. They're not about syntax or style—they're about where code can live, what it can depend on, and how components interact. They're the difference between having an architecture and just having code that compiles.
Why This Matters
Architectural decay is silent. You can violate DDD layer boundaries once and it's fine. Ten times and you have a mess. A hundred times and your architecture is fiction. AI agents make those violations at scale because they don't care—they're solving for the prompt, not the system.
Developers inherit the mess. Once layer boundaries blur, they blur further. When one developer sees a shortcut work, others follow. Your architecture becomes a suggestion instead of a rule.
Refactoring becomes exponentially harder. A system with mixed concerns is far harder to refactor than a system with clear boundaries, because every change ripples across layers that should have been isolated. Scale to dozens of architectural violations and you're paying interest on technical debt every single day.
Your AI tools are only as good as your constraints. An AI agent working within clear architectural bounds can outproduce a faster developer who leaves technical debt behind. An unconstrained AI agent is worse than a junior developer—at least a junior can be taught your architectural patterns.
Explicit architectural constraints do three things: they prevent violations before code enters the repository, they make your architecture a machine-readable fact instead of documentation, and they let you onboard new developers (and AI agents) much faster because "the system enforces this" is clearer than "we do this by convention."
What Architectural Constraints Actually Look Like
A constraint isn't a philosophy. It's a specific, testable rule about what's allowed.
Layer boundary constraints: "Code in the controllers package cannot directly instantiate or import classes from the db package." This isn't about how smart a developer is—it's a structural fact. Your request handlers go through use cases; use cases go through domain logic; domain logic goes through repositories. That's the rule.
Dependency direction constraints: "Modules can only depend on modules at the same level or on lower (more foundational) levels. No upward dependencies." This prevents your infrastructure from depending on your application layer, which is a quick way to create spaghetti.
Module isolation constraints: "Code in the users module cannot import from the billing module. They communicate only through explicit contracts (events, DTOs)." This keeps modules loosely coupled and makes it clear when you've accidentally tightened the coupling.
Naming and symbol constraints: "All domain events must end with 'Event'. All queries must implement the IQuery interface." This is where naming conventions become enforceable rules, not just guidelines in a README.
Circular dependency constraints: "Module A cannot depend on Module B if Module B depends on Module A." Circular dependencies are almost always a sign of wrong abstraction. The constraint prevents them proactively.
Pattern enforcement constraints: "All user-facing endpoints must go through the AuthorizationMiddleware before reaching the handler." This is about enforcing architectural patterns, not just suggesting them.
These aren't theoretical. They're rules about what can import what, what can call what, and where code can live.
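Rules like these can be encoded as plain data before you reach for any tooling. A minimal sketch in Python—the layer names (`controllers`, `usecases`, and so on) are illustrative placeholders, not a prescribed layout:

```python
# Allowed dependencies per layer: a layer may only import from the layers listed here.
ALLOWED_DEPS = {
    "controllers": {"usecases"},
    "usecases": {"domain"},
    "domain": {"repositories"},
    "repositories": set(),
}

def is_allowed(source_layer: str, target_layer: str) -> bool:
    """Return True if code in source_layer may depend on target_layer."""
    if source_layer == target_layer:
        return True  # imports within the same layer are fine
    return target_layer in ALLOWED_DEPS.get(source_layer, set())

# A controller importing the db layer directly violates the rule.
print(is_allowed("controllers", "db"))        # False
print(is_allowed("controllers", "usecases"))  # True
```

Because the rules are data, the same table can feed your CI checker, your AI agent's system prompt, and your documentation without drifting apart.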
Why AI Agents Violate Architecture
An AI agent has no inherent understanding of your architecture. If your codebase is large, the agent might not even see all of it in context. And more importantly: the agent is optimizing for "solve this task," not "maintain this system's structure."
When a developer writes code, they navigate a mental model of the system. They know where to put something because they've thought about whether the presentation layer should know about the database layer (it shouldn't). They notice when they're about to violate a boundary and consciously decide whether to cross it (usually the wrong call) or respect it.
An AI agent has no mental model. It has patterns from training data and your context window. If your prompt says "add a feature that shows user billing info on the dashboard," the agent might:
- Directly query the database from the controller (quick, easy, wrong)
- Reach into the billing module from the user module (convenient, violates module boundaries)
- Create a dependency cycle between auth and notification services (happened to see that pattern in similar code, didn't understand it was wrong)
The agent isn't being careless. It's being efficient given no other guidance. Architectural constraints are that guidance. They're how you tell the agent "I don't care how convenient this is, that path is not allowed."
Encoding Constraints as Machine-Readable Rules
Constraints live in three forms: as static rules you check after code generation, as validators that run during code generation (if your tooling supports it), and as runtime guards in some cases.
Static analysis rules check a codebase after the fact. Tools like ArchUnit (for JVM languages) or ESLint with custom rules (for JavaScript) let you specify rules like "classes in this package cannot import from that package" and then fail the build if the rule is violated.
A simple rule in ArchUnit looks like:

```java
noClasses().that().resideInAPackage("..controllers..")
    .should().dependOnClassesThat().resideInAPackage("..db..")
    .check(importedClasses);
```

This is executable. It's not a comment in a design document. If violated, the build fails.
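The same kind of rule can be checked in other ecosystems with a few lines of code. A hedged sketch in Python using the standard `ast` module, assuming top-level package names map to layers:

```python
import ast

# (importer layer, imported layer) pairs that are forbidden -- illustrative only.
FORBIDDEN = {("controllers", "db")}

def find_violations(source: str, layer: str) -> list:
    """Parse a module's source and report imports that cross a forbidden boundary."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        targets = []
        if isinstance(node, ast.Import):
            targets = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            targets = [node.module]
        for target in targets:
            target_layer = target.split(".")[0]  # map package prefix to layer
            if (layer, target_layer) in FORBIDDEN:
                violations.append(f"{layer} cannot import {target}")
    return violations

code = "from db.users import UserRepository\n"
print(find_violations(code, "controllers"))  # ['controllers cannot import db.users']
```

A real checker would walk files on disk and infer each file's layer from its path, but the core mechanism—parse, inspect imports, compare against a rule table—is this small.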
Custom validators are rules specific to your domain. "All financial transactions must have an audit trail." This isn't a language-level architecture rule; it's a business-level structural rule. You write a validator that walks through generated code and checks that any code creating a transaction also writes an audit entry.
```
if (codeContains(TransactionClass.new)) {
    mustContain(AuditLog.record);
}
```

Dependency graph analysis builds a map of what depends on what, then applies rules to that map. This catches subtle violations like "Module A depends on B, which depends on C, which depends on A"—cycles that are easy to miss in code review.
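That kind of cycle reduces to a depth-first search over the dependency map. A minimal sketch (module names are illustrative):

```python
def find_cycle(graph):
    """Depth-first search for a dependency cycle; returns one cycle's path or None."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited, in progress, done
    color = {node: WHITE for node in graph}
    path = []

    def visit(node):
        color[node] = GRAY
        path.append(node)
        for dep in graph.get(node, []):
            if color.get(dep, WHITE) == GRAY:
                # dep is an ancestor of the current node: we found a cycle
                return path[path.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE:
                found = visit(dep)
                if found:
                    return found
        path.pop()
        color[node] = BLACK
        return None

    for node in graph:
        if color[node] == WHITE:
            found = visit(node)
            if found:
                return found
    return None

deps = {"A": ["B"], "B": ["C"], "C": ["A"]}
print(find_cycle(deps))  # ['A', 'B', 'C', 'A']
```

Running this over the import graph on every commit means a three-hop cycle surfaces the moment it's introduced, not months later during a refactor.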
The practical implementation depends on your language and tooling. For an AI agent system:
- Pre-generation: embed constraints in your context or system prompt ("You will never generate code where X depends on Y")
- Post-generation: run validators that fail if the agent violated a constraint
- CI validation: run comprehensive architecture checks before code merges
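The post-generation step can be as simple as running every validator over the agent's output and rejecting it with a specific reason. A sketch—the validator and its message are hypothetical stand-ins for your real rules:

```python
def layer_check(code):
    """Hypothetical rule: generated controller code must not touch the db layer."""
    if "from db" in code or "import db" in code:
        return ("controllers cannot import db: presentation code must go "
                "through the service layer instead of the data layer")
    return None  # no violation

def validate_generated_code(code, validators):
    """Run each validator; collect specific, actionable violation messages."""
    return [msg for v in validators if (msg := v(code)) is not None]

generated = "from db.users import UserRepository\n"
errors = validate_generated_code(generated, [layer_check])
print(errors != [])  # True: the output is rejected, with a reason the agent can act on
```

The messages matter as much as the gate: a specific reason can be fed straight back into the agent's next attempt.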
Common Architectural Constraints in Practice
Different systems have different architectures, but some constraints show up everywhere.
Presentation → Domain → Infrastructure. The presentation layer can talk to domain logic; domain logic can't talk back to presentation. Infrastructure may be used by any layer below presentation, but never reached into directly—access goes through repositories or adapters.
Module isolation. If you've carved your system into modules (orders, users, billing, shipping), code from one module shouldn't import and use classes from another. It should depend on contracts: events, messages, DTOs. This keeps modules replaceable.
No circular dependencies. If A depends on B, then B cannot depend on A. This is enforceable. If you need two modules to communicate, create a third module they both depend on (or use events where A doesn't need to know about B at all).
Naming conventions as rules. All repository classes end with Repository. All validators end with Validator. All events end with Event. This isn't decoration—when your code generator sees the pattern, it knows the semantics. When your validator sees a financial operation, it checks for corresponding AuditEvent. Naming becomes structural information.
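A naming rule like this reduces to a suffix check over declared class names. A minimal sketch using Python's `ast` module—the package-to-suffix table is illustrative:

```python
import ast

# Required class-name suffix per package.
SUFFIX_RULES = {"events": "Event", "repositories": "Repository", "validators": "Validator"}

def check_naming(source, package):
    """Report classes whose names lack the suffix required for their package."""
    suffix = SUFFIX_RULES.get(package)
    if suffix is None:
        return []  # no rule for this package
    return [
        f"{package}.{node.name} must end with '{suffix}'"
        for node in ast.walk(ast.parse(source))
        if isinstance(node, ast.ClassDef) and not node.name.endswith(suffix)
    ]

print(check_naming("class UserCreated: pass", "events"))       # flagged: missing suffix
print(check_naming("class UserCreatedEvent: pass", "events"))  # []
```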
Security boundaries. User-facing endpoints must be authenticated. Admin endpoints must be authorized. Sensitive operations must be audited. These aren't optional; they're structural requirements. Code that creates or modifies sensitive data must go through a specific path.
Service boundaries. If you're using microservices, services can't directly depend on each other's internals. They depend on contracts. This is looser than module isolation in a monolith, but the principle is the same.
Enforcement Mechanisms
Rules without enforcement are suggestions. Enforcement happens in two main places.
Static analysis before merge. Violations are caught in CI before code reaches main. ArchUnit, ESLint, custom Python scripts—whatever your language uses. This is the gate. Code that violates architectural constraints doesn't merge. Full stop.
The feedback needs to be specific. Not "architecture violation" but "controllers.UserController cannot import db.UserRepository because presentation layer cannot access data layer directly. Use UserRepository through a service layer (e.g., UserService in the application layer)."
Custom validators in the AI workflow. If you're using an AI agent to generate code, you can validate before the code is even committed. The agent generates a function, a validator checks it against your rules, and if it violates something, the agent either fixes it or you reject the output.
This is faster feedback than waiting for CI. It's like pair programming with the agent. The agent suggests something architecturally wrong, you catch it immediately, and it regenerates.
Runtime guards for critical constraints. Some constraints can't be checked statically (e.g., "no N+1 queries," "no direct unencrypted access to sensitive data"). You guard these with runtime checks: middleware that audits requests, query interceptors that log, encryption wrappers that ensure sensitive data is never transmitted raw.
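A runtime guard is often just a wrapper around the sensitive path. A sketch of an audit guard as a Python decorator—`AUDIT_LOG` is a stand-in for your real append-only audit sink:

```python
import functools

AUDIT_LOG = []  # stand-in for a real, append-only audit sink

def audited(operation):
    """Ensure every call to a sensitive operation leaves an audit record."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            AUDIT_LOG.append({"operation": operation, "args": args})
            return result
        return wrapper
    return decorator

@audited("grant_permission")
def grant_permission(user, permission):
    return f"{user} granted {permission}"

grant_permission("alice", "admin")
print(len(AUDIT_LOG))  # 1: the audit entry cannot be skipped by the caller
```

Because the guard lives on the operation itself, it holds regardless of which module—or which agent—produced the calling code.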
The Difference Between Linting and Architectural Validation
Linting checks syntax and style: "variables should be camelCase," "remove unused imports," "indent with 4 spaces."
Architectural validation checks structure: "this module shouldn't depend on that module," "all repositories must implement this interface," "security operations must go through this handler."
Linting is local—one file at a time. Architectural validation is global—it needs to see the whole system.
Linting is automated and accepted by developers. Architectural validation is often skipped because it's seen as bureaucracy ("let me just fix this one thing to get it working"). This is wrong. Architectural constraints are more important than lint rules because they preserve long-term coherence.
Both are necessary. Lint keeps code readable. Architecture keeps code maintainable.
The AI-Native Angle
AI agents need architectural constraints more than human developers do because they operate without intuition. A human developer builds a mental model of your system. An AI agent processes tokens. These constraints become part of the pre-commit and CI validation pipeline that ensures agents follow the rules.
When you're using AI to generate code at scale—whether it's a code generator integrated into your build pipeline or AI agents augmenting your team—architectural constraints become your control surface. They're how you translate "here's what I value in this system" into rules the AI can't break.
Tools like Bitloops acknowledge this by making constraints first-class citizens: you define rules about what code can do structurally, the system validates generated code against those rules, and violations surface not just the broken rule but context about why the rule exists. It's the difference between "your code violates the architecture" and "presentation layer can't access the database layer directly because it violates our layered architecture—you need to go through the service layer." These constraints are part of a broader compliance framework for AI-native engineering.
FAQ
Doesn't this slow down development?
Short-term, maybe. You're preventing the easy wrong thing. Long-term, it speeds you up because you're not spending weeks refactoring due to architectural debt. And with AI agents, the constraint doesn't slow anything down—it prevents the agent from going down the wrong path in the first place.
What if we need to violate a constraint for a legitimate reason?
Good question. First: examine whether you need to violate the constraint or you need to rethink the constraint. Most "legitimate violations" are signs the constraint is too strict or the architecture is wrong. Second: if you genuinely need to break a rule, you can suppress the rule for that specific case, but make it explicit and documented. "We're breaking the module isolation rule here because X." This is trackable; implicit violations are not.
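One way to keep exceptions explicit is a suppression list that requires a rule, a location, and a documented reason. A sketch—the field names, file path, and ticket reference are all illustrative:

```python
# Each suppression must name the rule, the location, and a non-empty reason.
SUPPRESSIONS = [
    {
        "rule": "module-isolation",
        "location": "users/billing_bridge.py",
        "reason": "Legacy invoice lookup; tracked for removal in a follow-up ticket.",
    }
]

def is_suppressed(rule, location):
    """A violation passes only if an explicit, documented suppression covers it."""
    return any(
        s["rule"] == rule and s["location"] == location and s["reason"]
        for s in SUPPRESSIONS
    )

print(is_suppressed("module-isolation", "users/billing_bridge.py"))  # True
print(is_suppressed("module-isolation", "users/other.py"))           # False
```

The suppression file itself becomes reviewable history: every deliberate violation has an owner and a reason, which implicit shortcuts never do.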
How do we define constraints for a legacy system that already violates them?
Start with the constraints you want going forward. Run a scan to find existing violations. Decide: fix them, mark them as legacy exceptions, or adjust the constraint. Typically you do a mix. This is why architectural validation is often run in "warning" mode first—you fix the obvious things, then move to strict enforcement on new code.
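Warning mode is usually implemented as a baseline: existing violations are recorded once, and only new ones fail the build. A sketch, with violations represented as simple "importer -> imported" strings:

```python
def new_violations(current, baseline):
    """Fail only on violations that are not already in the legacy baseline."""
    return sorted(set(current) - set(baseline))

baseline = {"orders -> billing", "users -> shipping"}  # grandfathered legacy edges
current = {"orders -> billing", "users -> shipping", "controllers -> db"}

print(new_violations(current, baseline))  # ['controllers -> db']: only this blocks the merge
```

As legacy violations get fixed, they're deleted from the baseline, so the ratchet only ever tightens.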
Can we have constraints that are specific to the AI-generated code?
Absolutely. You might have looser constraints on human-written code (which you trust to make good decisions) and strict constraints on AI-generated code (which you don't). Or different constraints for different parts of the codebase. This is a powerful pattern.
What about constraints that depend on domain knowledge?
That's where custom validators come in. You encode domain knowledge into a validator: "any code that modifies user permissions must also write an audit log." The validator checks not for a syntactic pattern but for semantic compliance.
How do we prevent constraint fatigue, where developers ignore constraints because there are too many?
Keep constraints focused on high-value rules. "No circular dependencies" is worth enforcing. "All variable names must be at least 4 characters long" is just noise. Start with 3-5 core constraints and add more only if they prevent real problems.
What's the difference between architectural constraints and code reviews?
Code reviews are human judgment. Constraints are automated facts. Code reviews catch edge cases and context-dependent decisions. Constraints catch violations of rules you've already decided are non-negotiable. You need both.
Primary Sources
- A Philosophy of Software Design — comprehensive guide to software design philosophy and architectural constraints.
- Domain-Driven Design — principles for structuring code around business domain concepts and language.
- NIST AI RMF — framework for governing AI systems with documentation and architectural constraints.
- SLSA Framework — supply chain security framework with requirements for code integrity and traceability.
- OWASP Top 10 for LLM Applications — top security risks specific to large language model applications.
- SOC 2 (AICPA) — governance criteria for designing security and control architectures.