Commit d5e737c (1 parent: 60e6c60)
ofriw and claude committed: Add Lesson 11: Agent-Friendly Code

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>

6 files changed: +1237 −0 lines

Lines changed: 200 additions & 0 deletions
---
sidebar_position: 6
sidebar_label: 'Lesson 11: Agent-Friendly Code'
title: 'Lesson 11: Writing Agent-Friendly Code'
---

import CompoundQualityVisualization from '@site/src/components/VisualElements/CompoundQualityVisualization';

Agents amplify patterns—good or bad. Clean code generates more clean code. Scattered logic generates more scattered logic. Research confirms this: AI-generated code contains **8x more duplicated blocks** than human-written code[^1]. Agents don't create duplication—they amplify the existing patterns they observe during code research.
**Critical caveat:** Even with perfect patterns, agents are stochastic systems—LLM token generation is probabilistic, not deterministic. Confabulations, hallucinations, and subtle errors occur randomly, regardless of pattern quality. High-entropy states (complex refactors, cross-cutting changes) increase error probability. Your job isn't to prevent all errors—that's impossible with probabilistic systems—it's to **actively reject errors during review** so they never enter the compounding cycle.

Every piece of code you accept today becomes pattern context for tomorrow's agents. This creates exponential quality curves—upward or downward. You control the direction.
## The Compounding Mechanism

During code research ([Lesson 5](/docs/methodology/lesson-5-grounding)), agents grep for patterns, read implementations, and load examples into context. The code they find becomes the pattern context for generation.

<CompoundQualityVisualization />
### Two Sources of Quality Drift

Your code quality degrades in two fundamentally different ways when working with AI:

**1. The Copy Machine Effect (Predictable Amplification)**

Agents find existing code and use it as examples. If your codebase has duplication, the agent learns "this is how we do things here" and creates more duplication. If tests are missing in similar files, the agent generates code without tests. If error handling is inconsistent, the agent produces inconsistent error handling.

This is predictable: show the agent messy patterns, get messy code back. The agent isn't being creative—it's pattern-matching what already exists.

**2. The Dice Roll (Random AI Errors)**

LLMs are probabilistic systems—they generate code through weighted randomness, not logical reasoning. Even when your codebase is pristine, the AI randomly produces errors:

- **Making things up:** References a `getUserProfile()` function that doesn't exist, or imports from files that aren't there
- **Complexity breaks down:** Simple tasks work fine, but multi-file refactors and complex state management increase the chance of mistakes
- **Model quirks:** Different model versions, context limits, and attention patterns create unpredictable variance

You can't eliminate these random errors with better prompts or cleaner patterns—they're inherent to how LLMs work.

**Why This Matters:**

Both problems feed the same exponential curve. When you accept a random AI error during code review, it becomes a pattern that gets copied. One hallucinated API call in iteration 1 becomes the template for 5 similar hallucinations by iteration 3.

**Your Critical Role:** Code review ([Lesson 9](/docs/practical-techniques/lesson-9-reviewing-code)) is where you break the cycle. Reject bad patterns before they multiply. Reject random errors before they become patterns. Every acceptance decision affects every future generation.
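The compounding claim can be made concrete with a toy model. All numbers below are illustrative, not taken from the cited research; the point is only how the acceptance rate drives the curve:

```typescript
// Toy model of compounding flaws across agent iterations (illustrative only).
// Each accepted flaw becomes pattern context that gets amplified next round.
function flawedPatternsAfter(
  iterations: number,
  initialFlaws: number,
  amplification: number, // copies generated per observed flaw per iteration
  acceptRate: number     // fraction of flawed generations that pass review
): number {
  let flaws = initialFlaws
  for (let i = 0; i < iterations; i++) {
    flaws += flaws * amplification * acceptRate
  }
  return Math.round(flaws)
}

// Strict review (10% of flaws slip through) vs. rubber-stamping (90%):
console.log(flawedPatternsAfter(5, 4, 1.0, 0.1)) // → 6
console.log(flawedPatternsAfter(5, 4, 1.0, 0.9)) // → 99
```

Same starting codebase, same agent behavior; the only variable is the reviewer. That is the circuit-breaker role in miniature.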
## Co-locate Related Constraints

From [Lesson 5](/docs/methodology/lesson-5-grounding), agents discover your codebase through **agentic search**—Grep, Read, Glob. **Agents only see code they explicitly find.** When constraints scatter across files, search determines what the agent sees and what it misses.

**❌ Anti-pattern (scattered constraints):**

```typescript
// File: services/auth.ts
function createUser(email: string, password: string) {
  return db.users.insert({ email, password: hashPassword(password) })
}

// File: config/validation.ts
const MIN_PASSWORD_LENGTH = 12 // ← Agent never searches for this file
```

**What happens:** Agent searches `Grep("createUser")` → reads `services/auth.ts` → generates code accepting 3-character passwords because it never saw `MIN_PASSWORD_LENGTH`.

**✅ Production pattern (co-located constraints):**

```typescript
// File: services/auth.ts
const MIN_PASSWORD_LENGTH = 12 // ← Agent sees this in the same file

function createUser(email: string, password: string) {
  if (password.length < MIN_PASSWORD_LENGTH) {
    throw new Error(`Password must be at least ${MIN_PASSWORD_LENGTH} characters`)
  }
  return db.users.insert({ email, password: hashPassword(password) })
}
```

**What happens:** Agent searches `Grep("createUser")` → reads `services/auth.ts` → sees `MIN_PASSWORD_LENGTH` in the same read → generates code that enforces the constraint.
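The discovery gap can be simulated in a few lines. This is a toy model of grep-based agentic search; the file contents are invented for illustration:

```typescript
// Toy repo: two files, with the password constraint in a separate file.
const repo: Record<string, string> = {
  "services/auth.ts": "function createUser(email, password) { /* ... */ }",
  "config/validation.ts": "const MIN_PASSWORD_LENGTH = 12",
}

// Grep-style search: return only the paths whose content matches the query.
function grep(query: string, files: Record<string, string>): string[] {
  return Object.keys(files).filter((path) => files[path].includes(query))
}

// An agent researching "createUser" never opens config/validation.ts,
// so the password constraint never enters its context window.
console.log(grep("createUser", repo)) // → ["services/auth.ts"]
```

Co-location works because it collapses this two-file problem into one read.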
### Semantic Bridges When DRY Requires Separation

When constraints must be shared across modules, create **semantic bridges**—comments with related semantic keywords that enable semantic search and code research tools ([Lesson 5](/docs/methodology/lesson-5-grounding#solution-1-semantic-search)) to discover the relationship:

```typescript
// File: shared/constants.ts
// Password strength requirements: minimum character length enforcement
export const MIN_PASSWORD_LENGTH = 12

// File: services/auth.ts
import { MIN_PASSWORD_LENGTH } from '@/shared/constants'

// User credential validation: enforce security constraints
function createUser(email: string, password: string) {
  if (password.length < MIN_PASSWORD_LENGTH) {
    throw new Error(`Password must be at least ${MIN_PASSWORD_LENGTH} characters`)
  }
  return db.users.insert({ email, password: hashPassword(password) })
}
```

**How semantic bridges work:** Semantic search matches meaning, not exact words. The query "password validation requirements" finds BOTH files because embeddings recognize semantic similarity:

- "password" ≈ "credential"
- "requirements" ≈ "constraints"
- "strength" ≈ "security"

The comments use different words with overlapping meaning—semantic breadcrumbs that connect related concepts across files.
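Under the hood, that "≈" is typically cosine similarity over embedding vectors. The 3-dimensional vectors below are invented purely to illustrate the mechanism; real embeddings have hundreds of dimensions:

```typescript
type Vec = number[]

// Cosine similarity: 1.0 means same direction (same meaning), 0 means unrelated.
function cosineSimilarity(a: Vec, b: Vec): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0)
  const norm = (v: Vec) => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0))
  return dot / (norm(a) * norm(b))
}

// Hypothetical embeddings: near-synonyms land close together in vector space.
const embeddings: Record<string, Vec> = {
  password:   [0.9, 0.1, 0.2],
  credential: [0.85, 0.15, 0.25], // close to "password"
  invoice:    [0.1, 0.9, 0.3],    // unrelated concept
}

console.log(cosineSimilarity(embeddings.password, embeddings.credential)) // high
console.log(cosineSimilarity(embeddings.password, embeddings.invoice))    // low
```

Exact-match grep would never connect "password" and "credential"; embedding similarity does, which is why the bridge comments deliberately use different but related words.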
### Automate Through Prompting

Rather than manually managing discoverability strategies, configure your agent to handle this automatically. Add instructions like `"Document inline when necessary"` and `"Match surrounding patterns and style"` to your `CLAUDE.md` or `AGENTS.md` ([Lesson 6](/docs/practical-techniques/lesson-6-project-onboarding)). These phrases make agents automatically add semantic bridge comments during generation, follow existing code conventions, and maintain consistency without explicit oversight. The agent reads your co-located constraints and semantic bridges during code research, then generates new code that follows the same patterns—turning discoverability into a self-reinforcing system rather than manual organizational work.

**Caveat:** You'll need to occasionally remind the agent about these instructions in your task-specific prompts. Due to the [U-shaped attention curve](/docs/methodology/lesson-5-grounding#the-scale-problem-context-window-limits), instructions buried in configuration files can fall into the ignored middle of the context window during long interactions. A quick reminder like "document inline where necessary and match surrounding style" at the end of your prompt ensures these constraints stay in the high-attention zone.
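As a sketch, the relevant `CLAUDE.md` section might look like the following. The heading and expanded wording are illustrative; only the two quoted phrases come from the lesson:

```markdown
## Code conventions

- Document inline when necessary: add brief comments naming the concept
  a constant or check implements, so semantic search can find it.
- Match surrounding patterns and style: reuse the file's existing naming
  and error-handling conventions before introducing new ones.
```

Keeping each instruction short and imperative mirrors the prompting techniques from Lesson 4, which is what makes them effective when loaded into the agent's context.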
## Comments as Context Engineering: Critical Sections for Agents

**Advanced technique—use sparingly.** In concurrent programming, critical sections protect shared resources through mutual exclusion. Comments can serve a similar role for AI agents, creating "agent-critical sections" that guard sensitive code against accidental modification. Apply this **only** to genuinely high-risk code: authentication/authorization, cryptographic operations, payment processing, database migrations, audit logging, PII handling. Do NOT use it for general business logic, CRUD operations, or frequently-changing code. The trade-off: protection creates friction. If every function has "CRITICAL" warnings, the signal becomes noise and legitimate agent work slows down.

When agents research your codebase ([Lesson 5](/docs/methodology/lesson-5-grounding)), they read files and load every comment into their context window. This means comments become prompts. Write them like prompts, using techniques from [Lesson 4](/docs/methodology/lesson-4-prompting-101): imperative directives (NEVER, MUST, ALWAYS), explicit negation patterns ("Do NOT X. Instead, always Y"), numbered steps for complex operations (Step 1, Step 2), and concrete consequences. When the agent generates password-handling code and reads "NEVER store passwords in plain text" alongside implementation alternatives, that violation becomes far less likely. You're exploiting prompt injection—the good kind.

```typescript
// Standard comment
// Validates password before storing
function createUser(password: string) {
  return db.users.insert({ password })
}

// Critical section (agent barrier)
// === CRITICAL SECURITY SECTION ===
// NEVER store passwords in plain text or with weak hashes (MD5, SHA1)
// MUST hash with bcrypt (10+ rounds) BEFORE persistence
// Do NOT modify the hashing algorithm without a security review
// Violations create CVE-level vulnerabilities
function createUser(password: string) {
  if (password.length < 12) {
    throw new Error('Password must be at least 12 characters')
  }
  const hashed = bcrypt.hashSync(password, 10)
  return db.users.insert({ password: hashed })
}
```

This creates deliberate friction. An agent tasked with "add OAuth login" will work more slowly around password hashing code with heavy constraints—it must navigate all those NEVER/MUST directives carefully. That's the protection mechanism: forced caution for critical paths. But overuse is counterproductive. Mark too many functions as CRITICAL and agents struggle with routine work, slowing down legitimate changes as much as dangerous ones. Reserve this technique for code where accidental modification genuinely costs more than the development slowdown.
## The Knowledge Cache Anti-Pattern

You've extracted architectural knowledge from your codebase with an agent—clean diagrams, comprehensive API documentation, detailed component relationships. You save it as `ARCHITECTURE.md` and commit it. Now you have a cache invalidation problem: code changes (always), documentation doesn't (usually), and future agents find both during code research ([Lesson 5](/docs/methodology/lesson-5-grounding)). The diagram below shows the divergence.

```mermaid
sequenceDiagram
    participant KB as 🗄️ Codebase<br/>(Persistent)
    participant Agent as ⚡ Agent<br/>(Stateless)

    rect rgba(167, 139, 250, 0.1)
        Note over Agent: 1. RESEARCH
        Agent->>KB: Read source code
        KB->>Agent: Knowledge extracted
    end

    alt ✅ Good Path
        Note over Agent: Knowledge stays in context
        rect rgba(167, 139, 250, 0.1)
            Note over Agent: 2. PLAN
            Note over Agent: 3. EXECUTE
            Agent->>KB: Edit code
            Note over KB: Code changes
        end
        Note over Agent: Done ✓
        Note over KB: Code = source of truth
    else ❌ Bad Path: Cache Research
        Agent->>KB: Save ARCHITECTURE.md
        Note over KB: Cache committed
        rect rgba(167, 139, 250, 0.1)
            Note over Agent: 2. PLAN
            Note over Agent: 3. EXECUTE
            Agent->>KB: Edit code
            Note over KB: Code changes<br/>⚠️ Cache now stale!
        end

        Note over Agent: Future agent spawns
        Agent->>KB: Research (read KB)
        KB->>Agent: Finds BOTH:<br/>① Current code<br/>② Outdated cache
        Note over Agent: Confusion!
    end
```

The moment you commit extracted knowledge, every code change requires documentation updates you'll forget. Source code is your single source of truth—code research tools (ChunkHound, semantic search, Explore) extract architectural knowledge dynamically every time, fresh and accurate. Document decisions and WHY (ADRs, high-level overviews, business domain concepts), not extracted WHAT that code research can regenerate on demand.
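For contrast, here is the kind of document that does belong in the repo: a short ADR recording a decision and its WHY. The ADR number and its contents are hypothetical, shown only to illustrate the shape:

```markdown
# ADR-014: Hash passwords with bcrypt

## Decision
Use bcrypt (cost factor 10) for all password hashing.

## Why
The team has audited bcrypt in production before, and our target
runtimes lack maintained argon2 bindings. Revisit if that changes.

## Consequences
Agents and humans can regenerate the WHAT (where hashing happens) from
the code itself; this file only preserves the WHY.
```

Unlike an extracted architecture dump, a decision record doesn't go stale when implementation details move around, because it captures intent rather than structure.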
## Key Takeaways

- **Agents amplify patterns AND produce stochastic errors** - Good patterns compound into better code. Bad patterns compound into technical debt. **But even with perfect patterns, LLMs produce probabilistic errors** (confabulations, entropy-driven failures) that compound if accepted. Research shows AI code has 8x more duplication because agents amplify existing duplication patterns. Every accepted PR becomes pattern context for future agents—including any stochastic errors you failed to reject.

- **Co-locate constraints, create semantic bridges when necessary** - Scattered code compounds into harder-to-navigate codebases. When separation is required (DRY), use comments with overlapping semantic keywords so search can connect the related files.

- **Comments as agent-critical sections (use sparingly)** - For genuinely high-risk code (authentication, cryptography, payments, PII), write comments as prompts using imperative directives (NEVER, MUST, ALWAYS) to create deliberate friction. This protection mechanism guards sensitive code from accidental modification. **Overuse is counterproductive**—if everything is marked CRITICAL, the signal becomes noise and legitimate work slows down.

- **You are the quality circuit breaker** - Code review ([Lesson 9](/docs/practical-techniques/lesson-9-reviewing-code)) prevents negative compounding. Accepting bad patterns lets them enter the pattern context for future agents. Rejecting them breaks the negative feedback loop.

- **Avoid knowledge cache anti-patterns** - Code research tools (Explore, ChunkHound, semantic search) extract architectural knowledge dynamically from source code every time you need it. Saving extracted knowledge to .md files creates unnecessary caches that become stale, pollute future grounding with duplicated information, and create impossible cache invalidation problems. Trust the grounding process ([Lesson 5](/docs/methodology/lesson-5-grounding)) to re-extract knowledge on demand from the single source of truth.
---

[^1]: GitClear (2025) - Analysis of 211 million lines of code (2020-2024) showing an 8-fold increase in duplicated code blocks in AI-generated code. Source: [LeadDev: How AI-generated code accelerates technical debt](https://leaddev.com/technical-direction/how-ai-generated-code-accelerates-technical-debt)

website/sidebars.ts

Lines changed: 1 addition & 0 deletions

```diff
@@ -44,6 +44,7 @@ const sidebars: SidebarsConfig = {
       'practical-techniques/lesson-8-tests-as-guardrails',
       'practical-techniques/lesson-9-reviewing-code',
       'practical-techniques/lesson-10-debugging',
+      'practical-techniques/lesson-11-agent-friendly-code',
     ],
   },
 ],
```
