When you do control the logging, add targeted diagnostic statements preemptively when investigating bugs. Fifteen minutes writing specific log output beats hours of speculation. The agent can guide what to log based on its hypothesis—then analyze the new output immediately.
This insight transforms the debugging economics: AI makes it trivial to add diagnostic logs at dozens of strategic points—far more volume than humans would ever instrument manually—because the agent can generate and place them in minutes. Once the bug is verified fixed, the same agent systematically removes all temporary diagnostic statements, restoring code hygiene and baseline logging practices. What would be prohibitively tedious instrumentation for humans (add logs, analyze, remove logs) becomes a routine part of AI-assisted investigation, shifting debugging from "minimal instrumentation" to "evidence-rich exploration."
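A minimal sketch of what this looks like in practice (the order and discount objects, and the hypothesis, are hypothetical and purely illustrative): each temporary statement carries a `TEMP-DEBUG` marker so the agent can find and strip all of them in one pass once the fix is verified.

```python
import logging

logger = logging.getLogger(__name__)

def apply_discount(order, discount_code):
    # TEMP-DEBUG: hypothesis is that expired codes slip past validation
    logger.debug("TEMP-DEBUG apply_discount order_id=%s code=%s expires=%s",
                 order.id, discount_code.code, discount_code.expires_at)

    if discount_code.is_expired():
        # TEMP-DEBUG: confirm this branch actually runs for expired codes
        logger.debug("TEMP-DEBUG expired branch hit, order_id=%s", order.id)
        return order.total

    total = order.total * (1 - discount_code.rate)
    # TEMP-DEBUG: capture the computed total before it is persisted
    logger.debug("TEMP-DEBUG computed total=%s for order_id=%s", total, order.id)
    return total
```

After the fix is confirmed, a single search for the marker is enough for the agent to remove every temporary statement and restore baseline logging.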
## Reproduction Scripts: Code is Cheap
When code inspection and log analysis aren't sufficient—when you need bulletproof evidence or must reproduce complex state/timing conditions—reproduction scripts become invaluable. This is where AI agents' massive code generation capabilities shine: environments that take humans hours to set up (K8s, Docker configs, database snapshots, mock services, state initialization) take AI minutes to generate.
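As a sketch of how small such a script can be, here is a hypothetical lost-update race (the `Account` class and the numbers are invented for illustration, not taken from any real system); running it prints hard evidence the agent can iterate against:

```python
import sys
import threading

sys.setswitchinterval(1e-6)  # force frequent thread switches to surface the race

class Account:
    """Stand-in for the component under investigation: a non-atomic read-modify-write."""
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        current = self.balance   # read
        current += amount        # modify
        self.balance = current   # write: updates can be lost between these steps

def worker(account, iterations):
    for _ in range(iterations):
        account.deposit(1)

def reproduce(iterations=100_000, workers=4):
    account = Account()
    threads = [threading.Thread(target=worker, args=(account, iterations))
               for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    expected = iterations * workers
    print(f"expected={expected} actual={account.balance} lost={expected - account.balance}")

if __name__ == "__main__":
    reproduce()
```

The same pattern scales up: the agent can just as quickly generate a Docker config, seed a database snapshot, or stub a mock service when the bug needs richer state.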
website/docs/practical-techniques/lesson-7-planning-execution.md (12 additions, 0 deletions)
Watch for these phrases during plan review—they signal the agent is inventing instead of discovering existing code:
- "Build error handling logic..." (What about existing error patterns?)
- "Add validation for..." (Check for existing validation schemas first)
## Checkpointing: Your Safety Net
Agents make mistakes frequently—especially while you're learning effective grounding and prompting patterns. The good news: as your skills improve, the need for rollbacks decreases dramatically. You'll naturally write better prompts, catch issues during plan review, and guide agents more effectively. But even experienced practitioners value checkpointing as a safety net. The difference between a frustrating session and a productive one comes down to how quickly you can roll back when things go wrong. Agentic coding is probabilistic—you need the ability to revert both conversation context and code changes when execution diverges from your intent.
Establish a checkpoint rhythm: create a restore point before risky operations, let the agent execute, validate results, then keep or revert. Modern AI coding tools (Claude Code, Cursor, VS Code Copilot, etc.) include built-in checkpointing features that make rollback seamless—this lets you experiment aggressively without gambling on irreversible changes. If your tool lacks checkpointing, commit far more frequently than in traditional development: after each successful increment, before risky operations, when changing direction, after manual corrections. This creates a safety net of verified checkpoints where each commit represents a known-good state you can return to instantly. The validation phase (covered in [Lesson 9](./lesson-9-reviewing-code.md)) determines whether you keep or discard changes—checkpointing makes that decision reversible.
:::tip Claude Code Checkpoints
Claude Code checkpoints your code automatically before each change. Press Esc twice (or run /rewind) to open the rewind menu, where you can restore the conversation, the code, or both, letting you experiment aggressively and revert instantly if needed.
:::
## Autonomous Execution: Parallel Workflows and Tool Setup
Once the plan is reviewed and grounding is solid, you can let the agent execute autonomously. For complex features, parallel execution across multiple agent instances dramatically accelerates development.
Pragmatism beats purism. These are all just tools—choose based on efficiency, not ideology.
- **Watch for invention over reuse during plan review** - Agents default to generating plausible code from training patterns instead of discovering existing code. Red flags: "create new utility," "implement helper." Intervention: Force discovery first with evidence requirements before allowing implementation.
- **Checkpoint before execution, commit after validation** - Use built-in checkpointing features when available (Claude Code, Copilot, Cursor). Without them, commit far more frequently than traditional development—after each successful increment, before risky operations. Agents make frequent mistakes; checkpointing makes iteration fast and reversible.
- **Git worktrees enable true parallel agent workflows** - Multiple working directories, separate branches, isolated agent contexts. Run 3 agent instances on different features simultaneously with zero interference.
- **Mix CLI and UI tools pragmatically** - IDEs for navigation, viewing and quick edits, CLI for refactors and parallel session management. Use the best tool for each task, not ideology.
website/docs/practical-techniques/lesson-9-reviewing-code.md (12 additions, 0 deletions)
This is the **Validate** phase from Lesson 3's four-phase workflow.
The key insight: **review in a fresh context, separate from where the code was written.** This prevents confirmation bias and leverages the stateless nature of agents from [Lessons 1](../understanding-the-tools/lesson-1-intro.md) and [2](../understanding-the-tools/lesson-2-understanding-agents.md). An agent reviewing its own work in the same conversation will defend its decisions. An agent in a fresh context analyzes objectively, without attachment to prior choices.
:::info Agent-Only vs Mixed Codebases: A Critical Distinction
The same engineering standards—DRY, YAGNI, architecture, maintainability, readability—apply to all codebases. What differs is coding style optimization and the review process:
**Agent-only codebases** are maintained exclusively by AI with minimal human intervention at the code level. Optimize coding style slightly toward AI clarity: more explicit type annotations, slightly more verbose documentation, detailed architectural context files ([Lesson 6](./lesson-6-project-onboarding.md)). Review question: "Will an agent understand this 6 months from now?"
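For illustration only, here is what that lean can look like on a small hypothetical helper (the names are invented): types are spelled out everywhere and the documentation states context a human teammate might treat as obvious.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class SessionToken:
    """An opaque bearer token plus the instant it stops being valid.

    Frozen so cached copies can never drift from the issued value.
    """
    value: str
    expires_at: datetime

def is_token_valid(token: SessionToken, now: datetime | None = None) -> bool:
    """Return True while the token has not yet expired.

    `now` is injectable so agents and tests can evaluate expiry
    deterministically instead of depending on wall-clock time.
    """
    current_time: datetime = now if now is not None else datetime.now(timezone.utc)
    return current_time < token.expires_at
```

A mixed codebase would likely trim this toward the brevity described next while keeping it navigable for agents.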
**Mixed codebases** balance human and AI collaboration where both work directly with code. Optimize coding style for human brevity while maintaining AI navigability. **Most production codebases fall into this category.**
**Critical difference in mixed codebases:** Add a manual review step where you fully read and audit AI-generated code before committing to ensure human readability. This is non-negotiable—without explicit project rules guiding style, agents generate code following patterns from their training data that may not match your team's readability standards. Tune your project rules ([Lesson 6](./lesson-6-project-onboarding.md)) to guide agents toward the writing style humans expect, then verify the output meets those expectations.
:::
## The Review Prompt Template
This template integrates techniques from [Lesson 4: Prompting 101](../methodology/lesson-4-prompting-101.md). Understanding **why** each element exists lets you adapt this pattern for other review tasks (security audits, performance analysis, architectural review).