
Commit 07d64ea

ofriw and claude committed
Update prompt templates with corrected lesson references
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
1 parent ed9966e commit 07d64ea

File tree: 5 files changed, +7 −7 lines changed


website/prompts/code-review/comprehensive-review.md

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@ import ComprehensiveReview from '@site/shared-prompts/_comprehensive-review.mdx

**Why four-category framework works:** [Persona directive](/docs/methodology/lesson-4-prompting-101#assigning-personas) ("expert code reviewer") biases vocabulary toward critical analysis instead of descriptive summarization—"violates single responsibility" vs "this function does multiple things." Explicit change description (`$DESCRIBE_CHANGES`) anchors [grounding](/docs/methodology/lesson-5-grounding#grounding-anchoring-agents-in-reality) by framing intent, enabling detection of misalignment between goals and execution (intended to add caching, actually introduced side effects). Sequential numbered structure implements [Chain-of-Thought](/docs/methodology/lesson-4-prompting-101#chain-of-thought-paving-a-clear-path) reasoning across review dimensions, preventing premature conclusions—can't evaluate maintainability without first understanding architecture. Grounding directive ("Use ChunkHound") forces actual codebase investigation instead of hallucinating patterns from training data. "DO NOT EDIT" [constraint](/docs/methodology/lesson-4-prompting-101#constraints-as-guardrails) maintains separation between review and implementation phases, preventing premature fixes before comprehensive analysis. Four categories ensure systematic coverage: Architecture (structural correctness, pattern conformance, module boundaries), Code Quality (readability, style consistency, KISS adherence), Maintainability (future LLM comprehension, documentation sync, intent clarity), UX (meaningful enhancements, simplicity-value balance).

Removed (line 14):
**When to use—critical fresh-context requirement:** Always run in [fresh context](/docs/understanding-the-tools/lesson-2-understanding-agents#the-stateless-advantage) separate from where code was written—agents reviewing their own work in the same conversation defend prior decisions rather than providing objective analysis (confirmation bias from accumulated context). Use after implementation completion (Execute phase done), post-refactoring (architecture changes), pre-PR submission ([Validate phase](/docs/methodology/lesson-3-high-level-methodology#the-four-phase-workflow)). Critical: be specific with `$DESCRIBE_CHANGES`—vague descriptions ("fix bugs", "update code") prevent alignment analysis between intent and implementation; effective descriptions specify the architectural goal ("add Redis caching layer to user service", "refactor authentication to use JWT tokens"). Review is iterative: review in fresh context → fix issues → run tests → re-review in new conversation → repeat until green light or diminishing returns. Stop iterating when tests pass and remaining feedback is minor (3-4 cycles max)—excessive iteration introduces review-induced regressions where fixes address critique without improving functionality.

Added (line 14):
**When to use—critical fresh-context requirement:** Always run in [fresh context](/docs/fundamentals/lesson-2-how-agents-work#the-stateless-advantage) separate from where code was written—agents reviewing their own work in the same conversation defend prior decisions rather than providing objective analysis (confirmation bias from accumulated context). Use after implementation completion (Execute phase done), post-refactoring (architecture changes), pre-PR submission ([Validate phase](/docs/methodology/lesson-3-high-level-methodology#the-four-phase-workflow)). Critical: be specific with `$DESCRIBE_CHANGES`—vague descriptions ("fix bugs", "update code") prevent alignment analysis between intent and implementation; effective descriptions specify the architectural goal ("add Redis caching layer to user service", "refactor authentication to use JWT tokens"). Review is iterative: review in fresh context → fix issues → run tests → re-review in new conversation → repeat until green light or diminishing returns. Stop iterating when tests pass and remaining feedback is minor (3-4 cycles max)—excessive iteration introduces review-induced regressions where fixes address critique without improving functionality.

**Prerequisites:** [Code research capabilities](https://chunkhound.github.io/) (semantic search across codebase, architectural pattern discovery, implementation reading), access to git working tree changes (`git diff`, `git status`), project architecture documentation (CLAUDE.md, AGENTS.md, README). Requires explicit description of intended changes (`$DESCRIBE_CHANGES`), access to both changed files and surrounding context for pattern conformance. Agent provides structured feedback by category with file paths, line numbers, specific issues, and actionable recommendations ([evidence requirements](/docs/practical-techniques/lesson-7-planning-execution#require-evidence-to-force-grounding)). [Adapt pattern for specialized reviews](/docs/practical-techniques/lesson-9-reviewing-code#mechanisms-at-work): security (attack surface mapping/input validation boundaries/authentication flows/credential handling/OWASP Top 10), performance (algorithmic complexity/database query efficiency/memory allocation/I/O operations/caching strategy), accessibility (semantic HTML structure/keyboard navigation/ARIA labels/screen reader compatibility/color contrast ratios), API design (REST conventions/error responses/versioning/backwards compatibility).
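
The shared `_comprehensive-review.mdx` template itself is not shown in this diff; as a rough sketch only (not the published prompt), the four-category pattern described above might be phrased along these lines:

```text
You are an expert code reviewer.

Changes under review: $DESCRIBE_CHANGES

1. Use ChunkHound to research the surrounding architecture and the patterns this
   project already follows.
2. Read the working-tree changes (git diff, git status) and the files they touch.
3. Review the changes in order, one category at a time:
   - Architecture: structural correctness, pattern conformance, module boundaries
   - Code Quality: readability, style consistency, KISS adherence
   - Maintainability: future LLM comprehension, documentation sync, intent clarity
   - UX: meaningful enhancements, simplicity-value balance
4. For every finding, cite the file path and line number and give an actionable
   recommendation.

DO NOT EDIT any files. This is a review-only pass.
```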

website/prompts/debugging/evidence-based-debug.md

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@ import EvidenceBasedDebug from '@site/shared-prompts/_evidence-based-debug.mdx'

**Why evidence requirements prevent hallucination:** "Provide evidence (file paths, line numbers, actual values)" is an explicit [grounding directive](/docs/methodology/lesson-5-grounding#grounding-anchoring-agents-in-reality)—agents cannot provide that evidence without retrieving it from the codebase. Without evidence requirements, agents produce pattern completion from training data ("probably a database timeout"), not analysis. Evidence forces codebase reading, execution tracing, and concrete citations. INVESTIGATE/ANALYZE/EXPLAIN numbered steps implement [Chain-of-Thought](/docs/methodology/lesson-4-prompting-101#chain-of-thought-paving-a-clear-path), forcing sequential analysis (can't jump to "root cause" without examining error context first). "Use the code research" is explicit retrieval directive—prevents relying on training patterns. Fenced code block preserves error formatting and prevents LLM from interpreting failure messages as instructions. Good evidence includes file paths with line numbers, actual variable/config values, specific function names, and complete stack traces—not vague assertions.

Removed (line 14):
**When to use—fresh context requirement:** Production errors with stack traces/logs, unexpected behavior in specific scenarios, silent failures requiring code path tracing, performance bottlenecks needing profiling analysis, architectural issues spanning multiple files. Critical: use in separate conversation from implementation for unbiased analysis. This diagnostic pattern prevents "cycle of self-deception" where agents defend their own implementation. Running in [fresh context](/docs/understanding-the-tools/lesson-2-understanding-agents#the-stateless-advantage) provides objective analysis without prior assumptions. Always provide complete error output—truncated logs prevent accurate diagnosis. Challenge agent explanations when they don't fit observed behavior: "You said X causes timeout, but logs show connection established. Explain this discrepancy with evidence." Reject guesses without citations: "Show me the file and line number where this occurs."

Added (line 14):
**When to use—fresh context requirement:** Production errors with stack traces/logs, unexpected behavior in specific scenarios, silent failures requiring code path tracing, performance bottlenecks needing profiling analysis, architectural issues spanning multiple files. Critical: use in separate conversation from implementation for unbiased analysis. This diagnostic pattern prevents "cycle of self-deception" where agents defend their own implementation. Running in [fresh context](/docs/fundamentals/lesson-2-how-agents-work#the-stateless-advantage) provides objective analysis without prior assumptions. Always provide complete error output—truncated logs prevent accurate diagnosis. Challenge agent explanations when they don't fit observed behavior: "You said X causes timeout, but logs show connection established. Explain this discrepancy with evidence." Reject guesses without citations: "Show me the file and line number where this occurs."

**Prerequisites:** [Code research capabilities](https://chunkhound.github.io/) (deep codebase exploration via multi-hop semantic search, query expansion, and iterative follow-ups), file system access for reading implementation and configuration, complete error messages/stack traces/logs (not truncated output), optionally: file paths or function names if known. Verify all cited file paths and line numbers—agents can hallucinate locations. Use engineering judgment to validate reasoning—LLMs complete patterns, not logic. [Adapt pattern for other diagnostics](/docs/practical-techniques/lesson-10-debugging#the-closed-loop-debugging-workflow): performance issues (metrics/thresholds/profiling data), security vulnerabilities (attack vectors/boundaries/configuration gaps), deployment failures (infrastructure logs/expected vs actual state), integration issues (API contracts/data flow/boundary errors).
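
Likewise a rough sketch rather than the actual `_evidence-based-debug.mdx` template, the INVESTIGATE/ANALYZE/EXPLAIN structure with its evidence requirement could look something like this (`$ERROR_OUTPUT` is an illustrative placeholder, not a variable from the template):

```text
1. INVESTIGATE: Use the code research tools to locate the code paths involved in the
   error below. Read the actual implementation and configuration; do not guess.
2. ANALYZE: Trace how the failing values flow through those paths. Provide evidence
   for every claim: file paths, line numbers, actual variable and config values.
3. EXPLAIN: State the root cause and why the observed behavior follows from it. If
   the evidence is inconclusive, say what additional data is needed instead of
   speculating.

Error output (paste complete and untruncated, inside a fenced code block):
$ERROR_OUTPUT
```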

website/prompts/pull-requests/dual-optimized-pr.md

Lines changed: 2 additions & 2 deletions
@@ -9,7 +9,7 @@ import DualOptimizedPR from '@site/shared-prompts/_dual-optimized-pr.mdx';

### Overview

Removed (line 12):
**Why dual-audience optimization works:** [Sub-agents](/docs/understanding-the-tools/lesson-2-understanding-agents#the-stateless-advantage) conserve context by spawning separate agents to explore git history—without this delegation, 20-30 file changes consume 40K+ tokens, pushing critical constraints into the [attention curve's ignored middle](/docs/methodology/lesson-5-grounding#the-scale-problem-context-window-limits). [Multi-source grounding](/docs/methodology/lesson-5-grounding#production-pattern-multi-source-grounding) combines ArguSeek (PR best practices from GitHub docs and engineering blogs) with ChunkHound (project-specific architecture patterns, module responsibilities), preventing generic advice divorced from your codebase realities. The "co-worker" [persona framing](/docs/methodology/lesson-4-prompting-101#assigning-personas) with explicit style [constraints](/docs/methodology/lesson-4-prompting-101#constraints-as-guardrails) (direct, concise, assume competence, skip obvious) prevents verbose explanations that waste reviewer attention. Dual constraints balance audiences: "1-3 paragraphs max" for humans prevents overwhelming maintainers with walls of text, while "explain efficiently" keeps AI context comprehensive but structured—critical because [AI reviewers](/prompts/pull-requests/ai-assisted-review) need architectural context (file relationships, module boundaries) that humans infer from experience.

Added (line 12):
**Why dual-audience optimization works:** [Sub-agents](/docs/fundamentals/lesson-2-how-agents-work#the-stateless-advantage) conserve context by spawning separate agents to explore git history—without this delegation, 20-30 file changes consume 40K+ tokens, pushing critical constraints into the [attention curve's ignored middle](/docs/methodology/lesson-5-grounding#the-scale-problem-context-window-limits). [Multi-source grounding](/docs/methodology/lesson-5-grounding#production-pattern-multi-source-grounding) combines ArguSeek (PR best practices from GitHub docs and engineering blogs) with ChunkHound (project-specific architecture patterns, module responsibilities), preventing generic advice divorced from your codebase realities. The "co-worker" [persona framing](/docs/methodology/lesson-4-prompting-101#assigning-personas) with explicit style [constraints](/docs/methodology/lesson-4-prompting-101#constraints-as-guardrails) (direct, concise, assume competence, skip obvious) prevents verbose explanations that waste reviewer attention. Dual constraints balance audiences: "1-3 paragraphs max" for humans prevents overwhelming maintainers with walls of text, while "explain efficiently" keeps AI context comprehensive but structured—critical because [AI reviewers](/prompts/pull-requests/ai-assisted-review) need architectural context (file relationships, module boundaries) that humans infer from experience.

**When to use—workflow integration:** Before submitting PRs with complex changesets (10+ files, multiple modules touched, cross-cutting concerns) or cross-team collaboration where reviewers lack deep module familiarity. Integrate into [four-phase workflow](/docs/methodology/lesson-3-high-level-methodology#the-four-phase-workflow): complete implementation → validate with tests → self-review for issues → fix discovered problems → generate dual descriptions → submit PR with both files. Be specific with `$CHANGES_DESC`—vague descriptions ("fix bugs", "update logic") produce generic output because [grounding](/docs/methodology/lesson-5-grounding#grounding-anchoring-agents-in-reality) requires concrete intent. Without specific change description, agent has no anchor to evaluate "what matters" in the git diff. Critical: if you manually edit generated descriptions post-generation, regenerate BOTH files—stale context in AI-optimized description causes [hallucinations during review](/docs/practical-techniques/lesson-9-reviewing-code) when architectural explanations contradict actual changes. For teams without AI reviewers yet, human-optimized output alone provides concise summaries that respect reviewer time.

@@ -22,7 +22,7 @@ import DualOptimizedPR from '@site/shared-prompts/_dual-optimized-pr.mdx';

### Related Lessons

Removed (line 25):
- **[Lesson 2: Understanding Agents](/docs/understanding-the-tools/lesson-2-understanding-agents#the-stateless-advantage)** - Sub-agents, task delegation, context conservation

Added (line 25):
- **[Lesson 2: Agents Demystified](/docs/fundamentals/lesson-2-how-agents-work#the-stateless-advantage)** - Sub-agents, task delegation, context conservation
- **[Lesson 4: Prompting 101](/docs/methodology/lesson-4-prompting-101#assigning-personas)** - Persona, constraints, attention curves
- **[Lesson 5: Grounding](/docs/methodology/lesson-5-grounding#production-pattern-multi-source-grounding)** - Multi-source grounding, preventing hallucination
- **[Lesson 9: Reviewing Code](/docs/practical-techniques/lesson-9-reviewing-code)** - Dual-audience optimization, PR workflows, AI-assisted review
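
The `_dual-optimized-pr.mdx` template is also not part of this diff; a minimal sketch of the dual-audience pattern the overview above describes might read:

```text
Write this PR description as a co-worker would: direct, concise, assume competence,
skip the obvious.

Changes: $CHANGES_DESC

1. Spawn sub-agents to explore the relevant git history and report back summaries;
   do not pull the full multi-file diff into this context.
2. Ground the description in two sources: ArguSeek for PR best practices, ChunkHound
   for this project's architecture patterns and module responsibilities.
3. Produce two outputs:
   - Human-optimized: 1-3 paragraphs max covering what changed, why, and where
     reviewers should focus.
   - AI-optimized: efficient but complete architectural context (file relationships,
     module boundaries) for an AI reviewer.
```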

website/prompts/testing/edge-case-discovery.md

Lines changed: 2 additions & 2 deletions
@@ -9,9 +9,9 @@ import EdgeCaseDiscovery from '@site/shared-prompts/_edge-case-discovery.mdx';

### Overview

Removed (line 12):
**Why two-step pattern prevents generic advice:** Step 1 loads concrete constraints—agent searches for function, reads implementation, finds existing tests. This populates context with actual edge cases from your codebase ("OAuth users skip email verification," "admin users bypass rate limits"). Step 2 identifies gaps—with implementation in context, agent analyzes what's NOT tested rather than listing generic test categories. [Grounding directives](/docs/methodology/lesson-5-grounding#grounding-anchoring-agents-in-reality) force codebase search before suggesting tests. Existing tests reveal coverage patterns and domain-specific edge cases. Implementation details expose actual failure modes, not hypothetical ones. Prevents [specification gaming](/docs/practical-techniques/lesson-8-tests-as-guardrails#the-three-context-workflow) by discovering edge cases separately from implementation—similar to [fresh context requirement](/docs/understanding-the-tools/lesson-2-understanding-agents#the-stateless-advantage) for objective analysis.

Added (line 12):
**Why two-step pattern prevents generic advice:** Step 1 loads concrete constraints—agent searches for function, reads implementation, finds existing tests. This populates context with actual edge cases from your codebase ("OAuth users skip email verification," "admin users bypass rate limits"). Step 2 identifies gaps—with implementation in context, agent analyzes what's NOT tested rather than listing generic test categories. [Grounding directives](/docs/methodology/lesson-5-grounding#grounding-anchoring-agents-in-reality) force codebase search before suggesting tests. Existing tests reveal coverage patterns and domain-specific edge cases. Implementation details expose actual failure modes, not hypothetical ones. Prevents [specification gaming](/docs/practical-techniques/lesson-8-tests-as-guardrails#the-three-context-workflow) by discovering edge cases separately from implementation—similar to [fresh context requirement](/docs/fundamentals/lesson-2-how-agents-work#the-stateless-advantage) for objective analysis.

Removed (line 14):
**When to use—research-first requirement:** Before implementing new features (discover existing patterns), test-driven development (identify edge cases before implementation), increasing coverage (find gaps in existing suites), refactoring legacy code (understand implicit edge case handling), code review (validate PRs include relevant tests). Critical: Don't skip Step 1—asking directly "what edge cases should I test?" produces generic advice without codebase grounding. Be specific in Step 2 with domain-relevant categories (see examples in prompt). If you generate edge cases and implementation in same conversation, tests will match implementation assumptions—use this pattern in [fresh context](/docs/understanding-the-tools/lesson-2-understanding-agents#the-stateless-advantage) or before implementation. Without [grounding](/docs/methodology/lesson-5-grounding#grounding-anchoring-agents-in-reality), agents hallucinate based on training patterns instead of analyzing your actual code.

Added (line 14):
**When to use—research-first requirement:** Before implementing new features (discover existing patterns), test-driven development (identify edge cases before implementation), increasing coverage (find gaps in existing suites), refactoring legacy code (understand implicit edge case handling), code review (validate PRs include relevant tests). Critical: Don't skip Step 1—asking directly "what edge cases should I test?" produces generic advice without codebase grounding. Be specific in Step 2 with domain-relevant categories (see examples in prompt). If you generate edge cases and implementation in same conversation, tests will match implementation assumptions—use this pattern in [fresh context](/docs/fundamentals/lesson-2-how-agents-work#the-stateless-advantage) or before implementation. Without [grounding](/docs/methodology/lesson-5-grounding#grounding-anchoring-agents-in-reality), agents hallucinate based on training patterns instead of analyzing your actual code.

**Prerequisites:** [Code research capabilities](https://chunkhound.github.io/) (deep codebase exploration via multi-hop semantic search, query expansion, and iterative follow-ups), access to implementation files and existing test suites, function/module name to analyze. After Step 1, agent provides implementation summary with file paths, currently tested edge cases with evidence from test files, special handling logic and conditional branches. After Step 2, agent identifies untested code paths with line numbers, missing edge case coverage with concrete examples from your domain, potential failure modes based on implementation analysis.
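
As with the other files, the `_edge-case-discovery.mdx` template is not shown here; a rough sketch of the two-step pattern (`$FUNCTION_NAME` and `$DOMAIN_CATEGORIES` are illustrative placeholders) could be:

```text
Step 1 - ground in the codebase:
Search for $FUNCTION_NAME, read its implementation, and find its existing tests.
Summarize what it does (with file paths), which edge cases are already covered
(with evidence from the test files), and any special-case handling or conditional
branches.

Step 2 - identify gaps (send only after Step 1 completes):
Given that implementation and its current coverage, which edge cases are NOT tested?
Focus on $DOMAIN_CATEGORIES (e.g. auth states, permission boundaries, rate limits).
For each gap, cite the untested code path (file and line number) and give a concrete
example input from this domain.
```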
