
Commit f75e815

ofriw and claude committed
Add standalone prompt documentation pages with detailed guidance
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
1 parent 8530553 commit f75e815

8 files changed, +192 −0 lines changed

Lines changed: 25 additions & 0 deletions
@@ -0,0 +1,25 @@
---
title: Comprehensive Code Review
sidebar_position: 1
---

import ComprehensiveReview from '@site/shared-prompts/_comprehensive-review.mdx';

<ComprehensiveReview />

### Overview
**Why the four-category framework works:** The [persona directive](/docs/methodology/lesson-4-prompting-101#assigning-personas) ("expert code reviewer") biases vocabulary toward critical analysis instead of descriptive summarization—"violates single responsibility" rather than "this function does multiple things." An explicit change description (`$DESCRIBE_CHANGES`) anchors [grounding](/docs/methodology/lesson-5-grounding#grounding-anchoring-agents-in-reality) by framing intent, enabling detection of misalignment between goals and execution (intended to add caching, actually introduced side effects). The sequential numbered structure implements [Chain-of-Thought](/docs/methodology/lesson-4-prompting-101#chain-of-thought-paving-a-clear-path) reasoning across review dimensions, preventing premature conclusions—you can't evaluate maintainability without first understanding architecture. The grounding directive ("Use ChunkHound") forces actual codebase investigation instead of hallucinating patterns from training data. The "DO NOT EDIT" [constraint](/docs/methodology/lesson-4-prompting-101#constraints-as-guardrails) maintains separation between review and implementation phases, preventing premature fixes before analysis is complete. Four categories ensure systematic coverage: Architecture (structural correctness, pattern conformance, module boundaries), Code Quality (readability, style consistency, KISS adherence), Maintainability (future LLM comprehension, documentation sync, intent clarity), and UX (meaningful enhancements, simplicity-value balance).
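
To see how these elements combine, here is a minimal sketch of the prompt's shape—illustrative wording only, not the canonical template rendered by `<ComprehensiveReview />` above:

```text
You are an expert code reviewer with deep knowledge of software architecture.

Changes under review: $DESCRIBE_CHANGES

Review the working tree changes in this order:
1. Architecture — structural correctness, pattern conformance, module boundaries
2. Code Quality — readability, style consistency, KISS adherence
3. Maintainability — future LLM comprehension, documentation sync, intent clarity
4. UX — meaningful enhancements, simplicity-value balance

Use ChunkHound to investigate how the surrounding codebase actually does things
before judging conformance. DO NOT EDIT any files — this is review only.
```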
**When to use—critical fresh-context requirement:** Always run in a [fresh context](/docs/understanding-the-tools/lesson-2-understanding-agents#the-stateless-advantage) separate from the conversation where the code was written—agents reviewing their own work in the same conversation defend prior decisions rather than provide objective analysis (confirmation bias from accumulated context). Use after implementation completion (Execute phase done), post-refactoring (architecture changes), and pre-PR submission ([Validate phase](/docs/methodology/lesson-3-high-level-methodology#the-four-phase-workflow)). Critical: be specific with `$DESCRIBE_CHANGES`—vague descriptions ("fix bugs", "update code") prevent alignment analysis between intent and implementation; effective descriptions specify the architectural goal ("add Redis caching layer to user service", "refactor authentication to use JWT tokens"). Review is iterative: review in fresh context → fix issues → run tests → re-review in a new conversation → repeat until green light or diminishing returns. Stop iterating when tests pass and remaining feedback is minor (3-4 cycles max)—excessive iteration introduces review-induced regressions where fixes address critique without improving functionality.
**Prerequisites:** [Code research capabilities](https://chunkhound.github.io/) (semantic search across the codebase, architectural pattern discovery, implementation reading), access to git working tree changes (`git diff`, `git status`), and project architecture documentation (CLAUDE.md, AGENTS.md, README). Requires an explicit description of intended changes (`$DESCRIBE_CHANGES`) and access to both the changed files and their surrounding context for pattern conformance. The agent provides structured feedback by category with file paths, line numbers, specific issues, and actionable recommendations ([evidence requirements](/docs/practical-techniques/lesson-7-planning-execution#require-evidence-to-force-grounding)). [Adapt the pattern for specialized reviews](/docs/practical-techniques/lesson-9-reviewing-code#mechanisms-at-work): security (attack surface mapping/input validation boundaries/authentication flows/credential handling/OWASP Top 10), performance (algorithmic complexity/database query efficiency/memory allocation/I/O operations/caching strategy), accessibility (semantic HTML structure/keyboard navigation/ARIA labels/screen reader compatibility/color contrast ratios), API design (REST conventions/error responses/versioning/backwards compatibility).
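
For reference, structured feedback from this prompt might look like the following—the file paths, line numbers, and findings here are hypothetical:

```text
Architecture
- src/services/user_service.py:142 — the new caching layer bypasses the
  repository boundary; route reads through UserRepository instead.

Maintainability
- src/services/user_service.py:98 — cache invalidation rules are undocumented;
  add a comment explaining the TTL choice.
```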

### Related Lessons

- **[Lesson 3: High-Level Methodology](/docs/methodology/lesson-3-high-level-methodology#the-four-phase-workflow)** - Four-phase workflow (Research > Plan > Execute > Validate) — review is the Validate phase
- **[Lesson 4: Prompting 101](/docs/methodology/lesson-4-prompting-101)** - [Persona directives](/docs/methodology/lesson-4-prompting-101#assigning-personas), [Chain-of-Thought](/docs/methodology/lesson-4-prompting-101#chain-of-thought-paving-a-clear-path), [constraints](/docs/methodology/lesson-4-prompting-101#constraints-as-guardrails), structured prompting
- **[Lesson 5: Grounding](/docs/methodology/lesson-5-grounding#grounding-anchoring-agents-in-reality)** - ChunkHound for codebase research, preventing hallucination
- **[Lesson 7: Planning & Execution](/docs/practical-techniques/lesson-7-planning-execution#require-evidence-to-force-grounding)** - Evidence requirements, grounding techniques
- **[Lesson 8: Tests as Guardrails](/docs/practical-techniques/lesson-8-tests-as-guardrails#preventing-specification-gaming-the-three-context-workflow)** - Fresh-context validation, preventing confirmation bias through the three-context workflow
- **[Lesson 9: Reviewing Code](/docs/practical-techniques/lesson-9-reviewing-code#mechanisms-at-work)** - Iterative review cycles, diminishing returns, mixed vs agent-only codebases
Lines changed: 23 additions & 0 deletions
@@ -0,0 +1,23 @@
---
title: Evidence-Based Debugging
sidebar_position: 1
---

import EvidenceBasedDebug from '@site/shared-prompts/_evidence-based-debug.mdx';

<EvidenceBasedDebug />

### Overview
**Why evidence requirements prevent hallucination:** "Provide evidence (file paths, line numbers, actual values)" is an explicit [grounding directive](/docs/methodology/lesson-5-grounding#grounding-anchoring-agents-in-reality)—agents cannot provide that evidence without retrieving it from the codebase. Without evidence requirements, agents produce pattern completion from training data ("probably a database timeout"), not analysis. Evidence forces codebase reading, execution tracing, and concrete citations. The INVESTIGATE/ANALYZE/EXPLAIN numbered steps implement [Chain-of-Thought](/docs/methodology/lesson-4-prompting-101#chain-of-thought-paving-a-clear-path), forcing sequential analysis (the agent can't jump to "root cause" without examining the error context first). "Use the code research" is an explicit retrieval directive—it prevents relying on training patterns. The fenced code block preserves error formatting and prevents the LLM from interpreting failure messages as instructions. Good evidence includes file paths with line numbers, actual variable/config values, specific function names, and complete stack traces—not vague assertions.
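
A minimal sketch of how these elements fit together—illustrative wording only, not the canonical template rendered by `<EvidenceBasedDebug />` above:

````text
1. INVESTIGATE: Use the code research tools to trace the code paths involved
   in the error below. Read the actual implementation — do not guess.
2. ANALYZE: Identify the root cause. Provide evidence: file paths, line
   numbers, actual variable/config values, and the relevant stack frames.
3. EXPLAIN: Walk through how that evidence produces the observed failure.

Error output:
```
[paste the complete, untruncated error output and stack trace here]
```
````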
**When to use—fresh context requirement:** Production errors with stack traces/logs, unexpected behavior in specific scenarios, silent failures requiring code path tracing, performance bottlenecks needing profiling analysis, architectural issues spanning multiple files. Critical: use in a separate conversation from implementation for unbiased analysis. This diagnostic pattern prevents the "cycle of self-deception" where agents defend their own implementation; running in a [fresh context](/docs/understanding-the-tools/lesson-2-understanding-agents#the-stateless-advantage) provides objective analysis without prior assumptions. Always provide complete error output—truncated logs prevent accurate diagnosis. Challenge agent explanations when they don't fit observed behavior: "You said X causes the timeout, but the logs show the connection established. Explain this discrepancy with evidence." Reject guesses without citations: "Show me the file and line number where this occurs."
**Prerequisites:** [Code research capabilities](https://chunkhound.github.io/) (deep codebase exploration via multi-hop semantic search, query expansion, and iterative follow-ups), file system access for reading implementation and configuration, complete error messages/stack traces/logs (not truncated output), and, optionally, file paths or function names if known. Verify all cited file paths and line numbers—agents can hallucinate locations. Use engineering judgment to validate the reasoning—LLMs complete patterns, not logic. [Adapt the pattern for other diagnostics](/docs/practical-techniques/lesson-10-debugging#the-closed-loop-debugging-workflow): performance issues (metrics/thresholds/profiling data), security vulnerabilities (attack vectors/boundaries/configuration gaps), deployment failures (infrastructure logs/expected vs actual state), integration issues (API contracts/data flow/boundary errors).
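
As a rule of thumb, accept only diagnoses of the first kind below and reject the second—the paths and values shown are hypothetical:

```text
Grounded: "Connection pool exhaustion: src/db/pool.py:87 sets max_size=5, but
src/workers/sync.py:142 opens one connection per task without releasing it —
the logs show all 5 connections held at the time of the timeout."

Ungrounded: "This is probably a database timeout; try increasing the timeout."
```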

### Related Lessons

- **[Lesson 4: Prompting 101](/docs/methodology/lesson-4-prompting-101#chain-of-thought-paving-a-clear-path)** - Chain-of-Thought, constraints, structured reasoning
- **[Lesson 5: Grounding](/docs/methodology/lesson-5-grounding#grounding-anchoring-agents-in-reality)** - Grounding directives, RAG, forcing retrieval
- **[Lesson 7: Planning & Execution](/docs/practical-techniques/lesson-7-planning-execution#require-evidence-to-force-grounding)** - Evidence requirements, challenging agent logic
- **[Lesson 10: Debugging](/docs/practical-techniques/lesson-10-debugging#the-closed-loop-debugging-workflow)** - Closed-loop workflow, reproduction scripts, evidence-based approach

website/prompts/index.md

Lines changed: 28 additions & 0 deletions
@@ -0,0 +1,28 @@
---
title: Prompt Library
sidebar_position: 0
---

# Prompt Library

Production-ready prompt templates for common SDLC workflows.

## Quick Reference

| SDLC Phase | Prompt | When to Use |
| ------------------ | ---------------------------------------------------------------------- | ------------------------------------------------ |
| **Onboarding** | [Generate AGENTS.md](/prompts/onboarding/generate-agents-md) | Bootstrap project context files automatically |
| **Planning** | [Edge Case Discovery](/prompts/testing/edge-case-discovery) | Before implementing features or writing tests |
| **Implementation** | [Evidence-Based Debug](/prompts/debugging/evidence-based-debug) | When debugging issues or unexpected behavior |
| **Testing** | [Test Failure Diagnosis](/prompts/testing/test-failure-diagnosis) | When tests fail during development or CI/CD |
| **Review** | [Comprehensive Code Review](/prompts/code-review/comprehensive-review) | Before committing changes or submitting PRs |
| **Pull Requests** | [Dual-Optimized PR](/prompts/pull-requests/dual-optimized-pr) | When creating pull requests for review |
| **Pull Requests** | [AI-Assisted PR Review](/prompts/pull-requests/ai-assisted-review) | When reviewing pull requests submitted by others |

## Usage

Adapt placeholders and domain-specific examples to your context. Verify agent output with evidence.
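
For example, adapting a placeholder might look like this (hypothetical project details):

```text
Template: Changes under review: $DESCRIBE_CHANGES
Adapted:  Changes under review: add a Redis caching layer to the user
          service — reads use cache-aside, writes invalidate by user ID
```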

## Contributing

These prompts evolve based on real-world usage. Improvements and domain-specific adaptations are welcome.
Lines changed: 23 additions & 0 deletions
@@ -0,0 +1,23 @@
---
title: Generate AGENTS.md
sidebar_position: 1
---

import GenerateAgentsMD from '@site/shared-prompts/_generate-agents-md.mdx';

<GenerateAgentsMD />

### Overview

**Why multi-source grounding works:** [ChunkHound](/docs/methodology/lesson-5-grounding#code-grounding-choosing-tools-by-scale) provides codebase-specific context (patterns, conventions, architecture) while [ArguSeek](/docs/methodology/lesson-5-grounding#arguseek-isolated-context--state) provides current ecosystem knowledge (framework best practices, security guidelines)—this implements [multi-source grounding](/docs/methodology/lesson-5-grounding#production-pattern-multi-source-grounding) to combine empirical project reality with ecosystem best practices. The [structured output format](/docs/methodology/lesson-4-prompting-101#applying-structure-to-prompts) with explicit sections ensures comprehensive coverage by forcing systematic enumeration instead of free-form narrative. The ≤500 line [conciseness constraint](/docs/methodology/lesson-4-prompting-101#constraints-as-guardrails) forces prioritization—without it, agents generate verbose documentation that gets ignored during actual use. The non-duplication directive keeps focus on AI-specific operational details agents can't easily infer from code alone (environment setup, non-interactive command modifications, deployment gotchas). This implements the [Research phase](/docs/methodology/lesson-3-high-level-methodology#phase-1-research-grounding) of the [four-phase workflow](/docs/methodology/lesson-3-high-level-methodology#the-four-phase-workflow), letting agents build their own foundation before tackling implementation tasks.
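
The structured output might resemble the following skeleton—the section names here are illustrative, and the actual prompt rendered by `<GenerateAgentsMD />` above defines the canonical format:

```markdown
# AGENTS.md

## Environment Setup
<!-- exact commands to install dependencies and run the project -->

## Commands
<!-- build/test/lint invocations, with non-interactive flags where needed -->

## Architecture & Conventions
<!-- module boundaries, naming patterns, error-handling style found in the code -->

## Gotchas
<!-- deployment quirks and constraints agents can't infer from the code alone -->
```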
**When to use this pattern:** New project onboarding (establish baseline context before first implementation task), documenting legacy projects (capture tribal knowledge systematically), refreshing context after architectural changes (re-run after migrations, framework upgrades, major refactors). Run early in project adoption to establish baseline [context files](/docs/practical-techniques/lesson-6-project-onboarding#the-context-file-ecosystem), re-run after major changes, then manually add tribal knowledge (production incidents, team conventions, non-obvious gotchas) that AI can't discover from code. Without initial context grounding, agents hallucinate conventions based on training patterns instead of reading your actual codebase—this manifests as style violations, incorrect assumptions about architecture, and ignored project-specific constraints.
**Prerequisites:** [ChunkHound code research](https://chunkhound.github.io/) (deep codebase exploration via multi-hop semantic search, query expansion, and iterative follow-ups), [ArguSeek web research](https://github.com/ArguSeek/arguseek) (ecosystem documentation and current best practices), and write access to the project root. Requires an existing codebase with source files and a README or basic documentation to avoid duplication. After generation, [validate by testing with a typical task](/docs/methodology/lesson-3-high-level-methodology#phase-4-validate-the-iteration-decision)—if the agent doesn't follow the documented conventions, the context file needs iteration. Without validation, you risk cementing incorrect assumptions into project context that compound across future tasks.
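
A validation pass can be as simple as a fresh-context task that exercises the documented conventions (hypothetical example):

```text
Using only AGENTS.md for project context, add a GET /users/:id/preferences
endpoint following the project's existing conventions.
```

If the result deviates from the codebase's actual patterns, the mismatches point at the sections of AGENTS.md that need iteration.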

### Related Lessons

- **[Lesson 3: High-Level Methodology](/docs/methodology/lesson-3-high-level-methodology#the-four-phase-workflow)** - Four-phase workflow (Research > Plan > Execute > Validate), iteration decisions
- **[Lesson 4: Prompting 101](/docs/methodology/lesson-4-prompting-101#applying-structure-to-prompts)** - Structured prompting, constraints as guardrails, information density
- **[Lesson 5: Grounding](/docs/methodology/lesson-5-grounding#production-pattern-multi-source-grounding)** - Multi-source grounding (ChunkHound + ArguSeek), semantic search, sub-agents
- **[Lesson 6: Project Onboarding](/docs/practical-techniques/lesson-6-project-onboarding#the-context-file-ecosystem)** - Context files (AGENTS.md, CLAUDE.md), hierarchical context, tribal knowledge
