website/docs/practical-techniques/lesson-9-reviewing-code.md (2 additions, 2 deletions)

@@ -110,7 +110,7 @@ This prompt demonstrates multiple techniques from [Lesson 4 (Prompting 101)](../
The instruction "Using the sub task tool to conserve context" spawns a separate agent for git history exploration, preventing the main orchestrator's context from filling with commit diffs. The sub-agent returns only synthesized findings. Without this, exploring 20-30 changed files consumes 40K+ tokens, pushing critical constraints into the U-shaped attention curve's ignored middle.
- This sub-agent capability is unique to Claude Code CLI. Other tools (Codex, GitHub Copilot) require splitting this into multiple sequential prompts: explore first, then draft based on findings.
+ This sub-agent capability is unique to [Claude Code CLI](/developer-tools/cli-coding-agents#claude-code). Other tools (Codex, GitHub Copilot) require splitting this into multiple sequential prompts: explore first, then draft based on findings.
**Multi-source grounding ([Lesson 5](../methodology/lesson-5-grounding.md#production-pattern-multi-source-grounding)):** ArguSeek researches PR best practices while ChunkHound grounds descriptions in your actual codebase architecture and coding style.
@@ -119,7 +119,7 @@ This sub-agent capability is unique to Claude Code CLI. Other tools (Codex, GitH
**Evidence requirements ([Lesson 7](./lesson-7-planning-execution.md#require-evidence-to-force-grounding)):** The prompt forces grounding through "explore the changes" and "learn the architecture"—the agent cannot draft accurate descriptions without reading actual commits and code.
:::tip Reference
- See the complete prompt template with workflow integration tips: [Dual-Optimized PR Description](/prompts/pull-requests/dual-optimized-pr)
+ See the complete prompt template with workflow integration tips: [PR Description Generator](/prompts/pull-requests/dual-optimized-pr)
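The first hunk above keeps the note that sub-agents conserve context: exploring 20-30 changed files directly can consume 40K+ tokens. A minimal sketch for gauging that cost on your own branch, assuming the feature branch is checked out, `main` is the base branch, and word count as only a crude proxy for tokens:

```bash
# Gauge how much diff context a reviewing agent would ingest.
# Assumes the feature branch is checked out and `main` is the base branch.
git diff --shortstat main...HEAD   # files changed, insertions, deletions
git diff main...HEAD | wc -w       # word count as a crude token proxy
```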
website/prompts/pull-requests/ai-assisted-review.md (3 additions, 3 deletions)

@@ -9,11 +9,11 @@ import AIAssistedReview from '@site/shared-prompts/_ai-assisted-review.mdx';
### Overview
- **Why GitHub CLI integration prevents hallucination:** [Persona directive](/docs/methodology/lesson-4-prompting-101#assigning-personas) ("$PROJECT_NAME's maintainer") biases toward critical analysis and architectural awareness instead of generic approval patterns—reviewers defend system integrity, not PR authors. GitHub CLI integration (`gh pr view --comments`) loads PR metadata, discussion threads, and linked issues directly into context, providing [multi-source grounding](/docs/methodology/lesson-5-grounding#grounding-anchoring-agents-in-reality) beyond just code diff—identifies author intent through comments, catches breaking changes from issue discussions, surfaces previously raised concerns. Pasting AI-optimized description from [Dual-Optimized PR](/prompts/pull-requests/dual-optimized-pr) provides technical grounding (file paths, module responsibilities, breaking changes, implementation rationale) that contextualizes changes within project architecture. Evidence requirement ("never speculate...investigate files") implements explicit [grounding directive](/docs/methodology/lesson-5-grounding#grounding-anchoring-agents-in-reality)—forces code research tool usage before making claims, preventing hallucinated issues based on training patterns ("probably violates SOLID" → must cite specific violations with file:line evidence). [Chain of Draft (CoD)](/docs/practical-techniques/lesson-9-reviewing-code#mechanisms-at-work) maintains [structured reasoning](/docs/methodology/lesson-4-prompting-101#chain-of-thought-paving-a-clear-path) like Chain-of-Thought but with concise intermediate steps (5 words max per draft), reducing token consumption 60-80% while preserving reasoning quality—agent still thinks through architecture/quality/maintainability/reusability sequentially, just doesn't generate verbose explanations until final assessment after `####` separator. Critical Checks framework provides concrete [constraints](/docs/methodology/lesson-4-prompting-101#constraints-as-guardrails) for architectural validation: "Can existing code be extended?" forces reusability analysis, "Search the codebase for similar patterns" requires semantic search instead of assumptions, "Is this introducing duplication?" prevents incremental technical debt. Structured output format ([constrained format](/docs/methodology/lesson-4-prompting-101#constraints-as-guardrails)) with severity classification (Critical/Major/Minor) and file:line references ensures [evidence requirements](/docs/practical-techniques/lesson-7-planning-execution#require-evidence-to-force-grounding)—can't list issues without citing specific locations and code.
+ **Why two-step workflow with human validation:** Step 1 generates structured analysis using [Chain of Draft (CoD)](/docs/practical-techniques/lesson-9-reviewing-code#mechanisms-at-work) reasoning—5 words max per thinking step, reducing token consumption 60-80% while preserving reasoning quality. GitHub CLI integration (`gh pr view --comments`) provides [multi-source grounding](/docs/methodology/lesson-5-grounding#grounding-anchoring-agents-in-reality) beyond code diff: PR metadata, discussion threads, linked issues, author intent. The [persona directive](/docs/methodology/lesson-4-prompting-101#assigning-personas) ("maintainer") biases toward critical analysis rather than generic approval. Evidence requirement ("never speculate...investigate files") forces code research before claims, preventing hallucinated issues. **Between steps, you validate findings**—LLMs can be confidently wrong about architectural violations. Cross-check file:line references, challenge vague criticisms. Step 2 then transforms validated analysis into dual-audience output: HUMAN_REVIEW.md (concise, actionable) and AGENT_REVIEW.md (efficient technical context). This mirrors the [PR Description Generator](/prompts/pull-requests/dual-optimized-pr) pattern—same context continuation, not fresh analysis.
- **When to use—primary use cases:** Systematic PR review for architectural changes touching core modules, introducing new patterns, or significant refactoring where architectural consistency matters more than surface-level correctness. Most effective when PR author provides AI-optimized description (explicit file paths, module boundaries, breaking changes), though you can compensate with additional code research time if human-only description exists. Best used pre-merge as final validation in [Validate phase](/docs/methodology/lesson-3-high-level-methodology#the-four-phase-workflow), not during active development (use [Comprehensive Review](/prompts/code-review/comprehensive-review) for worktree changes). Critical: verify every issue the agent raises—LLMs can be confident and wrong, especially about architectural violations. Cross-check cited file:line references actually contain the code claimed. Challenge vague criticisms ("violates separation of concerns") by demanding specific evidence: which modules are coupled, what responsibilities are mixed, show exact lines. Use this pattern when human reviewers need LLM assistance for deep codebase searches (finding duplication, pattern conformance, similar implementations elsewhere)—GitHub CLI + code research combination excels at multi-file analysis humans find tedious. Less effective for trivial PRs (documentation-only, dependency updates, simple bug fixes) where review overhead exceeds value.
+ **When to use—primary use cases:** Systematic PR review for architectural changes touching core modules, introducing new patterns, or significant refactoring. Most effective with AI-optimized description from [PR Description Generator](/prompts/pull-requests/dual-optimized-pr) (explicit file paths, module boundaries, breaking changes). Best used pre-merge as final validation in [Validate phase](/docs/methodology/lesson-3-high-level-methodology#the-four-phase-workflow), not during active development (use [Comprehensive Review](/prompts/code-review/comprehensive-review) for worktree changes). The dual output files enable: HUMAN_REVIEW.md for maintainer discussion (1-3 paragraphs, praise + actionable focus), AGENT_REVIEW.md for downstream AI tools that may process the review. Less effective for trivial PRs (documentation-only, dependency updates, simple bug fixes) where review overhead exceeds value.
- **Prerequisites:** [GitHub CLI](https://cli.github.com/) (`gh`) installed, authenticated, and configured for target repository (run `gh auth status` to verify), [code research capabilities](https://chunkhound.github.io/) (semantic search across codebase for finding patterns, duplication, architectural violations), repository access with read permissions for all relevant files. Requires PR link (URL or number like `#123`), project name for persona context (`$PROJECT_NAME`), ideally AI-optimized description from [Dual-Optimized PR](/prompts/pull-requests/dual-optimized-pr) workflow (if unavailable, agent compensates with longer code research phase). Agent produces structured feedback with severity-classified issues, file:line citations, specific refactoring opportunities, and APPROVE/REQUEST CHANGES/REJECT decision. [Adapt Critical Checks for specialized review focus](/docs/practical-techniques/lesson-9-reviewing-code#mechanisms-at-work): security reviews (input validation boundaries—are user inputs sanitized? authentication/authorization checks—who can access this endpoint? credential handling—are secrets exposed? injection attack vectors—can SQL/XSS occur? OWASP Top 10 coverage), performance reviews (algorithmic complexity—is this O(n²) unavoidable? database query efficiency—N+1 queries or missing indexes? memory allocation patterns—unnecessary copies or leaks? I/O operations—blocking calls in hot paths? caching strategy—are expensive operations memoized?), accessibility reviews (semantic HTML structure—meaningful tags or div soup? keyboard navigation—can users tab through? ARIA labels—screen reader compatible? focus management—obvious visual indicators? color contrast ratios—WCAG AA/AAA compliance?).
+ **Prerequisites:** [GitHub CLI](https://cli.github.com/) (`gh`) installed and authenticated (`gh auth status`), [ChunkHound](https://chunkhound.github.io/) for codebase semantic search, [ArguSeek](https://github.com/ofrivera/ArguSeek) for learning human/LLM optimization patterns in Step 2. Requires PR link (URL or number), AI-optimized description from [PR Description Generator](/prompts/pull-requests/dual-optimized-pr) workflow. Outputs: Step 1 produces structured verdict (Summary/Strengths/Issues/Decision), Step 2 produces HUMAN_REVIEW.md and AGENT_REVIEW.md files. [Adapt Critical Checks for specialized focus](/docs/practical-techniques/lesson-9-reviewing-code#mechanisms-at-work): security (input validation, auth checks, credential handling, injection vectors), performance (complexity, N+1 queries, memory patterns, caching), accessibility (semantic HTML, keyboard nav, ARIA, contrast).
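The rewritten prerequisites above reduce to two concrete commands, both named in the docs. A minimal sketch, using a placeholder PR number `123`:

```bash
# Confirm the GitHub CLI is authenticated for the target repository.
gh auth status

# Load PR metadata, discussion threads, and review comments into context.
# 123 is a placeholder PR number.
gh pr view 123 --comments
```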
website/prompts/pull-requests/dual-optimized-pr.md (7 additions, 2 deletions)

@@ -1,5 +1,5 @@
---
- title: Dual-Optimized PR Description
+ title: PR Description Generator
sidebar_position: 1
---
@@ -13,7 +13,12 @@ import DualOptimizedPR from '@site/shared-prompts/_dual-optimized-pr.mdx';
**When to use—workflow integration:** Before submitting PRs with complex changesets (10+ files, multiple modules touched, cross-cutting concerns) or cross-team collaboration where reviewers lack deep module familiarity. Integrate into [four-phase workflow](/docs/methodology/lesson-3-high-level-methodology#the-four-phase-workflow): complete implementation → validate with tests → self-review for issues → fix discovered problems → generate dual descriptions → submit PR with both files. Be specific with `$CHANGES_DESC`—vague descriptions ("fix bugs", "update logic") produce generic output because [grounding](/docs/methodology/lesson-5-grounding#grounding-anchoring-agents-in-reality) requires concrete intent. Without specific change description, agent has no anchor to evaluate "what matters" in the git diff. Critical: if you manually edit generated descriptions post-generation, regenerate BOTH files—stale context in AI-optimized description causes [hallucinations during review](/docs/practical-techniques/lesson-9-reviewing-code) when architectural explanations contradict actual changes. For teams without AI reviewers yet, human-optimized output alone provides concise summaries that respect reviewer time.
- **Prerequisites:** [Sub-agent/task tool](/docs/understanding-the-tools/lesson-2-understanding-agents#the-stateless-advantage) (Claude Code CLI provides built-in Task tool—other platforms require manual context management via sequential prompts), [ArguSeek](https://github.com/ofrivera/ArguSeek) (web research for PR best practices), [ChunkHound](https://chunkhound.github.io/) (codebase research via multi-hop semantic search and iterative exploration), git history access with committed changes on feature branch. Requires base branch for comparison (typically `main` or `develop`), architecture documentation ([CLAUDE.md project context](/docs/practical-techniques/lesson-6-project-onboarding#the-context-file-ecosystem), AGENTS.md for agentic workflows). Agent generates two markdown files: **human-optimized** (1-3 paragraphs covering what changed, why it matters, breaking changes if any, value delivered) and **AI-optimized** (file paths with line numbers, module responsibilities, architectural patterns followed, boundary changes, testing coverage, edge cases addressed). [Adapt this pattern](/docs/practical-techniques/lesson-7-planning-execution) for other documentation needs: release notes (user-facing features vs technical changelog), incident postmortems (executive summary vs technical root cause analysis), design docs (stakeholder overview vs implementation deep-dive). See **[AI-Assisted PR Review](/prompts/pull-requests/ai-assisted-review)** for consuming these descriptions during review workflow.
+ **Prerequisites:** [Sub-agent/task tool](/docs/methodology/lesson-5-grounding#solution-2-sub-agents-for-context-isolation) ([Claude Code CLI](/developer-tools/cli-coding-agents#claude-code) provides built-in Task tool—other platforms require manual context management via sequential prompts), [ArguSeek](https://github.com/ofrivera/ArguSeek) (web research for PR best practices), [ChunkHound](https://chunkhound.github.io/) (codebase research via multi-hop semantic search and iterative exploration), git history access with committed changes on feature branch. Requires base branch for comparison (typically `main` or `develop`), architecture documentation ([CLAUDE.md project context](/docs/practical-techniques/lesson-6-project-onboarding#the-context-file-ecosystem), AGENTS.md for agentic workflows). Agent generates two markdown files: **human-optimized** (1-3 paragraphs covering what changed, why it matters, breaking changes if any, value delivered) and **AI-optimized** (file paths with line numbers, module responsibilities, architectural patterns followed, boundary changes, testing coverage, edge cases addressed). [Adapt this pattern](/docs/practical-techniques/lesson-7-planning-execution) for other documentation needs: release notes (user-facing features vs technical changelog), incident postmortems (executive summary vs technical root cause analysis), design docs (stakeholder overview vs implementation deep-dive).
+
+ **For actual review**, use these prompts with the generated artifacts:
+
+ - **[AI-Assisted PR Review](/prompts/pull-requests/ai-assisted-review)** — Review PRs using the AI-optimized description with GitHub CLI integration
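The updated prerequisites assume committed changes on a feature branch and a base branch for comparison. A minimal sketch of inspecting that raw material before running the generator, assuming `main` as the base branch:

```bash
# Inspect the raw material the description generator works from.
# Assumes the feature branch is checked out and `main` is the base branch.
git log --oneline main..HEAD    # commits unique to the feature branch
git diff --stat main...HEAD     # files and churn the descriptions must cover
```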