Merged
6 changes: 3 additions & 3 deletions .agent/rules/end-to-end-tests/end-to-end-tests.md
@@ -9,18 +9,18 @@ These rules outline the structure, patterns, and best practices for writing end-

## Implementation

1. Use the **e2e MCP tool** to run end-to-end tests with these options:
1. Use the **end-to-end MCP tool** to run end-to-end tests with these options:
- Test filtering: smoke tests only, specific browser, search terms
- Change scoping: last failed tests, only changed tests
- Flaky test detection: repeat tests, retry on failure, stop on first failure
- Performance: debug timing to see step execution times
- **Note**: The **e2e MCP tool** always runs with quiet mode automatically
- **Note**: The **end-to-end MCP tool** always runs with quiet mode automatically

2. Test Search and Filtering:
- Search by test tags: smoke, comprehensive
- Search by test content: find tests containing specific text
- Search by filename: find specific test files
- Multiple search terms: `e2e(searchTerms=["user", "management"])`
- Multiple search terms: `end-to-end(searchTerms=["user", "management"])`
- The tool automatically detects which self-contained systems contain matching tests and only runs those

3. Test-Driven Debugging Process:
6 changes: 3 additions & 3 deletions .agent/workflows/process/implement-end-to-end-tests.md
@@ -127,9 +127,9 @@ Research the codebase to find similar E2E test implementations. Look for existin

**STEP 7**: Run tests and verify they pass

- Use **e2e MCP tool** to run your tests
- Start with smoke tests: `e2e(smoke=true)`
- Then run comprehensive tests with search terms: `e2e(searchTerms=["feature-name"])`
- Use **end-to-end MCP tool** to run your tests
- Start with smoke tests: `end-to-end(smoke=true)`
- Then run comprehensive tests with search terms: `end-to-end(searchTerms=["feature-name"])`
- All tests must pass before proceeding
- If tests fail: Fix them and run again (don't proceed with failing tests)
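The STEP 7 gate (smoke first, then comprehensive, never proceed on red) can be sketched as a plain script. Note this is an illustration only: `run_e2e` is a stub standing in for the **end-to-end MCP tool**, which is invoked from the agent, not from a shell.

```shell
#!/bin/sh
# Sketch of STEP 7's gating: smoke tests must pass before the comprehensive
# run, and any failure blocks progress. run_e2e is a placeholder stub that
# echoes its arguments and "passes"; the real tool is an MCP call.
run_e2e() { echo "end-to-end $*"; }

if ! run_e2e "smoke=true"; then
  echo "smoke tests failed - fix and rerun before proceeding"
  exit 1
fi
if ! run_e2e 'searchTerms=["feature-name"]'; then
  echo "comprehensive tests failed - fix and rerun before proceeding"
  exit 1
fi
echo "all tests passed - safe to proceed"
```

The point of the ordering is cheap feedback: a broken smoke run fails fast before the longer comprehensive suite is attempted.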

4 changes: 2 additions & 2 deletions .agent/workflows/process/review-end-to-end-tests.md
@@ -83,7 +83,7 @@ You are reviewing: **{{{title}}}**
- The tool starts .NET Aspire at https://localhost:9000

**Run E2E tests**:
- Use **e2e MCP tool** to run tests: `e2e(searchTerms=["feature-name"])`
- Use **end-to-end MCP tool** to run tests: `end-to-end(searchTerms=["feature-name"])`
- **ALL tests MUST pass with ZERO failures to approve**
- **Verify ZERO console errors** during test execution
- **Verify ZERO network errors** (no unexpected 4xx/5xx responses)
@@ -180,7 +180,7 @@ Don't use `git add -A` or `git add .`
- `[requirements]` — Requirements clarity, acceptance criteria, task description
- `[code]` — Code patterns, rules, architecture guidance

Examples: `[system] e2e MCP tool reported test passed but it actually failed` or `[requirements] Feature requirements didn't specify mobile viewport testing`
Examples: `[system] end-to-end MCP tool reported test passed but it actually failed` or `[requirements] Feature requirements didn't specify mobile viewport testing`

⚠️ Your session terminates IMMEDIATELY after calling CompleteWork.

29 changes: 7 additions & 22 deletions .agent/workflows/process/review-task.md
@@ -150,7 +150,7 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie

**Collaborate with your team**: For complex problems or design questions, engage in conversation with engineers or other reviewers. Better solutions often emerge from team collaboration.

**STEP 3**: Run validation tools in parallel (format, test, inspect)
**STEP 3**: Run validation tools

**Zero tolerance for issues**:
- We deploy to production after review - quality is non-negotiable.
@@ -164,18 +164,9 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie

**For backend-reviewer** (validates all self-contained systems to catch cross-self-contained-system breakage):

1. Run **build** first for all self-contained systems (backend AND frontend):
- Use execute_command MCP tool: `command: "build"`.
- DO NOT run in parallel.
1. Run **build**, **format**, **test**, **inspect** following the global tool execution instructions.

2. Run **format**, **test**, **inspect** in parallel:
- Call all three MCP tools in a single message:
- `execute_command(command: "format", noBuild: true)`
- `execute_command(command: "test", noBuild: true)`
- `execute_command(command: "inspect", noBuild: true)`
- All three run simultaneously and return together.

3. Handle validation results:
2. Handle validation results:
- **If NO parallel work notification in request**: REJECT if ANY failures found (zero tolerance).
- **If parallel work notification present** (e.g., "⚠️ Parallel Work: Frontend-engineer..."):
- REJECT if backend failures found (Core/, Api/, Tests/, Database/).
@@ -184,13 +175,9 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie

**For frontend-reviewer** (validates frontend only):

1. Run **build** for frontend: `execute_command(command: "build", frontend: true)`.

2. Run **format** for all self-contained systems: `execute_command(command: "format", frontend: true)`.
1. Run **build**, **format**, **inspect** for frontend following the global tool execution instructions.

3. Run **inspect** for all self-contained systems: `execute_command(command: "inspect", frontend: true)`.

4. Handle validation results:
2. Handle validation results:
- **If NO parallel work notification in request**: REJECT if ANY failures found (zero tolerance).
- **If parallel work notification present** (e.g., "⚠️ Parallel Work: Backend-engineer..."):
- REJECT if frontend failures found (WebApp/).
@@ -199,11 +186,9 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie

**For qa-reviewer** (validates E2E tests):

1. Run **build** for frontend: `execute_command(command: "build", frontend: true)`.

2. Run **e2e** tests (run in background, monitor output).
1. Run **build** for frontend, then run **end-to-end** tests following the global tool execution instructions.

3. REJECT if ANY failures found (zero tolerance).
2. REJECT if ANY failures found (zero tolerance).

**If validation fails with errors unrelated to engineer's changes**:
- Check `git log --oneline` for recent parallel engineer commits.
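The backend-reviewer sequence above (build first, then the remaining validators, with zero tolerance for failures) can be sketched as a script. The `build`/`format`/`test`/`inspect` names mirror the MCP tools but the `run_tool` stub is an assumption for illustration, not a real CLI:

```shell
#!/bin/sh
# Sketch of the sequential validation flow: build must succeed before the
# other validators run, and any single failure rejects the review.
# run_tool is a stub; in practice each step is an MCP tool call.
run_tool() { echo "running $1"; }

validate() {
  run_tool build || { echo "REJECT: build failed"; return 1; }
  for tool in format test inspect; do
    run_tool "$tool" || { echo "REJECT: $tool failed"; return 1; }
  done
  echo "all validations passed"
}

validate
```

Running build first matters because format, test, and inspect all assume compiled artifacts; a broken build would make their failures noise rather than signal.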
2 changes: 1 addition & 1 deletion .claude/agentic-workflow/system-prompts/qa-engineer.txt
@@ -46,7 +46,7 @@ Push back on reviewer suggestions with evidence when you have more context about

You always follow your proven systematic workflow that ensures proper rule adherence and quality test implementation.

**Important**: You must run tests and verify they pass before completing. Use watch MCP tool to apply migrations, then e2e MCP tool to run tests. Never complete without passing tests.
**Important**: You must run tests and verify they pass before completing. Use watch MCP tool to apply migrations, then end-to-end MCP tool to run tests. Never complete without passing tests.

⚠️ Problem reporting - report aggressively:

2 changes: 1 addition & 1 deletion .claude/agentic-workflow/system-prompts/qa-reviewer.txt
@@ -42,7 +42,7 @@ You understand the review cycle and must complete a review every time you receiv

Never assume work is done just because you reviewed it before. Always re-evaluate the latest code and call CompleteWork.

**Important**: Never approve tests without running them. Use watch MCP tool to apply migrations, then e2e MCP tool to run tests. All tests must pass to approve. Reject if any sleep statements found.
**Important**: Never approve tests without running them. Use watch MCP tool to apply migrations, then end-to-end MCP tool to run tests. All tests must pass to approve. Reject if any sleep statements found.

⚠️ Problem reporting - report aggressively:

6 changes: 3 additions & 3 deletions .claude/commands/process/implement-end-to-end-tests.md
@@ -132,9 +132,9 @@ Research the codebase to find similar E2E test implementations. Look for existin

**STEP 7**: Run tests and verify they pass

- Use **e2e MCP tool** to run your tests
- Start with smoke tests: `e2e(smoke=true)`
- Then run comprehensive tests with search terms: `e2e(searchTerms=["feature-name"])`
- Use **end-to-end MCP tool** to run your tests
- Start with smoke tests: `end-to-end(smoke=true)`
- Then run comprehensive tests with search terms: `end-to-end(searchTerms=["feature-name"])`
- All tests must pass before proceeding
- If tests fail: Fix them and run again (don't proceed with failing tests)

4 changes: 2 additions & 2 deletions .claude/commands/process/review-end-to-end-tests.md
@@ -88,7 +88,7 @@ You are reviewing: **{{{title}}}**
- The tool starts .NET Aspire at https://localhost:9000

**Run E2E tests**:
- Use **e2e MCP tool** to run tests: `e2e(searchTerms=["feature-name"])`
- Use **end-to-end MCP tool** to run tests: `end-to-end(searchTerms=["feature-name"])`
- **ALL tests MUST pass with ZERO failures to approve**
- **Verify ZERO console errors** during test execution
- **Verify ZERO network errors** (no unexpected 4xx/5xx responses)
@@ -185,7 +185,7 @@ Don't use `git add -A` or `git add .`
- `[requirements]` — Requirements clarity, acceptance criteria, task description
- `[code]` — Code patterns, rules, architecture guidance

Examples: `[system] e2e MCP tool reported test passed but it actually failed` or `[requirements] Feature requirements didn't specify mobile viewport testing`
Examples: `[system] end-to-end MCP tool reported test passed but it actually failed` or `[requirements] Feature requirements didn't specify mobile viewport testing`

⚠️ Your session terminates IMMEDIATELY after calling CompleteWork.

29 changes: 7 additions & 22 deletions .claude/commands/process/review-task.md
@@ -155,7 +155,7 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie

**Collaborate with your team**: For complex problems or design questions, engage in conversation with engineers or other reviewers. Better solutions often emerge from team collaboration.

**STEP 3**: Run validation tools in parallel (format, test, inspect)
**STEP 3**: Run validation tools

**Zero tolerance for issues**:
- We deploy to production after review - quality is non-negotiable.
@@ -169,18 +169,9 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie

**For backend-reviewer** (validates all self-contained systems to catch cross-self-contained-system breakage):

1. Run **build** first for all self-contained systems (backend AND frontend):
- Use execute_command MCP tool: `command: "build"`.
- DO NOT run in parallel.
1. Run **build**, **format**, **test**, **inspect** following the global tool execution instructions.

2. Run **format**, **test**, **inspect** in parallel:
- Call all three MCP tools in a single message:
- `execute_command(command: "format", noBuild: true)`
- `execute_command(command: "test", noBuild: true)`
- `execute_command(command: "inspect", noBuild: true)`
- All three run simultaneously and return together.

3. Handle validation results:
2. Handle validation results:
- **If NO parallel work notification in request**: REJECT if ANY failures found (zero tolerance).
- **If parallel work notification present** (e.g., "⚠️ Parallel Work: Frontend-engineer..."):
- REJECT if backend failures found (Core/, Api/, Tests/, Database/).
@@ -189,13 +180,9 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie

**For frontend-reviewer** (validates frontend only):

1. Run **build** for frontend: `execute_command(command: "build", frontend: true)`.

2. Run **format** for all self-contained systems: `execute_command(command: "format", frontend: true)`.
1. Run **build**, **format**, **inspect** for frontend following the global tool execution instructions.

3. Run **inspect** for all self-contained systems: `execute_command(command: "inspect", frontend: true)`.

4. Handle validation results:
2. Handle validation results:
- **If NO parallel work notification in request**: REJECT if ANY failures found (zero tolerance).
- **If parallel work notification present** (e.g., "⚠️ Parallel Work: Backend-engineer..."):
- REJECT if frontend failures found (WebApp/).
@@ -204,11 +191,9 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie

**For qa-reviewer** (validates E2E tests):

1. Run **build** for frontend: `execute_command(command: "build", frontend: true)`.

2. Run **e2e** tests (run in background, monitor output).
1. Run **build** for frontend, then run **end-to-end** tests following the global tool execution instructions.

3. REJECT if ANY failures found (zero tolerance).
2. REJECT if ANY failures found (zero tolerance).

**If validation fails with errors unrelated to engineer's changes**:
- Check `git log --oneline` for recent parallel engineer commits.
2 changes: 1 addition & 1 deletion .claude/hooks/pre-tool-use-bash.sh
@@ -18,7 +18,7 @@ case "$cmd" in
*"npm run format"*) echo "❌ Use **format MCP tool** instead" >&2; exit 2 ;;
*"npm test"*) echo "❌ Use **test MCP tool** instead" >&2; exit 2 ;;
*"npm run build"*) echo "❌ Use **build MCP tool** instead" >&2; exit 2 ;;
*"npx playwright test"*) echo "❌ Use **e2e MCP tool** instead" >&2; exit 2 ;;
*"npx playwright test"*) echo "❌ Use **end-to-end MCP tool** instead" >&2; exit 2 ;;
*"docker"*) echo "❌ Docker not allowed. Use **watch MCP tool** for Aspire/migrations" >&2; exit 2 ;;
*) exit 0 ;;
esac
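The hunk above changes only the Playwright guard's message text. As a rough sketch, the hook's matching logic can be isolated into a function and exercised directly; the real hook writes to stderr and exits with status 2 (which blocks the tool call), while this sketch echoes to stdout and uses `return` so it is easy to run standalone:

```shell
#!/bin/sh
# Sketch of the pre-tool-use guard above, isolated into a function.
# Messages mirror the hunk; the surrounding hook plumbing (reading the
# proposed command from the tool-use payload, >&2, exit 2) is omitted.
check_cmd() {
  case "$1" in
    *"npm run format"*)      echo "❌ Use **format MCP tool** instead"; return 2 ;;
    *"npm test"*)            echo "❌ Use **test MCP tool** instead"; return 2 ;;
    *"npm run build"*)       echo "❌ Use **build MCP tool** instead"; return 2 ;;
    *"npx playwright test"*) echo "❌ Use **end-to-end MCP tool** instead"; return 2 ;;
    *"docker"*)              echo "❌ Docker not allowed. Use **watch MCP tool** for Aspire/migrations"; return 2 ;;
    *)                       return 0 ;;
  esac
}

check_cmd "npx playwright test --headed" || echo "blocked"  # guard message, then "blocked"
check_cmd "git status" && echo "allowed"                    # prints "allowed"
```

The substring patterns (`*"npx playwright test"*`) mean the guard also catches commands with extra flags or pipelines wrapped around the forbidden invocation.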
6 changes: 3 additions & 3 deletions .claude/rules/end-to-end-tests/end-to-end-tests.md
@@ -9,18 +9,18 @@ These rules outline the structure, patterns, and best practices for writing end-

## Implementation

1. Use the **e2e MCP tool** to run end-to-end tests with these options:
1. Use the **end-to-end MCP tool** to run end-to-end tests with these options:
- Test filtering: smoke tests only, specific browser, search terms
- Change scoping: last failed tests, only changed tests
- Flaky test detection: repeat tests, retry on failure, stop on first failure
- Performance: debug timing to see step execution times
- **Note**: The **e2e MCP tool** always runs with quiet mode automatically
- **Note**: The **end-to-end MCP tool** always runs with quiet mode automatically

2. Test Search and Filtering:
- Search by test tags: smoke, comprehensive
- Search by test content: find tests containing specific text
- Search by filename: find specific test files
- Multiple search terms: `e2e(searchTerms=["user", "management"])`
- Multiple search terms: `end-to-end(searchTerms=["user", "management"])`
- The tool automatically detects which self-contained systems contain matching tests and only runs those

3. Test-Driven Debugging Process:
6 changes: 3 additions & 3 deletions .cursor/rules/end-to-end-tests/end-to-end-tests.mdc
@@ -9,18 +9,18 @@ These rules outline the structure, patterns, and best practices for writing end-

## Implementation

1. Use the **e2e MCP tool** to run end-to-end tests with these options:
1. Use the **end-to-end MCP tool** to run end-to-end tests with these options:
- Test filtering: smoke tests only, specific browser, search terms
- Change scoping: last failed tests, only changed tests
- Flaky test detection: repeat tests, retry on failure, stop on first failure
- Performance: debug timing to see step execution times
- **Note**: The **e2e MCP tool** always runs with quiet mode automatically
- **Note**: The **end-to-end MCP tool** always runs with quiet mode automatically

2. Test Search and Filtering:
- Search by test tags: smoke, comprehensive
- Search by test content: find tests containing specific text
- Search by filename: find specific test files
- Multiple search terms: `e2e(searchTerms=["user", "management"])`
- Multiple search terms: `end-to-end(searchTerms=["user", "management"])`
- The tool automatically detects which self-contained systems contain matching tests and only runs those

3. Test-Driven Debugging Process:
@@ -129,9 +129,9 @@ Research the codebase to find similar E2E test implementations. Look for existin

**STEP 7**: Run tests and verify they pass

- Use **e2e MCP tool** to run your tests
- Start with smoke tests: `e2e(smoke=true)`
- Then run comprehensive tests with search terms: `e2e(searchTerms=["feature-name"])`
- Use **end-to-end MCP tool** to run your tests
- Start with smoke tests: `end-to-end(smoke=true)`
- Then run comprehensive tests with search terms: `end-to-end(searchTerms=["feature-name"])`
- All tests must pass before proceeding
- If tests fail: Fix them and run again (don't proceed with failing tests)

4 changes: 2 additions & 2 deletions .cursor/rules/workflows/process/review-end-to-end-tests.mdc
@@ -85,7 +85,7 @@ You are reviewing: **{{{title}}}**
- The tool starts .NET Aspire at https://localhost:9000

**Run E2E tests**:
- Use **e2e MCP tool** to run tests: `e2e(searchTerms=["feature-name"])`
- Use **end-to-end MCP tool** to run tests: `end-to-end(searchTerms=["feature-name"])`
- **ALL tests MUST pass with ZERO failures to approve**
- **Verify ZERO console errors** during test execution
- **Verify ZERO network errors** (no unexpected 4xx/5xx responses)
@@ -182,7 +182,7 @@ Don't use `git add -A` or `git add .`
- `[requirements]` — Requirements clarity, acceptance criteria, task description
- `[code]` — Code patterns, rules, architecture guidance

Examples: `[system] e2e MCP tool reported test passed but it actually failed` or `[requirements] Feature requirements didn't specify mobile viewport testing`
Examples: `[system] end-to-end MCP tool reported test passed but it actually failed` or `[requirements] Feature requirements didn't specify mobile viewport testing`

⚠️ Your session terminates IMMEDIATELY after calling CompleteWork.

29 changes: 7 additions & 22 deletions .cursor/rules/workflows/process/review-task.mdc
@@ -152,7 +152,7 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie

**Collaborate with your team**: For complex problems or design questions, engage in conversation with engineers or other reviewers. Better solutions often emerge from team collaboration.

**STEP 3**: Run validation tools in parallel (format, test, inspect)
**STEP 3**: Run validation tools

**Zero tolerance for issues**:
- We deploy to production after review - quality is non-negotiable.
@@ -166,18 +166,9 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie

**For backend-reviewer** (validates all self-contained systems to catch cross-self-contained-system breakage):

1. Run **build** first for all self-contained systems (backend AND frontend):
- Use execute_command MCP tool: `command: "build"`.
- DO NOT run in parallel.
1. Run **build**, **format**, **test**, **inspect** following the global tool execution instructions.

2. Run **format**, **test**, **inspect** in parallel:
- Call all three MCP tools in a single message:
- `execute_command(command: "format", noBuild: true)`
- `execute_command(command: "test", noBuild: true)`
- `execute_command(command: "inspect", noBuild: true)`
- All three run simultaneously and return together.

3. Handle validation results:
2. Handle validation results:
- **If NO parallel work notification in request**: REJECT if ANY failures found (zero tolerance).
- **If parallel work notification present** (e.g., "⚠️ Parallel Work: Frontend-engineer..."):
- REJECT if backend failures found (Core/, Api/, Tests/, Database/).
@@ -186,13 +177,9 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie

**For frontend-reviewer** (validates frontend only):

1. Run **build** for frontend: `execute_command(command: "build", frontend: true)`.

2. Run **format** for all self-contained systems: `execute_command(command: "format", frontend: true)`.
1. Run **build**, **format**, **inspect** for frontend following the global tool execution instructions.

3. Run **inspect** for all self-contained systems: `execute_command(command: "inspect", frontend: true)`.

4. Handle validation results:
2. Handle validation results:
- **If NO parallel work notification in request**: REJECT if ANY failures found (zero tolerance).
- **If parallel work notification present** (e.g., "⚠️ Parallel Work: Backend-engineer..."):
- REJECT if frontend failures found (WebApp/).
@@ -201,11 +188,9 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie

**For qa-reviewer** (validates E2E tests):

1. Run **build** for frontend: `execute_command(command: "build", frontend: true)`.

2. Run **e2e** tests (run in background, monitor output).
1. Run **build** for frontend, then run **end-to-end** tests following the global tool execution instructions.

3. REJECT if ANY failures found (zero tolerance).
2. REJECT if ANY failures found (zero tolerance).

**If validation fails with errors unrelated to engineer's changes**:
- Check `git log --oneline` for recent parallel engineer commits.