diff --git a/.agent/rules/end-to-end-tests/end-to-end-tests.md b/.agent/rules/end-to-end-tests/end-to-end-tests.md
index db1bd1fe6..f230c165a 100644
--- a/.agent/rules/end-to-end-tests/end-to-end-tests.md
+++ b/.agent/rules/end-to-end-tests/end-to-end-tests.md
@@ -9,18 +9,18 @@ These rules outline the structure, patterns, and best practices for writing end-
## Implementation
-1. Use the **e2e MCP tool** to run end-to-end tests with these options:
+1. Use the **end-to-end MCP tool** to run end-to-end tests with these options:
- Test filtering: smoke tests only, specific browser, search terms
- Change scoping: last failed tests, only changed tests
- Flaky test detection: repeat tests, retry on failure, stop on first failure
- Performance: debug timing to see step execution times
- - **Note**: The **e2e MCP tool** always runs with quiet mode automatically
+ - **Note**: The **end-to-end MCP tool** always runs with quiet mode automatically
2. Test Search and Filtering:
- Search by test tags: smoke, comprehensive
- Search by test content: find tests containing specific text
- Search by filename: find specific test files
- - Multiple search terms: `e2e(searchTerms=["user", "management"])`
+ - Multiple search terms: `end-to-end(searchTerms=["user", "management"])`
- The tool automatically detects which self-contained systems contain matching tests and only runs those
3. Test-Driven Debugging Process:
diff --git a/.agent/workflows/process/implement-end-to-end-tests.md b/.agent/workflows/process/implement-end-to-end-tests.md
index c37253f73..b621b50bd 100644
--- a/.agent/workflows/process/implement-end-to-end-tests.md
+++ b/.agent/workflows/process/implement-end-to-end-tests.md
@@ -127,9 +127,9 @@ Research the codebase to find similar E2E test implementations. Look for existin
**STEP 7**: Run tests and verify they pass
-- Use **e2e MCP tool** to run your tests
-- Start with smoke tests: `e2e(smoke=true)`
-- Then run comprehensive tests with search terms: `e2e(searchTerms=["feature-name"])`
+- Use **end-to-end MCP tool** to run your tests
+- Start with smoke tests: `end-to-end(smoke=true)`
+- Then run comprehensive tests with search terms: `end-to-end(searchTerms=["feature-name"])`
- All tests must pass before proceeding
- If tests fail: Fix them and run again (don't proceed with failing tests)
diff --git a/.agent/workflows/process/review-end-to-end-tests.md b/.agent/workflows/process/review-end-to-end-tests.md
index cb27ee54f..77fb68531 100644
--- a/.agent/workflows/process/review-end-to-end-tests.md
+++ b/.agent/workflows/process/review-end-to-end-tests.md
@@ -83,7 +83,7 @@ You are reviewing: **{{{title}}}**
- The tool starts .NET Aspire at https://localhost:9000
**Run E2E tests**:
-- Use **e2e MCP tool** to run tests: `e2e(searchTerms=["feature-name"])`
+- Use **end-to-end MCP tool** to run tests: `end-to-end(searchTerms=["feature-name"])`
- **ALL tests MUST pass with ZERO failures to approve**
- **Verify ZERO console errors** during test execution
- **Verify ZERO network errors** (no unexpected 4xx/5xx responses)
@@ -180,7 +180,7 @@ Don't use `git add -A` or `git add .`
- `[requirements]` — Requirements clarity, acceptance criteria, task description
- `[code]` — Code patterns, rules, architecture guidance
- Examples: `[system] e2e MCP tool reported test passed but it actually failed` or `[requirements] Feature requirements didn't specify mobile viewport testing`
+ Examples: `[system] end-to-end MCP tool reported test passed but it actually failed` or `[requirements] Feature requirements didn't specify mobile viewport testing`
⚠️ Your session terminates IMMEDIATELY after calling CompleteWork.
diff --git a/.agent/workflows/process/review-task.md b/.agent/workflows/process/review-task.md
index e804de223..2bad9d994 100644
--- a/.agent/workflows/process/review-task.md
+++ b/.agent/workflows/process/review-task.md
@@ -150,7 +150,7 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie
**Collaborate with your team**: For complex problems or design questions, engage in conversation with engineers or other reviewers. Better solutions often emerge from team collaboration.
-**STEP 3**: Run validation tools in parallel (format, test, inspect)
+**STEP 3**: Run validation tools
**Zero tolerance for issues**:
- We deploy to production after review - quality is non-negotiable.
@@ -164,18 +164,9 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie
**For backend-reviewer** (validates all self-contained systems to catch cross-self-contained-system breakage):
-1. Run **build** first for all self-contained systems (backend AND frontend):
- - Use execute_command MCP tool: `command: "build"`.
- - DO NOT run in parallel.
+1. Run **build**, **format**, **test**, **inspect** following the global tool execution instructions.
-2. Run **format**, **test**, **inspect** in parallel:
- - Call all three MCP tools in a single message:
- - `execute_command(command: "format", noBuild: true)`
- - `execute_command(command: "test", noBuild: true)`
- - `execute_command(command: "inspect", noBuild: true)`
- - All three run simultaneously and return together.
-
-3. Handle validation results:
+2. Handle validation results:
- **If NO parallel work notification in request**: REJECT if ANY failures found (zero tolerance).
- **If parallel work notification present** (e.g., "⚠️ Parallel Work: Frontend-engineer..."):
- REJECT if backend failures found (Core/, Api/, Tests/, Database/).
@@ -184,13 +175,9 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie
**For frontend-reviewer** (validates frontend only):
-1. Run **build** for frontend: `execute_command(command: "build", frontend: true)`.
-
-2. Run **format** for all self-contained systems: `execute_command(command: "format", frontend: true)`.
+1. Run **build**, **format**, **inspect** for frontend following the global tool execution instructions.
-3. Run **inspect** for all self-contained systems: `execute_command(command: "inspect", frontend: true)`.
-
-4. Handle validation results:
+2. Handle validation results:
- **If NO parallel work notification in request**: REJECT if ANY failures found (zero tolerance).
- **If parallel work notification present** (e.g., "⚠️ Parallel Work: Backend-engineer..."):
- REJECT if frontend failures found (WebApp/).
@@ -199,11 +186,9 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie
**For qa-reviewer** (validates E2E tests):
-1. Run **build** for frontend: `execute_command(command: "build", frontend: true)`.
-
-2. Run **e2e** tests (run in background, monitor output).
+1. Run **build** for frontend, then run **end-to-end** tests following the global tool execution instructions.
-3. REJECT if ANY failures found (zero tolerance).
+2. REJECT if ANY failures found (zero tolerance).
**If validation fails with errors unrelated to engineer's changes**:
- Check `git log --oneline` for recent parallel engineer commits.
diff --git a/.claude/agentic-workflow/system-prompts/qa-engineer.txt b/.claude/agentic-workflow/system-prompts/qa-engineer.txt
index 57149ae54..0022199c4 100644
--- a/.claude/agentic-workflow/system-prompts/qa-engineer.txt
+++ b/.claude/agentic-workflow/system-prompts/qa-engineer.txt
@@ -46,7 +46,7 @@ Push back on reviewer suggestions with evidence when you have more context about
You always follow your proven systematic workflow that ensures proper rule adherence and quality test implementation.
-**Important**: You must run tests and verify they pass before completing. Use watch MCP tool to apply migrations, then e2e MCP tool to run tests. Never complete without passing tests.
+**Important**: You must run tests and verify they pass before completing. Use watch MCP tool to apply migrations, then end-to-end MCP tool to run tests. Never complete without passing tests.
⚠️ Problem reporting - report aggressively:
diff --git a/.claude/agentic-workflow/system-prompts/qa-reviewer.txt b/.claude/agentic-workflow/system-prompts/qa-reviewer.txt
index ef3afd148..a2d3c2a67 100644
--- a/.claude/agentic-workflow/system-prompts/qa-reviewer.txt
+++ b/.claude/agentic-workflow/system-prompts/qa-reviewer.txt
@@ -42,7 +42,7 @@ You understand the review cycle and must complete a review every time you receiv
Never assume work is done just because you reviewed it before. Always re-evaluate the latest code and call CompleteWork.
-**Important**: Never approve tests without running them. Use watch MCP tool to apply migrations, then e2e MCP tool to run tests. All tests must pass to approve. Reject if any sleep statements found.
+**Important**: Never approve tests without running them. Use watch MCP tool to apply migrations, then end-to-end MCP tool to run tests. All tests must pass to approve. Reject if any sleep statements found.
⚠️ Problem reporting - report aggressively:
diff --git a/.claude/commands/process/implement-end-to-end-tests.md b/.claude/commands/process/implement-end-to-end-tests.md
index 491ed7ce6..59a128b83 100644
--- a/.claude/commands/process/implement-end-to-end-tests.md
+++ b/.claude/commands/process/implement-end-to-end-tests.md
@@ -132,9 +132,9 @@ Research the codebase to find similar E2E test implementations. Look for existin
**STEP 7**: Run tests and verify they pass
-- Use **e2e MCP tool** to run your tests
-- Start with smoke tests: `e2e(smoke=true)`
-- Then run comprehensive tests with search terms: `e2e(searchTerms=["feature-name"])`
+- Use **end-to-end MCP tool** to run your tests
+- Start with smoke tests: `end-to-end(smoke=true)`
+- Then run comprehensive tests with search terms: `end-to-end(searchTerms=["feature-name"])`
- All tests must pass before proceeding
- If tests fail: Fix them and run again (don't proceed with failing tests)
diff --git a/.claude/commands/process/review-end-to-end-tests.md b/.claude/commands/process/review-end-to-end-tests.md
index 5adbd60dc..4f90b3918 100644
--- a/.claude/commands/process/review-end-to-end-tests.md
+++ b/.claude/commands/process/review-end-to-end-tests.md
@@ -88,7 +88,7 @@ You are reviewing: **{{{title}}}**
- The tool starts .NET Aspire at https://localhost:9000
**Run E2E tests**:
-- Use **e2e MCP tool** to run tests: `e2e(searchTerms=["feature-name"])`
+- Use **end-to-end MCP tool** to run tests: `end-to-end(searchTerms=["feature-name"])`
- **ALL tests MUST pass with ZERO failures to approve**
- **Verify ZERO console errors** during test execution
- **Verify ZERO network errors** (no unexpected 4xx/5xx responses)
@@ -185,7 +185,7 @@ Don't use `git add -A` or `git add .`
- `[requirements]` — Requirements clarity, acceptance criteria, task description
- `[code]` — Code patterns, rules, architecture guidance
- Examples: `[system] e2e MCP tool reported test passed but it actually failed` or `[requirements] Feature requirements didn't specify mobile viewport testing`
+ Examples: `[system] end-to-end MCP tool reported test passed but it actually failed` or `[requirements] Feature requirements didn't specify mobile viewport testing`
⚠️ Your session terminates IMMEDIATELY after calling CompleteWork.
diff --git a/.claude/commands/process/review-task.md b/.claude/commands/process/review-task.md
index ea341ebad..b4447fa1e 100644
--- a/.claude/commands/process/review-task.md
+++ b/.claude/commands/process/review-task.md
@@ -155,7 +155,7 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie
**Collaborate with your team**: For complex problems or design questions, engage in conversation with engineers or other reviewers. Better solutions often emerge from team collaboration.
-**STEP 3**: Run validation tools in parallel (format, test, inspect)
+**STEP 3**: Run validation tools
**Zero tolerance for issues**:
- We deploy to production after review - quality is non-negotiable.
@@ -169,18 +169,9 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie
**For backend-reviewer** (validates all self-contained systems to catch cross-self-contained-system breakage):
-1. Run **build** first for all self-contained systems (backend AND frontend):
- - Use execute_command MCP tool: `command: "build"`.
- - DO NOT run in parallel.
+1. Run **build**, **format**, **test**, **inspect** following the global tool execution instructions.
-2. Run **format**, **test**, **inspect** in parallel:
- - Call all three MCP tools in a single message:
- - `execute_command(command: "format", noBuild: true)`
- - `execute_command(command: "test", noBuild: true)`
- - `execute_command(command: "inspect", noBuild: true)`
- - All three run simultaneously and return together.
-
-3. Handle validation results:
+2. Handle validation results:
- **If NO parallel work notification in request**: REJECT if ANY failures found (zero tolerance).
- **If parallel work notification present** (e.g., "⚠️ Parallel Work: Frontend-engineer..."):
- REJECT if backend failures found (Core/, Api/, Tests/, Database/).
@@ -189,13 +180,9 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie
**For frontend-reviewer** (validates frontend only):
-1. Run **build** for frontend: `execute_command(command: "build", frontend: true)`.
-
-2. Run **format** for all self-contained systems: `execute_command(command: "format", frontend: true)`.
+1. Run **build**, **format**, **inspect** for frontend following the global tool execution instructions.
-3. Run **inspect** for all self-contained systems: `execute_command(command: "inspect", frontend: true)`.
-
-4. Handle validation results:
+2. Handle validation results:
- **If NO parallel work notification in request**: REJECT if ANY failures found (zero tolerance).
- **If parallel work notification present** (e.g., "⚠️ Parallel Work: Backend-engineer..."):
- REJECT if frontend failures found (WebApp/).
@@ -204,11 +191,9 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie
**For qa-reviewer** (validates E2E tests):
-1. Run **build** for frontend: `execute_command(command: "build", frontend: true)`.
-
-2. Run **e2e** tests (run in background, monitor output).
+1. Run **build** for frontend, then run **end-to-end** tests following the global tool execution instructions.
-3. REJECT if ANY failures found (zero tolerance).
+2. REJECT if ANY failures found (zero tolerance).
**If validation fails with errors unrelated to engineer's changes**:
- Check `git log --oneline` for recent parallel engineer commits.
diff --git a/.claude/hooks/pre-tool-use-bash.sh b/.claude/hooks/pre-tool-use-bash.sh
index 1128d2e07..e74f43906 100755
--- a/.claude/hooks/pre-tool-use-bash.sh
+++ b/.claude/hooks/pre-tool-use-bash.sh
@@ -18,7 +18,7 @@ case "$cmd" in
*"npm run format"*) echo "❌ Use **format MCP tool** instead" >&2; exit 2 ;;
*"npm test"*) echo "❌ Use **test MCP tool** instead" >&2; exit 2 ;;
*"npm run build"*) echo "❌ Use **build MCP tool** instead" >&2; exit 2 ;;
- *"npx playwright test"*) echo "❌ Use **e2e MCP tool** instead" >&2; exit 2 ;;
+ *"npx playwright test"*) echo "❌ Use **end-to-end MCP tool** instead" >&2; exit 2 ;;
*"docker"*) echo "❌ Docker not allowed. Use **watch MCP tool** for Aspire/migrations" >&2; exit 2 ;;
*) exit 0 ;;
esac
diff --git a/.claude/rules/end-to-end-tests/end-to-end-tests.md b/.claude/rules/end-to-end-tests/end-to-end-tests.md
index 5ccd0b38d..515fdb5b0 100644
--- a/.claude/rules/end-to-end-tests/end-to-end-tests.md
+++ b/.claude/rules/end-to-end-tests/end-to-end-tests.md
@@ -9,18 +9,18 @@ These rules outline the structure, patterns, and best practices for writing end-
## Implementation
-1. Use the **e2e MCP tool** to run end-to-end tests with these options:
+1. Use the **end-to-end MCP tool** to run end-to-end tests with these options:
- Test filtering: smoke tests only, specific browser, search terms
- Change scoping: last failed tests, only changed tests
- Flaky test detection: repeat tests, retry on failure, stop on first failure
- Performance: debug timing to see step execution times
- - **Note**: The **e2e MCP tool** always runs with quiet mode automatically
+ - **Note**: The **end-to-end MCP tool** always runs with quiet mode automatically
2. Test Search and Filtering:
- Search by test tags: smoke, comprehensive
- Search by test content: find tests containing specific text
- Search by filename: find specific test files
- - Multiple search terms: `e2e(searchTerms=["user", "management"])`
+ - Multiple search terms: `end-to-end(searchTerms=["user", "management"])`
- The tool automatically detects which self-contained systems contain matching tests and only runs those
3. Test-Driven Debugging Process:
diff --git a/.cursor/rules/end-to-end-tests/end-to-end-tests.mdc b/.cursor/rules/end-to-end-tests/end-to-end-tests.mdc
index dae9d476e..2fa22a4c8 100644
--- a/.cursor/rules/end-to-end-tests/end-to-end-tests.mdc
+++ b/.cursor/rules/end-to-end-tests/end-to-end-tests.mdc
@@ -9,18 +9,18 @@ These rules outline the structure, patterns, and best practices for writing end-
## Implementation
-1. Use the **e2e MCP tool** to run end-to-end tests with these options:
+1. Use the **end-to-end MCP tool** to run end-to-end tests with these options:
- Test filtering: smoke tests only, specific browser, search terms
- Change scoping: last failed tests, only changed tests
- Flaky test detection: repeat tests, retry on failure, stop on first failure
- Performance: debug timing to see step execution times
- - **Note**: The **e2e MCP tool** always runs with quiet mode automatically
+ - **Note**: The **end-to-end MCP tool** always runs with quiet mode automatically
2. Test Search and Filtering:
- Search by test tags: smoke, comprehensive
- Search by test content: find tests containing specific text
- Search by filename: find specific test files
- - Multiple search terms: `e2e(searchTerms=["user", "management"])`
+ - Multiple search terms: `end-to-end(searchTerms=["user", "management"])`
- The tool automatically detects which self-contained systems contain matching tests and only runs those
3. Test-Driven Debugging Process:
diff --git a/.cursor/rules/workflows/process/implement-end-to-end-tests.mdc b/.cursor/rules/workflows/process/implement-end-to-end-tests.mdc
index 9648b6664..8e0fa5b29 100644
--- a/.cursor/rules/workflows/process/implement-end-to-end-tests.mdc
+++ b/.cursor/rules/workflows/process/implement-end-to-end-tests.mdc
@@ -129,9 +129,9 @@ Research the codebase to find similar E2E test implementations. Look for existin
**STEP 7**: Run tests and verify they pass
-- Use **e2e MCP tool** to run your tests
-- Start with smoke tests: `e2e(smoke=true)`
-- Then run comprehensive tests with search terms: `e2e(searchTerms=["feature-name"])`
+- Use **end-to-end MCP tool** to run your tests
+- Start with smoke tests: `end-to-end(smoke=true)`
+- Then run comprehensive tests with search terms: `end-to-end(searchTerms=["feature-name"])`
- All tests must pass before proceeding
- If tests fail: Fix them and run again (don't proceed with failing tests)
diff --git a/.cursor/rules/workflows/process/review-end-to-end-tests.mdc b/.cursor/rules/workflows/process/review-end-to-end-tests.mdc
index 7bb9b8add..39488c167 100644
--- a/.cursor/rules/workflows/process/review-end-to-end-tests.mdc
+++ b/.cursor/rules/workflows/process/review-end-to-end-tests.mdc
@@ -85,7 +85,7 @@ You are reviewing: **{{{title}}}**
- The tool starts .NET Aspire at https://localhost:9000
**Run E2E tests**:
-- Use **e2e MCP tool** to run tests: `e2e(searchTerms=["feature-name"])`
+- Use **end-to-end MCP tool** to run tests: `end-to-end(searchTerms=["feature-name"])`
- **ALL tests MUST pass with ZERO failures to approve**
- **Verify ZERO console errors** during test execution
- **Verify ZERO network errors** (no unexpected 4xx/5xx responses)
@@ -182,7 +182,7 @@ Don't use `git add -A` or `git add .`
- `[requirements]` — Requirements clarity, acceptance criteria, task description
- `[code]` — Code patterns, rules, architecture guidance
- Examples: `[system] e2e MCP tool reported test passed but it actually failed` or `[requirements] Feature requirements didn't specify mobile viewport testing`
+ Examples: `[system] end-to-end MCP tool reported test passed but it actually failed` or `[requirements] Feature requirements didn't specify mobile viewport testing`
⚠️ Your session terminates IMMEDIATELY after calling CompleteWork.
diff --git a/.cursor/rules/workflows/process/review-task.mdc b/.cursor/rules/workflows/process/review-task.mdc
index 30bf3c6bd..173bf2d40 100644
--- a/.cursor/rules/workflows/process/review-task.mdc
+++ b/.cursor/rules/workflows/process/review-task.mdc
@@ -152,7 +152,7 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie
**Collaborate with your team**: For complex problems or design questions, engage in conversation with engineers or other reviewers. Better solutions often emerge from team collaboration.
-**STEP 3**: Run validation tools in parallel (format, test, inspect)
+**STEP 3**: Run validation tools
**Zero tolerance for issues**:
- We deploy to production after review - quality is non-negotiable.
@@ -166,18 +166,9 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie
**For backend-reviewer** (validates all self-contained systems to catch cross-self-contained-system breakage):
-1. Run **build** first for all self-contained systems (backend AND frontend):
- - Use execute_command MCP tool: `command: "build"`.
- - DO NOT run in parallel.
+1. Run **build**, **format**, **test**, **inspect** following the global tool execution instructions.
-2. Run **format**, **test**, **inspect** in parallel:
- - Call all three MCP tools in a single message:
- - `execute_command(command: "format", noBuild: true)`
- - `execute_command(command: "test", noBuild: true)`
- - `execute_command(command: "inspect", noBuild: true)`
- - All three run simultaneously and return together.
-
-3. Handle validation results:
+2. Handle validation results:
- **If NO parallel work notification in request**: REJECT if ANY failures found (zero tolerance).
- **If parallel work notification present** (e.g., "⚠️ Parallel Work: Frontend-engineer..."):
- REJECT if backend failures found (Core/, Api/, Tests/, Database/).
@@ -186,13 +177,9 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie
**For frontend-reviewer** (validates frontend only):
-1. Run **build** for frontend: `execute_command(command: "build", frontend: true)`.
-
-2. Run **format** for all self-contained systems: `execute_command(command: "format", frontend: true)`.
+1. Run **build**, **format**, **inspect** for frontend following the global tool execution instructions.
-3. Run **inspect** for all self-contained systems: `execute_command(command: "inspect", frontend: true)`.
-
-4. Handle validation results:
+2. Handle validation results:
- **If NO parallel work notification in request**: REJECT if ANY failures found (zero tolerance).
- **If parallel work notification present** (e.g., "⚠️ Parallel Work: Backend-engineer..."):
- REJECT if frontend failures found (WebApp/).
@@ -201,11 +188,9 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie
**For qa-reviewer** (validates E2E tests):
-1. Run **build** for frontend: `execute_command(command: "build", frontend: true)`.
-
-2. Run **e2e** tests (run in background, monitor output).
+1. Run **build** for frontend, then run **end-to-end** tests following the global tool execution instructions.
-3. REJECT if ANY failures found (zero tolerance).
+2. REJECT if ANY failures found (zero tolerance).
**If validation fails with errors unrelated to engineer's changes**:
- Check `git log --oneline` for recent parallel engineer commits.
diff --git a/.github/copilot/rules/end-to-end-tests/end-to-end-tests.md b/.github/copilot/rules/end-to-end-tests/end-to-end-tests.md
index b4ecf0cf9..a79b1d924 100644
--- a/.github/copilot/rules/end-to-end-tests/end-to-end-tests.md
+++ b/.github/copilot/rules/end-to-end-tests/end-to-end-tests.md
@@ -4,18 +4,18 @@ These rules outline the structure, patterns, and best practices for writing end-
## Implementation
-1. Use the **e2e MCP tool** to run end-to-end tests with these options:
+1. Use the **end-to-end MCP tool** to run end-to-end tests with these options:
- Test filtering: smoke tests only, specific browser, search terms
- Change scoping: last failed tests, only changed tests
- Flaky test detection: repeat tests, retry on failure, stop on first failure
- Performance: debug timing to see step execution times
- - **Note**: The **e2e MCP tool** always runs with quiet mode automatically
+ - **Note**: The **end-to-end MCP tool** always runs with quiet mode automatically
2. Test Search and Filtering:
- Search by test tags: smoke, comprehensive
- Search by test content: find tests containing specific text
- Search by filename: find specific test files
- - Multiple search terms: `e2e(searchTerms=["user", "management"])`
+ - Multiple search terms: `end-to-end(searchTerms=["user", "management"])`
- The tool automatically detects which self-contained systems contain matching tests and only runs those
3. Test-Driven Debugging Process:
diff --git a/.github/copilot/workflows/process/implement-end-to-end-tests.md b/.github/copilot/workflows/process/implement-end-to-end-tests.md
index a02ba703c..78a7cb758 100644
--- a/.github/copilot/workflows/process/implement-end-to-end-tests.md
+++ b/.github/copilot/workflows/process/implement-end-to-end-tests.md
@@ -124,9 +124,9 @@ Research the codebase to find similar E2E test implementations. Look for existin
**STEP 7**: Run tests and verify they pass
-- Use **e2e MCP tool** to run your tests
-- Start with smoke tests: `e2e(smoke=true)`
-- Then run comprehensive tests with search terms: `e2e(searchTerms=["feature-name"])`
+- Use **end-to-end MCP tool** to run your tests
+- Start with smoke tests: `end-to-end(smoke=true)`
+- Then run comprehensive tests with search terms: `end-to-end(searchTerms=["feature-name"])`
- All tests must pass before proceeding
- If tests fail: Fix them and run again (don't proceed with failing tests)
diff --git a/.github/copilot/workflows/process/review-end-to-end-tests.md b/.github/copilot/workflows/process/review-end-to-end-tests.md
index e9f16a39e..6dd992e7a 100644
--- a/.github/copilot/workflows/process/review-end-to-end-tests.md
+++ b/.github/copilot/workflows/process/review-end-to-end-tests.md
@@ -80,7 +80,7 @@ You are reviewing: **{{{title}}}**
- The tool starts .NET Aspire at https://localhost:9000
**Run E2E tests**:
-- Use **e2e MCP tool** to run tests: `e2e(searchTerms=["feature-name"])`
+- Use **end-to-end MCP tool** to run tests: `end-to-end(searchTerms=["feature-name"])`
- **ALL tests MUST pass with ZERO failures to approve**
- **Verify ZERO console errors** during test execution
- **Verify ZERO network errors** (no unexpected 4xx/5xx responses)
@@ -177,7 +177,7 @@ Don't use `git add -A` or `git add .`
- `[requirements]` — Requirements clarity, acceptance criteria, task description
- `[code]` — Code patterns, rules, architecture guidance
- Examples: `[system] e2e MCP tool reported test passed but it actually failed` or `[requirements] Feature requirements didn't specify mobile viewport testing`
+ Examples: `[system] end-to-end MCP tool reported test passed but it actually failed` or `[requirements] Feature requirements didn't specify mobile viewport testing`
⚠️ Your session terminates IMMEDIATELY after calling CompleteWork.
diff --git a/.github/copilot/workflows/process/review-task.md b/.github/copilot/workflows/process/review-task.md
index 7d55d433a..92f5bf94a 100644
--- a/.github/copilot/workflows/process/review-task.md
+++ b/.github/copilot/workflows/process/review-task.md
@@ -147,7 +147,7 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie
**Collaborate with your team**: For complex problems or design questions, engage in conversation with engineers or other reviewers. Better solutions often emerge from team collaboration.
-**STEP 3**: Run validation tools in parallel (format, test, inspect)
+**STEP 3**: Run validation tools
**Zero tolerance for issues**:
- We deploy to production after review - quality is non-negotiable.
@@ -161,18 +161,9 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie
**For backend-reviewer** (validates all self-contained systems to catch cross-self-contained-system breakage):
-1. Run **build** first for all self-contained systems (backend AND frontend):
- - Use execute_command MCP tool: `command: "build"`.
- - DO NOT run in parallel.
+1. Run **build**, **format**, **test**, **inspect** following the global tool execution instructions.
-2. Run **format**, **test**, **inspect** in parallel:
- - Call all three MCP tools in a single message:
- - `execute_command(command: "format", noBuild: true)`
- - `execute_command(command: "test", noBuild: true)`
- - `execute_command(command: "inspect", noBuild: true)`
- - All three run simultaneously and return together.
-
-3. Handle validation results:
+2. Handle validation results:
- **If NO parallel work notification in request**: REJECT if ANY failures found (zero tolerance).
- **If parallel work notification present** (e.g., "⚠️ Parallel Work: Frontend-engineer..."):
- REJECT if backend failures found (Core/, Api/, Tests/, Database/).
@@ -181,13 +172,9 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie
**For frontend-reviewer** (validates frontend only):
-1. Run **build** for frontend: `execute_command(command: "build", frontend: true)`.
-
-2. Run **format** for all self-contained systems: `execute_command(command: "format", frontend: true)`.
+1. Run **build**, **format**, **inspect** for frontend following the global tool execution instructions.
-3. Run **inspect** for all self-contained systems: `execute_command(command: "inspect", frontend: true)`.
-
-4. Handle validation results:
+2. Handle validation results:
- **If NO parallel work notification in request**: REJECT if ANY failures found (zero tolerance).
- **If parallel work notification present** (e.g., "⚠️ Parallel Work: Backend-engineer..."):
- REJECT if frontend failures found (WebApp/).
@@ -196,11 +183,9 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie
**For qa-reviewer** (validates E2E tests):
-1. Run **build** for frontend: `execute_command(command: "build", frontend: true)`.
-
-2. Run **e2e** tests (run in background, monitor output).
+1. Run **build** for frontend, then run **end-to-end** tests following the global tool execution instructions.
-3. REJECT if ANY failures found (zero tolerance).
+2. REJECT if ANY failures found (zero tolerance).
**If validation fails with errors unrelated to engineer's changes**:
- Check `git log --oneline` for recent parallel engineer commits.
diff --git a/.mcp.json b/.mcp.json
index 5ccf692f2..399d235a0 100644
--- a/.mcp.json
+++ b/.mcp.json
@@ -3,6 +3,10 @@
"developer-cli": {
"command": "dotnet",
"args": ["run", "--project", "developer-cli", "mcp"]
+ },
+ "aspire": {
+ "url": "http://localhost:9096/mcp",
+ "type": "http"
}
}
}
\ No newline at end of file
diff --git a/.windsurf/rules/end-to-end-tests/end-to-end-tests.md b/.windsurf/rules/end-to-end-tests/end-to-end-tests.md
index 5a4eef339..f27b74dd6 100644
--- a/.windsurf/rules/end-to-end-tests/end-to-end-tests.md
+++ b/.windsurf/rules/end-to-end-tests/end-to-end-tests.md
@@ -10,18 +10,18 @@ These rules outline the structure, patterns, and best practices for writing end-
## Implementation
-1. Use the **e2e MCP tool** to run end-to-end tests with these options:
+1. Use the **end-to-end MCP tool** to run end-to-end tests with these options:
- Test filtering: smoke tests only, specific browser, search terms
- Change scoping: last failed tests, only changed tests
- Flaky test detection: repeat tests, retry on failure, stop on first failure
- Performance: debug timing to see step execution times
- - **Note**: The **e2e MCP tool** always runs with quiet mode automatically
+ - **Note**: The **end-to-end MCP tool** always runs with quiet mode automatically
2. Test Search and Filtering:
- Search by test tags: smoke, comprehensive
- Search by test content: find tests containing specific text
- Search by filename: find specific test files
- - Multiple search terms: `e2e(searchTerms=["user", "management"])`
+ - Multiple search terms: `end-to-end(searchTerms=["user", "management"])`
- The tool automatically detects which self-contained systems contain matching tests and only runs those
3. Test-Driven Debugging Process:
diff --git a/.windsurf/workflows/process/implement-end-to-end-tests.md b/.windsurf/workflows/process/implement-end-to-end-tests.md
index 9fb327ec8..53960c1f7 100644
--- a/.windsurf/workflows/process/implement-end-to-end-tests.md
+++ b/.windsurf/workflows/process/implement-end-to-end-tests.md
@@ -129,9 +129,9 @@ Research the codebase to find similar E2E test implementations. Look for existin
**STEP 7**: Run tests and verify they pass
-- Use **e2e MCP tool** to run your tests
-- Start with smoke tests: `e2e(smoke=true)`
-- Then run comprehensive tests with search terms: `e2e(searchTerms=["feature-name"])`
+- Use **end-to-end MCP tool** to run your tests
+- Start with smoke tests: `end-to-end(smoke=true)`
+- Then run comprehensive tests with search terms: `end-to-end(searchTerms=["feature-name"])`
- All tests must pass before proceeding
- If tests fail: Fix them and run again (don't proceed with failing tests)
diff --git a/.windsurf/workflows/process/review-end-to-end-tests.md b/.windsurf/workflows/process/review-end-to-end-tests.md
index a3d88d42c..49954bfb5 100644
--- a/.windsurf/workflows/process/review-end-to-end-tests.md
+++ b/.windsurf/workflows/process/review-end-to-end-tests.md
@@ -85,7 +85,7 @@ You are reviewing: **{{{title}}}**
- The tool starts .NET Aspire at https://localhost:9000
**Run E2E tests**:
-- Use **e2e MCP tool** to run tests: `e2e(searchTerms=["feature-name"])`
+- Use **end-to-end MCP tool** to run tests: `end-to-end(searchTerms=["feature-name"])`
- **ALL tests MUST pass with ZERO failures to approve**
- **Verify ZERO console errors** during test execution
- **Verify ZERO network errors** (no unexpected 4xx/5xx responses)
@@ -182,7 +182,7 @@ Don't use `git add -A` or `git add .`
- `[requirements]` — Requirements clarity, acceptance criteria, task description
- `[code]` — Code patterns, rules, architecture guidance
- Examples: `[system] e2e MCP tool reported test passed but it actually failed` or `[requirements] Feature requirements didn't specify mobile viewport testing`
+ Examples: `[system] end-to-end MCP tool reported test passed but it actually failed` or `[requirements] Feature requirements didn't specify mobile viewport testing`
⚠️ Your session terminates IMMEDIATELY after calling CompleteWork.
diff --git a/.windsurf/workflows/process/review-task.md b/.windsurf/workflows/process/review-task.md
index 1c73b7ae2..dfc9af4e0 100644
--- a/.windsurf/workflows/process/review-task.md
+++ b/.windsurf/workflows/process/review-task.md
@@ -152,7 +152,7 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie
**Collaborate with your team**: For complex problems or design questions, engage in conversation with engineers or other reviewers. Better solutions often emerge from team collaboration.
-**STEP 3**: Run validation tools in parallel (format, test, inspect)
+**STEP 3**: Run validation tools
**Zero tolerance for issues**:
- We deploy to production after review - quality is non-negotiable.
@@ -166,18 +166,9 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie
**For backend-reviewer** (validates all self-contained systems to catch cross-self-contained-system breakage):
-1. Run **build** first for all self-contained systems (backend AND frontend):
- - Use execute_command MCP tool: `command: "build"`.
- - DO NOT run in parallel.
+1. Run **build**, **format**, **test**, **inspect** following the global tool execution instructions.
-2. Run **format**, **test**, **inspect** in parallel:
- - Call all three MCP tools in a single message:
- - `execute_command(command: "format", noBuild: true)`
- - `execute_command(command: "test", noBuild: true)`
- - `execute_command(command: "inspect", noBuild: true)`
- - All three run simultaneously and return together.
-
-3. Handle validation results:
+2. Handle validation results:
- **If NO parallel work notification in request**: REJECT if ANY failures found (zero tolerance).
- **If parallel work notification present** (e.g., "⚠️ Parallel Work: Frontend-engineer..."):
- REJECT if backend failures found (Core/, Api/, Tests/, Database/).
@@ -186,13 +177,9 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie
**For frontend-reviewer** (validates frontend only):
-1. Run **build** for frontend: `execute_command(command: "build", frontend: true)`.
-
-2. Run **format** for all self-contained systems: `execute_command(command: "format", frontend: true)`.
+1. Run **build**, **format**, **inspect** for frontend following the global tool execution instructions.
-3. Run **inspect** for all self-contained systems: `execute_command(command: "inspect", frontend: true)`.
-
-4. Handle validation results:
+2. Handle validation results:
- **If NO parallel work notification in request**: REJECT if ANY failures found (zero tolerance).
- **If parallel work notification present** (e.g., "⚠️ Parallel Work: Backend-engineer..."):
- REJECT if frontend failures found (WebApp/).
@@ -201,11 +188,9 @@ The [feature] plan was AI-generated by tech-lead in a few minutes after intervie
**For qa-reviewer** (validates E2E tests):
-1. Run **build** for frontend: `execute_command(command: "build", frontend: true)`.
-
-2. Run **e2e** tests (run in background, monitor output).
+1. Run **build** for frontend, then run **end-to-end** tests following the global tool execution instructions.
-3. REJECT if ANY failures found (zero tolerance).
+2. REJECT if ANY failures found (zero tolerance).
**If validation fails with errors unrelated to engineer's changes**:
- Check `git log --oneline` for recent parallel engineer commits.
diff --git a/AGENTS.md b/AGENTS.md
index 7b76bae08..08b2dd3bf 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -22,10 +22,40 @@ When working on tasks, follow any specific workflow instructions provided for yo
3. Develop a clear implementation plan.
4. Follow established patterns and conventions.
5. Use MCP tools for building, testing, and formatting.
- - Use the **build**, **test**, **format**, **inspect**, **run**, and **e2e** MCP tools
- - **Important**: Always use the MCP **execute** tool instead of running `dotnet build`, `dotnet test`, `dotnet format`, or equivalent `npm` commands directly
- - **Important**: The **run** MCP tool starts the Aspire AppHost and runs database migrations at https://localhost:9000. The tool runs in the background, so you can continue working while it starts. Use run if you suspect the database needs to be migrated, if you need to restart the server for any reason, or if it's not running.
- - **MCP Server Setup**: See [.mcp.json](/.mcp.json) for MCP server configuration. For Claude Code, run `claude config set enableAllProjectMcpServers true` once to enable project-scoped MCP servers.
+
+ Always use MCP tools instead of running `dotnet build`, `dotnet test`, `dotnet format`, or equivalent `npm` and `npx playwright` commands directly.
+
+ **Execution Order** (mandatory):
+ 1. Run **build** first: `execute_command(command='build', backend=true, frontend=true)`
+ 2. Run remaining tools with `noBuild=true`
+
+ **Slow Operations:**
+ - Aspire restart
+ - Backend format
+ - Backend inspect
+ - End-to-end tests
+
+ **Fast Operations:**
+ - Frontend format, inspect
+ - Backend test
+
+ **Parallelization rule:**
+ - If running only fast operations → run sequentially
+ - If running any slow operation → run EVERYTHING in parallel Task agents
+
+ **Example: running all tools after build (do NOT use run_in_background):**
+ ```
+ Task(subagent_type: "general-purpose", prompt: "Restart Aspire: mcp__developer-cli__run()", run_in_background: false)
+ Task(subagent_type: "general-purpose", prompt: "Test backend: mcp__developer-cli__execute_command(command='test', backend=true, noBuild=true)", run_in_background: false)
+ Task(subagent_type: "general-purpose", prompt: "Format backend: mcp__developer-cli__execute_command(command='format', backend=true, noBuild=true)", run_in_background: false)
+ Task(subagent_type: "general-purpose", prompt: "Inspect backend: mcp__developer-cli__execute_command(command='inspect', backend=true, noBuild=true)", run_in_background: false)
+ Task(subagent_type: "general-purpose", prompt: "Run e2e: mcp__developer-cli__end_to_end(browser='chromium', waitForAspire=true)", run_in_background: false)
+ ```
+ Format, inspect, and test run while Aspire starts. End-to-end tests use `waitForAspire=true` to wait until Aspire is ready.
+
+ **About Aspire**: The **run** MCP tool starts the Aspire AppHost at https://localhost:9000. Restart it when the backend has changed, when frontend hot reload stops working, or when it is not running.
+
+ **MCP Server Setup**: See [.mcp.json](/.mcp.json) for MCP server configuration. For Claude Code, run `claude config set enableAllProjectMcpServers true` once to enable project-scoped MCP servers.
**Critical**: If you do NOT see the mentioned developer-cli MCP tool, tell the user. Do NOT just ignore that you cannot find them, and fall back to other tools.
diff --git a/application/AppHost/Properties/launchSettings.json b/application/AppHost/Properties/launchSettings.json
index 5d4ac7e40..77e1a50d8 100644
--- a/application/AppHost/Properties/launchSettings.json
+++ b/application/AppHost/Properties/launchSettings.json
@@ -9,7 +9,10 @@
"ASPNETCORE_ENVIRONMENT": "Development",
"DOTNET_ENVIRONMENT": "Development",
"DOTNET_DASHBOARD_OTLP_ENDPOINT_URL": "https://localhost:9097",
- "DOTNET_RESOURCE_SERVICE_ENDPOINT_URL": "https://localhost:9098"
+ "DOTNET_RESOURCE_SERVICE_ENDPOINT_URL": "https://localhost:9098",
+ "ASPIRE_DASHBOARD_MCP_ENDPOINT_URL": "http://localhost:9096/mcp",
+ "ASPIRE_ALLOW_UNSECURED_TRANSPORT": "true",
+ "Dashboard__Mcp__AuthMode": "Unsecured"
}
}
}
diff --git a/application/PlatformPlatform.slnx.DotSettings b/application/PlatformPlatform.slnx.DotSettings
index ae7384c48..4f38c1f5d 100644
--- a/application/PlatformPlatform.slnx.DotSettings
+++ b/application/PlatformPlatform.slnx.DotSettings
@@ -106,7 +106,6 @@
0
CHOP_IF_LONG
False
- E2E
True
True
True
diff --git a/developer-cli/Commands/End2EndCommand.cs b/developer-cli/Commands/End2EndCommand.cs
index 616eed80f..709af13d8 100644
--- a/developer-cli/Commands/End2EndCommand.cs
+++ b/developer-cli/Commands/End2EndCommand.cs
@@ -34,6 +34,7 @@ public class End2EndCommand : Command
var stopOnFirstFailureOption = new Option<bool>("--stop-on-first-failure", "-x") { Description = "Stop after the first failure" };
var uiOption = new Option<bool>("--ui") { Description = "Run tests in interactive UI mode with time-travel debugging" };
var workersOption = new Option<int?>("--workers", "-w") { Description = "Number of worker processes to use for running tests" };
+ var waitForAspireOption = new Option<bool>("--wait-for-aspire") { Description = "Wait for Aspire to start (retries server check up to 50 seconds)" };
Arguments.Add(searchTermsArgument);
Options.Add(browserOption);
@@ -54,6 +55,7 @@ public class End2EndCommand : Command
Options.Add(stopOnFirstFailureOption);
Options.Add(uiOption);
Options.Add(workersOption);
+ Options.Add(waitForAspireOption);
// SetHandler only supports up to 8 parameters, so we use SetAction for this complex command
SetAction(parseResult => Execute(
@@ -75,7 +77,8 @@ public class End2EndCommand : Command
parseResult.GetValue(smokeOption),
parseResult.GetValue(stopOnFirstFailureOption),
parseResult.GetValue(uiOption),
- parseResult.GetValue(workersOption)
+ parseResult.GetValue(workersOption),
+ parseResult.GetValue(waitForAspireOption)
)
);
}
@@ -101,7 +104,8 @@ private static void Execute(
bool smoke,
bool stopOnFirstFailure,
bool ui,
- int? workers)
+ int? workers,
+ bool waitForAspire)
{
Prerequisite.Ensure(Prerequisite.Node);
@@ -113,7 +117,7 @@ private static void Execute(
}
AnsiConsole.MarkupLine("[blue]Checking server availability...[/]");
- CheckWebsiteAccessibility();
+ CheckWebsiteAccessibility(waitForAspire);
PlaywrightInstaller.EnsurePlaywrightBrowsers();
@@ -341,25 +345,36 @@ private static bool RunTestsForSystem(
return !testsFailed;
}
- private static void CheckWebsiteAccessibility()
+ private static void CheckWebsiteAccessibility(bool waitForAspire)
{
- try
+ var maxRetries = waitForAspire ? 10 : 1;
+ var retryDelaySeconds = 5;
+
+ for (var attempt = 1; attempt <= maxRetries; attempt++)
{
- using var httpClient = new HttpClient();
- httpClient.Timeout = TimeSpan.FromSeconds(5);
+ try
+ {
+ using var httpClient = new HttpClient();
+ httpClient.Timeout = TimeSpan.FromSeconds(5);
- var response = httpClient.Send(new HttpRequestMessage(HttpMethod.Head, BaseUrl));
+ var response = httpClient.Send(new HttpRequestMessage(HttpMethod.Head, BaseUrl));
- if (response.IsSuccessStatusCode)
+ if (response.IsSuccessStatusCode)
+ {
+ AnsiConsole.MarkupLine($"[green]Server is accessible at {BaseUrl}[/]");
+ return;
+ }
+ }
+ catch
{
- AnsiConsole.MarkupLine($"[green]Server is accessible at {BaseUrl}[/]");
- return;
+ // Retry if waiting for Aspire and not the last attempt
+ if (waitForAspire && attempt < maxRetries)
+ {
+ AnsiConsole.MarkupLine($"[yellow]Server not ready yet, retrying in {retryDelaySeconds} seconds... (attempt {attempt}/{maxRetries})[/]");
+ Thread.Sleep(TimeSpan.FromSeconds(retryDelaySeconds));
+ }
}
}
- catch
- {
- // Fall through to error handling
- }
AnsiConsole.MarkupLine($"[red]Server is not accessible at {BaseUrl}[/]");
AnsiConsole.MarkupLine($"[yellow]Please start AppHost in your IDE before running '{Configuration.AliasName} e2e'[/]");
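The retry behaviour added in `CheckWebsiteAccessibility` can be sketched outside the diff. This is a minimal Python mirror of the same logic, not part of the CLI itself; the injectable `probe` parameter and function name are illustrative. With `--wait-for-aspire` the loop makes up to 10 attempts 5 seconds apart, which is roughly the 50 seconds the option description promises; without it, a single attempt.

```python
import time
import urllib.request

def wait_for_server(url, wait_for_aspire=False, retry_delay_seconds=5, probe=None):
    """Return True once the server answers, False if all attempts fail.

    Mirrors the C# loop: 10 attempts when waiting for Aspire, otherwise 1.
    `probe` is an injectable check used here for illustration and testing;
    the real code sends an HTTP HEAD request with a 5-second timeout.
    """
    check = probe or (lambda u: urllib.request.urlopen(u, timeout=5))
    max_retries = 10 if wait_for_aspire else 1
    for attempt in range(1, max_retries + 1):
        try:
            check(url)  # any response without an exception counts as success
            return True
        except Exception:
            # Only sleep between attempts, never after the last one
            if wait_for_aspire and attempt < max_retries:
                time.sleep(retry_delay_seconds)
    return False
```

Passing `retry_delay_seconds=0` lets the loop be exercised quickly in tests; the CLI keeps the fixed 5-second delay.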
diff --git a/developer-cli/Commands/McpCommand.cs b/developer-cli/Commands/McpCommand.cs
index e5c7a92f0..284dfae92 100644
--- a/developer-cli/Commands/McpCommand.cs
+++ b/developer-cli/Commands/McpCommand.cs
@@ -137,10 +137,12 @@ public static string Run()
[McpServerTool]
[Description("Run end-to-end tests")]
- public static async Task<string> E2E(
+ public static async Task<string> EndToEnd(
[Description("Search terms")] string[]? searchTerms = null,
[Description("Browser")] string browser = "all",
- [Description("Smoke only")] bool smoke = false)
+ [Description("Smoke only")] bool smoke = false,
+ [Description("Wait for Aspire to start (retries server check up to 50 seconds)")]
+ bool waitForAspire = false)
{
var args = new List<string> { "e2e", "--quiet" };
if (searchTerms is { Length: > 0 }) args.AddRange(searchTerms);
@@ -151,6 +153,7 @@ public static async Task E2E(
}
if (smoke) args.Add("--smoke");
+ if (waitForAspire) args.Add("--wait-for-aspire");
return await ExecuteCliCommandAsync(args.ToArray());
}
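The renamed `EndToEnd` MCP tool is a thin wrapper that translates its parameters into CLI arguments. A minimal Python sketch of that translation follows; all names are illustrative, and the `--browser` flag for the elided branch is an assumption, not confirmed by the hunk above.

```python
def build_e2e_args(search_terms=None, browser="all", smoke=False, wait_for_aspire=False):
    """Assemble the argument list an MCP wrapper like EndToEnd could pass to the CLI."""
    args = ["e2e", "--quiet"]  # quiet mode is always on for the MCP tool
    if search_terms:
        args.extend(search_terms)
    if browser != "all":
        args.extend(["--browser", browser])  # assumed flag name for the elided branch
    if smoke:
        args.append("--smoke")
    if wait_for_aspire:
        args.append("--wait-for-aspire")
    return args
```

For example, `build_e2e_args(smoke=True, wait_for_aspire=True)` yields `["e2e", "--quiet", "--smoke", "--wait-for-aspire"]`, matching the flag appends in the C# above.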
diff --git a/developer-cli/DeveloperCli.slnx.DotSettings b/developer-cli/DeveloperCli.slnx.DotSettings
deleted file mode 100644
index 5720f4d4c..000000000
--- a/developer-cli/DeveloperCli.slnx.DotSettings
+++ /dev/null
@@ -1,115 +0,0 @@
-
- HINT
- HINT
- HINT
- HINT
- HINT
- HINT
- HINT
- HINT
- HINT
- HINT
- HINT
- HINT
- True
- <?xml version="1.0" encoding="utf-16"?><Profile name=".NET only"><CppCodeStyleCleanupDescriptor /><CSReorderTypeMembers>True</CSReorderTypeMembers><CSCodeStyleAttributes ArrangeVarStyle="True" ArrangeTypeAccessModifier="True" ArrangeTypeMemberAccessModifier="True" SortModifiers="True" ArrangeArgumentsStyle="True" RemoveRedundantParentheses="True" AddMissingParentheses="True" ArrangeBraces="True" ArrangeAttributes="True" ArrangeCodeBodyStyle="True" ArrangeTrailingCommas="True" ArrangeObjectCreation="True" ArrangeDefaultValue="True" ArrangeNamespaces="True" ArrangeNullCheckingPattern="True" /><RemoveCodeRedundancies>True</RemoveCodeRedundancies><CSUseAutoProperty>True</CSUseAutoProperty><CSMakeFieldReadonly>True</CSMakeFieldReadonly><CSMakeAutoPropertyGetOnly>True</CSMakeAutoPropertyGetOnly><CSArrangeQualifiers>True</CSArrangeQualifiers><CSFixBuiltinTypeReferences>True</CSFixBuiltinTypeReferences><CSOptimizeUsings><OptimizeUsings>True</OptimizeUsings></CSOptimizeUsings><CSShortenReferences>True</CSShortenReferences><CSReformatCode>True</CSReformatCode><CSharpFormatDocComments>True</CSharpFormatDocComments><IDEA_SETTINGS><profile version="1.0">
- <option name="myName" value=".NET only" />
- <inspection_tool class="ES6ShorthandObjectProperty" enabled="false" level="WARNING" enabled_by_default="false" />
- <inspection_tool class="JSArrowFunctionBracesCanBeRemoved" enabled="false" level="WARNING" enabled_by_default="false" />
- <inspection_tool class="JSPrimitiveTypeWrapperUsage" enabled="false" level="WARNING" enabled_by_default="false" />
- <inspection_tool class="JSRemoveUnnecessaryParentheses" enabled="false" level="WARNING" enabled_by_default="false" />
- <inspection_tool class="JSUnnecessarySemicolon" enabled="false" level="WARNING" enabled_by_default="false" />
- <inspection_tool class="TypeScriptExplicitMemberType" enabled="false" level="WARNING" enabled_by_default="false" />
- <inspection_tool class="UnnecessaryContinueJS" enabled="false" level="WARNING" enabled_by_default="false" />
- <inspection_tool class="UnnecessaryLabelJS" enabled="false" level="WARNING" enabled_by_default="false" />
- <inspection_tool class="UnnecessaryLabelOnBreakStatementJS" enabled="false" level="WARNING" enabled_by_default="false" />
- <inspection_tool class="UnnecessaryLabelOnContinueStatementJS" enabled="false" level="WARNING" enabled_by_default="false" />
- <inspection_tool class="UnnecessaryReturnJS" enabled="false" level="WARNING" enabled_by_default="false" />
- <inspection_tool class="WrongPropertyKeyValueDelimiter" enabled="false" level="WEAK WARNING" enabled_by_default="false" />
-</profile></IDEA_SETTINGS><RIDER_SETTINGS><profile>
- <Language id="CSS">
- <Rearrange>false</Rearrange>
- <Reformat>false</Reformat>
- </Language>
- <Language id="EditorConfig">
- <Reformat>false</Reformat>
- </Language>
- <Language id="HTML">
- <Rearrange>false</Rearrange>
- <Reformat>false</Reformat>
- <OptimizeImports>false</OptimizeImports>
- </Language>
- <Language id="HTTP Request">
- <Reformat>false</Reformat>
- </Language>
- <Language id="Handlebars">
- <Reformat>false</Reformat>
- </Language>
- <Language id="Ini">
- <Reformat>false</Reformat>
- </Language>
- <Language id="JSON">
- <Reformat>false</Reformat>
- </Language>
- <Language id="Jade">
- <Reformat>false</Reformat>
- </Language>
- <Language id="JavaScript">
- <Rearrange>false</Rearrange>
- <Reformat>false</Reformat>
- <OptimizeImports>false</OptimizeImports>
- </Language>
- <Language id="Markdown">
- <Reformat>false</Reformat>
- </Language>
- <Language id="Properties">
- <Reformat>false</Reformat>
- </Language>
- <Language id="RELAX-NG">
- <Reformat>false</Reformat>
- </Language>
- <Language id="SQL">
- <Reformat>false</Reformat>
- </Language>
- <Language id="VueExpr">
- <Reformat>false</Reformat>
- </Language>
- <Language id="XML">
- <Rearrange>false</Rearrange>
- <Reformat>true</Reformat>
- <OptimizeImports>false</OptimizeImports>
- </Language>
- <Language id="yaml">
- <Reformat>false</Reformat>
- </Language>
-</profile></RIDER_SETTINGS><XAMLCollapseEmptyTags>False</XAMLCollapseEmptyTags></Profile>
- Required
- Required
- RequiredForMultilineStatement
- Required
- public protected required private file new internal static override abstract sealed virtual extern unsafe volatile async readonly
- 1
- 1
- 1
- True
- 5
- NEVER
- True
- False
- True
- True
- True
- True
- False
- True
- 0
- CHOP_IF_LONG
- False
- True
- True
- True
- True
- True
- True
- True
-
diff --git a/developer-cli/DeveloperCli.slnx.DotSettings b/developer-cli/DeveloperCli.slnx.DotSettings
new file mode 120000
index 000000000..1b364effd
--- /dev/null
+++ b/developer-cli/DeveloperCli.slnx.DotSettings
@@ -0,0 +1 @@
+../application/PlatformPlatform.slnx.DotSettings
\ No newline at end of file