From 7720d1faad8e0aa9f7d294d6ef54cda8df536126 Mon Sep 17 00:00:00 2001
From: Lior Kanfi
Date: Wed, 1 Oct 2025 16:47:03 +0300
Subject: [PATCH 01/95] feat: Enhance setup and planning scripts with team
 directives and constitution handling

- Added logic to setup-plan.ps1 to handle constitution and team directives
  file paths, ensuring they are set in the environment.
- Implemented sync_team_ai_directives function in specify_cli to clone or
  update the team-ai-directives repository.
- Updated init command in specify_cli to accept a team-ai-directives
  repository URL and sync it during project initialization.
- Enhanced command templates (implement.md, levelup.md, plan.md, specify.md,
  tasks.md) to incorporate checks for constitution and team directives.
- Created new levelup command to capture learnings and draft knowledge assets
  post-implementation.
- Improved task generation to include execution modes (SYNC/ASYNC) based on
  the implementation plan.
- Added tests for new functionality, including syncing team directives and
  validating outputs from setup and levelup scripts.
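Note for reviewers: a minimal sketch of consuming the extended `--json`
payload now emitted by `create-new-feature.sh` / `setup-plan.sh` (the
branch name, paths, and values below are illustrative, not from a real run;
only the key names come from this patch):

```python
import json

# Payload shaped like the extended create-new-feature.sh --json output.
# Field values are hypothetical examples.
payload = (
    '{"BRANCH_NAME":"001-pdf-export","SPEC_FILE":"/repo/specs/001-pdf-export/spec.md",'
    '"FEATURE_NUM":"001","HAS_GIT":"true","CONSTITUTION":"","TEAM_DIRECTIVES":""}'
)

data = json.loads(payload)
# Empty strings mean the optional assets were not found, mirroring the
# scripts' "(missing)" text-mode output.
constitution = data["CONSTITUTION"] or "(missing)"
team_directives = data["TEAM_DIRECTIVES"] or "(missing)"
print(constitution, team_directives)
```

Callers can branch on the empty-string sentinel the same way the text mode
prints "(missing)".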
---
 .github/CODEOWNERS                            |   2 -
 .../scripts/create-release-packages.sh        |   0
 docs/quickstart.md                            |  94 +++++++++++++++
 scripts/bash/check-prerequisites.sh           |  12 ++
 scripts/bash/common.sh                        |   1 +
 scripts/bash/create-new-feature.sh            |  71 +++++++++++-
 scripts/bash/prepare-levelup.sh               |  65 +++++++++++
 scripts/bash/setup-plan.sh                    |  49 +++++++-
 scripts/powershell/create-new-feature.ps1     |  27 +++++
 scripts/powershell/prepare-levelup.ps1        |  67 +++++++++++
 scripts/powershell/setup-plan.ps1             |  32 ++++++
 src/specify_cli/__init__.py                   | 108 ++++++++++++++++--
 templates/commands/implement.md               |  22 ++--
 templates/commands/levelup.md                 |  65 +++++++++++
 templates/commands/plan.md                    |  25 ++--
 templates/commands/specify.md                 |  19 ++-
 templates/commands/tasks.md                   |  18 ++-
 templates/plan-template.md                    |   8 ++
 templates/tasks-template.md                   |  50 ++++----
 tests/test_create_new_feature.py              |  51 +++++++++
 tests/test_prepare_levelup.py                 |  57 +++++++++
 tests/test_setup_plan.py                      |  61 ++++++++++
 tests/test_team_directives.py                 |  83 ++++++++++++++
 23 files changed, 923 insertions(+), 64 deletions(-)
 delete mode 100644 .github/CODEOWNERS
 mode change 100644 => 100755 .github/workflows/scripts/create-release-packages.sh
 mode change 100644 => 100755 scripts/bash/create-new-feature.sh
 create mode 100755 scripts/bash/prepare-levelup.sh
 mode change 100644 => 100755 scripts/bash/setup-plan.sh
 create mode 100644 scripts/powershell/prepare-levelup.ps1
 create mode 100644 templates/commands/levelup.md
 create mode 100644 tests/test_create_new_feature.py
 create mode 100644 tests/test_prepare_levelup.py
 create mode 100644 tests/test_setup_plan.py
 create mode 100644 tests/test_team_directives.py

diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS
deleted file mode 100644
index 27fe556c57..0000000000
--- a/.github/CODEOWNERS
+++ /dev/null
@@ -1,2 +0,0 @@
-# Global code owner
-* @localden
diff --git a/.github/workflows/scripts/create-release-packages.sh b/.github/workflows/scripts/create-release-packages.sh
old mode 100644
new mode 100755
diff --git a/docs/quickstart.md b/docs/quickstart.md
index 11d5c1e947..b64f9c6642 100644
--- a/docs/quickstart.md
+++ b/docs/quickstart.md
@@ -4,6 +4,100 @@ This guide will help you get started with Spec-Driven Development using Spec Kit
 
 > NEW: All automation scripts now provide both Bash (`.sh`) and PowerShell (`.ps1`) variants. The `specify` CLI auto-selects based on OS unless you pass `--script sh|ps`.
 
+## Stage 0: Foundation & Setup
+
+**Goal:** Establish the foundational rules and configure the development environment so every later stage aligns with the project's architectural and security principles.
+**Note:** Run these steps in a standard terminal before opening the Intelligent IDE.
+
+1. **Project Initialization (`/init`)**
+   **Action:** From the project root, run the Spec Kit `init` command (e.g., `specify init --team-ai-directive https://github.com/your-org/team-ai-directives.git`) to configure local settings and clone the shared `team-ai-directives` modules.
+   **Purpose:** Creates the handshake that brings the repository into the managed Agentic SDLC ecosystem, wiring credentials, endpoints, and shared knowledge needed for subsequent commands.
+2. **Establishing the Constitution (`/constitution`)**
+   **Action:** Within the IDE, execute `/constitution`, importing relevant modules from `team-ai-directives` and adding any project-specific principles.
+   **Purpose:** Generates `memory/constitution.md`, the immutable ruleset automatically injected into `/specify`, `/plan`, and other workflows so every response honors project standards.
+
+**Example Command:**
+
+```
+/constitution "Assemble the constitution for this service. Import principles from @team/context_modules/principles/v1/stateless_services.md and @team/context_modules/principles/v1/zero_trust_security_model.md. Add the custom principle: 'All public APIs must be versioned.'"
+```
+
+**Outcome:** The IDE is fully integrated with the Orchestration Hub, and a committed `constitution.md` anchors all future automation.
+
+## Stage 1: Feature Specification
+
+**Goal:** Produce a committed `spec.md` that captures the feature's intent, constraints, and acceptance criteria.
+**Note:** From Stage 1 onward, all work happens inside the Intelligent IDE with the context automatically assembled by Spec Kit.
+
+1. **Craft the Directive (`/specify`)**
+   **Action:** Author a single, comprehensive natural-language directive that blends the issue tracker mission, personas, constraints, and any clarifications.
+   **Purpose:** Front-load human judgment so the AI can draft an accurate `spec.md` aligned with the constitution.
+2. **Execute the Command**
+   **Action:** Run `/specify` in the IDE; Spec Kit loads `memory/constitution.md`, resolves `@team/...` references against the directives repo, and captures any `@issue-tracker ISSUE-###` reference in the prompt so the resulting spec links back to the originating ticket.
+   **Purpose:** Generates the structured specification artifact under `specs//spec.md` with shared principles and traceability already in context.
+3. **Review and Commit**
+   **Action:** Perform a macro-review of the generated `spec.md`, refine if needed, then commit it.
+   **Purpose:** Locks in the requirements that all later stages will honor.
+
+**Example Command:**
+
+```
+/specify "Generate the specification for the feature in @issue-tracker ISSUE-123. The target user is the @team/personas/v1/data_analyst.md. The operation must be asynchronous to handle large dashboards. The PDF title must include the dashboard name and an export timestamp."
+```
+
+**Outcome:** A committed `spec.md` ready to drive planning in Stage 2.
+
+## Stage 2: Planning & Task Management
+
+**Goal:** Convert the committed `spec.md` into a human-approved `plan.md` and a synced task list that routes work through the issue tracker.
+**Note:** `/plan` and `/tasks` run inside the IDE, reusing the constitution and the locally cloned `team-ai-directives` modules.
+
+1. **Generate the Plan (`/plan`)**
+   **Action:** Execute `/plan` with a directive that covers tech stack, risk considerations, testing focus, and any implementation preferences. Spec Kit loads `memory/constitution.md`, references in `team-ai-directives`, and copies the plan template before executing automation.
+   **Purpose:** Guides the AI in generating a comprehensive and strategically-sound first draft of `plan.md`—front-loading human judgment yields more robust outputs, and the AI produces technical steps with preliminary [SYNC]/[ASYNC] triage suggestions while emitting `plan.md`, `research.md`, `data-model.md`, `quickstart.md`, and contract stubs aligned with the constitution.
+2. **Macro-Review and Commit**
+   **Action:** Review the generated artifacts, adjust as needed, decide [SYNC]/[ASYNC] triage, then commit.
+   **Purpose:** Locks in an execution strategy that downstream stages must respect.
+3. **Sync Tasks (`/tasks`)**
+   **Action:** Run `/tasks` to transform the validated plan into numbered tasks, ensuring each contract, test, and implementation step is represented. The command requires the committed plan artifacts and will surface gaps if prerequisites are missing.
+   **Purpose:** Creates `tasks.md` and mirrors it to the issue tracker for execution visibility.
+
+**Outcome:** A constitution-compliant `plan.md`, supporting design artifacts, and an actionable task list synchronized with project management.
+
+## Stage 3: Implementation
+
+**Goal:** Execute the validated plan, honoring the `[SYNC]/[ASYNC]` execution modes and completing every task in `tasks.md`.
+**Note:** Use `/implement` within the IDE; the command enforces the TDD order, dependency rules, and execution modes captured in Stages 1-2.
+
+1. **Execute Tasks (`/implement`)**
+   **Action:** Run `/implement` to load `plan.md`, `tasks.md`, and supporting artifacts. Follow the phase-by-phase flow, completing tests before implementation and respecting `[SYNC]/[ASYNC]` modes and `[P]` parallel markers.
+   **Purpose:** Produces production-ready code, marks tasks as `[X]`, and preserves the execution trace for Stage 4.
+2. **Review & Validate**
+   **Action:** Ensure all `[SYNC]` tasks received micro-reviews, all `[ASYNC]` work underwent macro-review, and the test suite passes before moving on.
+   **Purpose:** Guarantees the feature matches the spec and plan with traceable quality gates.
+
+**Outcome:** A completed feature branch with passing tests and an updated `tasks.md` documenting execution status and modes.
+
+## Stage 4: Leveling Up
+
+**Goal:** Capture best practices from the completed feature, draft a reusable knowledge asset in `team-ai-directives`, and generate traceability notes for the original issue.
+**Note:** `/levelup` runs inside the IDE and relies on the locally cloned directives repository from Stage 0.
+
+1. **Run Level-Up Workflow (`/levelup`)**
+   **Action:** Invoke `/levelup` with a strategic directive (e.g., highlight what should become reusable). Spec Kit gathers spec/plan/tasks metadata, validates the directives repo, and prompts you to synthesize a knowledge asset plus PR/issue summaries.
+   **Purpose:** Produces a draft markdown asset under `.specify/memory/team-ai-directives/drafts/`, along with a pull-request description and trace comment for review.
+2. **Review & Publish**
+   **Action:** Inspect the generated asset and summaries. When satisfied, confirm inside `/levelup` to let it create a `levelup/{slug}` branch, commit the asset, push (when remotes are configured), open a PR via `gh pr create` (or emit the command), and post the trace comment (or provide the text if automation is unavailable).
+   **Purpose:** Ensures lessons learned become part of the team's shared brain and closes the loop with traceability artifacts without manual branching overhead.
+
+**Example Command:**
+
+```
+/levelup "Capture the FastAPI error-handling patterns we refined while closing ISSUE-123. Summarize why the retry strategy works, when to apply it, and provide links to the final implementation."
+```
+
+**Outcome:** A knowledge asset ready for PR, a drafted trace comment for the issue tracker, and clear next steps for team review.
+
 ## The 4-Step Process
 
 ### 1. Install Specify
diff --git a/scripts/bash/check-prerequisites.sh b/scripts/bash/check-prerequisites.sh
index f32b6245ae..d5d96e2c1f 100644
--- a/scripts/bash/check-prerequisites.sh
+++ b/scripts/bash/check-prerequisites.sh
@@ -112,6 +112,18 @@ if [[ ! -f "$IMPL_PLAN" ]]; then
   exit 1
 fi
 
+if [[ ! -f "$CONTEXT" ]]; then
+  echo "ERROR: context.md not found in $FEATURE_DIR" >&2
+  echo "Run /specify and populate context.md before continuing." >&2
+  exit 1
+fi
+
+if grep -q "\[NEEDS INPUT\]" "$CONTEXT"; then
+  echo "ERROR: context.md contains unresolved [NEEDS INPUT] markers." >&2
+  echo "Update $CONTEXT with current mission, code paths, directives, research, and gateway details before proceeding." >&2
+  exit 1
+fi
+
 # Check for tasks.md if required
 if $REQUIRE_TASKS && [[ ! -f "$TASKS" ]]; then
   echo "ERROR: tasks.md not found in $FEATURE_DIR" >&2
diff --git a/scripts/bash/common.sh b/scripts/bash/common.sh
index 34e5d4bb78..aebebbc54d 100644
--- a/scripts/bash/common.sh
+++ b/scripts/bash/common.sh
@@ -105,6 +105,7 @@ TASKS='$feature_dir/tasks.md'
 RESEARCH='$feature_dir/research.md'
 DATA_MODEL='$feature_dir/data-model.md'
 QUICKSTART='$feature_dir/quickstart.md'
+CONTEXT='$feature_dir/context.md'
 CONTRACTS_DIR='$feature_dir/contracts'
 EOF
 }
diff --git a/scripts/bash/create-new-feature.sh b/scripts/bash/create-new-feature.sh
old mode 100644
new mode 100755
index 5cb17fabef..5130c9fa11
--- a/scripts/bash/create-new-feature.sh
+++ b/scripts/bash/create-new-feature.sh
@@ -18,6 +18,8 @@ if [ -z "$FEATURE_DESCRIPTION" ]; then
     exit 1
 fi
 
+TEAM_DIRECTIVES_DIRNAME="team-ai-directives"
+
 # Function to find the repository root by searching for existing project markers
 find_repo_root() {
     local dir="$1"
@@ -53,6 +55,20 @@ cd "$REPO_ROOT"
 SPECS_DIR="$REPO_ROOT/specs"
 mkdir -p "$SPECS_DIR"
 
+CONSTITUTION_FILE="$REPO_ROOT/.specify/memory/constitution.md"
+if [ -f "$CONSTITUTION_FILE" ]; then
+    export SPECIFY_CONSTITUTION="$CONSTITUTION_FILE"
+else
+    CONSTITUTION_FILE=""
+fi
+
+TEAM_DIRECTIVES_DIR="$REPO_ROOT/.specify/memory/$TEAM_DIRECTIVES_DIRNAME"
+if [ -d "$TEAM_DIRECTIVES_DIR" ]; then
+    export SPECIFY_TEAM_DIRECTIVES="$TEAM_DIRECTIVES_DIR"
+else
+    TEAM_DIRECTIVES_DIR=""
+fi
+
 HIGHEST=0
 if [ -d "$SPECS_DIR" ]; then
     for dir in "$SPECS_DIR"/*; do
@@ -84,14 +100,67 @@ TEMPLATE="$REPO_ROOT/.specify/templates/spec-template.md"
 SPEC_FILE="$FEATURE_DIR/spec.md"
 if [ -f "$TEMPLATE" ]; then cp "$TEMPLATE" "$SPEC_FILE"; else touch "$SPEC_FILE"; fi
 
+CONTEXT_TEMPLATE="$REPO_ROOT/.specify/templates/context-template.md"
+CONTEXT_FILE="$FEATURE_DIR/context.md"
+if [ -f "$CONTEXT_TEMPLATE" ]; then
+    if command -v sed >/dev/null 2>&1; then
+        sed "s/\[FEATURE NAME\]/$BRANCH_NAME/" "$CONTEXT_TEMPLATE" > "$CONTEXT_FILE"
+    else
+        cp "$CONTEXT_TEMPLATE" "$CONTEXT_FILE"
+    fi
+else
+    cat <<'EOF' > "$CONTEXT_FILE"
+# Feature Context
+
+## Mission Brief
+- **Issue Tracker**: [NEEDS INPUT]
+- **Summary**: [NEEDS INPUT]
+
+## Local Context
+- Relevant code paths:
+  - [NEEDS INPUT]
+- Existing dependencies or services touched:
+  - [NEEDS INPUT]
+
+## Team Directives
+- Referenced modules:
+  - [NEEDS INPUT]
+- Additional guardrails:
+  - [NEEDS INPUT]
+
+## External Research & References
+- Links or documents:
+  - [NEEDS INPUT]
+
+## Gateway Check
+- Last verified gateway endpoint: [NEEDS INPUT]
+- Verification timestamp (UTC): [NEEDS INPUT]
+
+## Open Questions
+- [NEEDS INPUT]
+EOF
+fi
+
 # Set the SPECIFY_FEATURE environment variable for the current session
 export SPECIFY_FEATURE="$BRANCH_NAME"
 
 if $JSON_MODE; then
-    printf '{"BRANCH_NAME":"%s","SPEC_FILE":"%s","FEATURE_NUM":"%s"}\n' "$BRANCH_NAME" "$SPEC_FILE" "$FEATURE_NUM"
+    printf '{"BRANCH_NAME":"%s","SPEC_FILE":"%s","FEATURE_NUM":"%s","HAS_GIT":"%s","CONSTITUTION":"%s","TEAM_DIRECTIVES":"%s"}\n' \
+        "$BRANCH_NAME" "$SPEC_FILE" "$FEATURE_NUM" "$HAS_GIT" "$CONSTITUTION_FILE" "$TEAM_DIRECTIVES_DIR"
 else
     echo "BRANCH_NAME: $BRANCH_NAME"
     echo "SPEC_FILE: $SPEC_FILE"
     echo "FEATURE_NUM: $FEATURE_NUM"
+    echo "HAS_GIT: $HAS_GIT"
+    if [ -n "$CONSTITUTION_FILE" ]; then
+        echo "CONSTITUTION: $CONSTITUTION_FILE"
+    else
+        echo "CONSTITUTION: (missing)"
+    fi
+    if [ -n "$TEAM_DIRECTIVES_DIR" ]; then
+        echo "TEAM_DIRECTIVES: $TEAM_DIRECTIVES_DIR"
+    else
+        echo "TEAM_DIRECTIVES: (missing)"
+    fi
     echo "SPECIFY_FEATURE environment variable set to: $BRANCH_NAME"
 fi
diff --git a/scripts/bash/prepare-levelup.sh b/scripts/bash/prepare-levelup.sh
new file mode 100755
index 0000000000..5dc2369c96
--- /dev/null
+++ b/scripts/bash/prepare-levelup.sh
@@ -0,0 +1,65 @@
+#!/usr/bin/env bash
+
+set -e
+
+JSON_MODE=false
+ARGS=()
+
+for arg in "$@"; do
+    case "$arg" in
+        --json)
+            JSON_MODE=true
+            ;;
+        --help|-h)
+            echo "Usage: $0 [--json]"
+            echo "  --json    Output results in JSON format"
+            echo "  --help    Show this help message"
+            exit 0
+            ;;
+        *)
+            ARGS+=("$arg")
+            ;;
+    esac
+done
+
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+source "$SCRIPT_DIR/common.sh"
+
+eval $(get_feature_paths)
+
+check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1
+
+FEATURE_BASENAME="$(basename "$FEATURE_DIR")"
+
+KNOWLEDGE_ROOT="${SPECIFY_TEAM_DIRECTIVES:-}"
+if [[ -z "$KNOWLEDGE_ROOT" ]]; then
+    KNOWLEDGE_ROOT="$REPO_ROOT/.specify/memory/team-ai-directives"
+fi
+
+KNOWLEDGE_DRAFTS=""
+if [[ -d "$KNOWLEDGE_ROOT" ]]; then
+    KNOWLEDGE_DRAFTS="$KNOWLEDGE_ROOT/drafts"
+    mkdir -p "$KNOWLEDGE_DRAFTS"
+else
+    KNOWLEDGE_ROOT=""
+fi
+
+if $JSON_MODE; then
+    printf '{"FEATURE_DIR":"%s","BRANCH":"%s","SPEC_FILE":"%s","PLAN_FILE":"%s","TASKS_FILE":"%s","RESEARCH_FILE":"%s","QUICKSTART_FILE":"%s","KNOWLEDGE_ROOT":"%s","KNOWLEDGE_DRAFTS":"%s"}\n' \
+        "$FEATURE_DIR" "$CURRENT_BRANCH" "$FEATURE_SPEC" "$IMPL_PLAN" "$TASKS" "$RESEARCH" "$QUICKSTART" "$KNOWLEDGE_ROOT" "$KNOWLEDGE_DRAFTS"
+else
+    echo "FEATURE_DIR: $FEATURE_DIR"
+    echo "BRANCH: $CURRENT_BRANCH"
+    echo "SPEC_FILE: $FEATURE_SPEC"
+    echo "PLAN_FILE: $IMPL_PLAN"
+    echo "TASKS_FILE: $TASKS"
+    echo "RESEARCH_FILE: $RESEARCH"
+    echo "QUICKSTART_FILE: $QUICKSTART"
+    if [[ -n "$KNOWLEDGE_ROOT" ]]; then
+        echo "KNOWLEDGE_ROOT: $KNOWLEDGE_ROOT"
+        echo "KNOWLEDGE_DRAFTS: $KNOWLEDGE_DRAFTS"
+    else
+        echo "KNOWLEDGE_ROOT: (missing)"
+        echo "KNOWLEDGE_DRAFTS: (missing)"
+    fi
+fi
diff --git a/scripts/bash/setup-plan.sh b/scripts/bash/setup-plan.sh
old mode 100644
new mode 100755
index 654ba50d7b..a71b2a337c
--- a/scripts/bash/setup-plan.sh
+++ b/scripts/bash/setup-plan.sh
@@ -47,14 +47,59 @@ else
     touch "$IMPL_PLAN"
 fi
 
+CONTEXT_FILE="$FEATURE_DIR/context.md"
+if [[ ! -f "$CONTEXT_FILE" ]]; then
+    echo "ERROR: context.md not found in $FEATURE_DIR" >&2
+    echo "Fill out the feature context before running /plan." >&2
+    exit 1
+fi
+
+if grep -q "\[NEEDS INPUT\]" "$CONTEXT_FILE"; then
+    echo "ERROR: context.md contains unresolved [NEEDS INPUT] markers." >&2
+    echo "Please update $CONTEXT_FILE with mission, code paths, directives, research, and gateway details before proceeding." >&2
+    exit 1
+fi
+
+# Resolve constitution and team directives paths (prefer env overrides)
+CONSTITUTION_FILE="${SPECIFY_CONSTITUTION:-}"
+if [[ -z "$CONSTITUTION_FILE" ]]; then
+    CONSTITUTION_FILE="$REPO_ROOT/.specify/memory/constitution.md"
+fi
+if [[ -f "$CONSTITUTION_FILE" ]]; then
+    export SPECIFY_CONSTITUTION="$CONSTITUTION_FILE"
+else
+    CONSTITUTION_FILE=""
+fi
+
+TEAM_DIRECTIVES_DIR="${SPECIFY_TEAM_DIRECTIVES:-}"
+if [[ -z "$TEAM_DIRECTIVES_DIR" ]]; then
+    TEAM_DIRECTIVES_DIR="$REPO_ROOT/.specify/memory/team-ai-directives"
+fi
+if [[ -d "$TEAM_DIRECTIVES_DIR" ]]; then
+    export SPECIFY_TEAM_DIRECTIVES="$TEAM_DIRECTIVES_DIR"
+else
+    TEAM_DIRECTIVES_DIR=""
+fi
+
 # Output results
 if $JSON_MODE; then
-    printf '{"FEATURE_SPEC":"%s","IMPL_PLAN":"%s","SPECS_DIR":"%s","BRANCH":"%s","HAS_GIT":"%s"}\n' \
-        "$FEATURE_SPEC" "$IMPL_PLAN" "$FEATURE_DIR" "$CURRENT_BRANCH" "$HAS_GIT"
+    printf '{"FEATURE_SPEC":"%s","IMPL_PLAN":"%s","SPECS_DIR":"%s","BRANCH":"%s","HAS_GIT":"%s","CONSTITUTION":"%s","TEAM_DIRECTIVES":"%s","CONTEXT_FILE":"%s"}\n' \
+        "$FEATURE_SPEC" "$IMPL_PLAN" "$FEATURE_DIR" "$CURRENT_BRANCH" "$HAS_GIT" "$CONSTITUTION_FILE" "$TEAM_DIRECTIVES_DIR" "$CONTEXT_FILE"
 else
     echo "FEATURE_SPEC: $FEATURE_SPEC"
     echo "IMPL_PLAN: $IMPL_PLAN"
     echo "SPECS_DIR: $FEATURE_DIR"
     echo "BRANCH: $CURRENT_BRANCH"
     echo "HAS_GIT: $HAS_GIT"
+    if [[ -n "$CONSTITUTION_FILE" ]]; then
+        echo "CONSTITUTION: $CONSTITUTION_FILE"
+    else
+        echo "CONSTITUTION: (missing)"
+    fi
+    if [[ -n "$TEAM_DIRECTIVES_DIR" ]]; then
+        echo "TEAM_DIRECTIVES: $TEAM_DIRECTIVES_DIR"
+    else
+        echo "TEAM_DIRECTIVES: (missing)"
+    fi
+    echo "CONTEXT_FILE: $CONTEXT_FILE"
 fi
diff --git a/scripts/powershell/create-new-feature.ps1 b/scripts/powershell/create-new-feature.ps1
index 0f1f591275..4380fef399 100644
--- a/scripts/powershell/create-new-feature.ps1
+++ b/scripts/powershell/create-new-feature.ps1
@@ -60,6 +60,21 @@ Set-Location $repoRoot
 $specsDir = Join-Path $repoRoot 'specs'
 New-Item -ItemType Directory -Path $specsDir -Force | Out-Null
 
+$constitutionFile = Join-Path $repoRoot '.specify/memory/constitution.md'
+if (Test-Path $constitutionFile) {
+    $env:SPECIFY_CONSTITUTION = $constitutionFile
+} else {
+    $constitutionFile = ""
+}
+
+$teamDirName = 'team-ai-directives'
+$teamDirectives = Join-Path $repoRoot ".specify/memory/$teamDirName"
+if (Test-Path $teamDirectives) {
+    $env:SPECIFY_TEAM_DIRECTIVES = $teamDirectives
+} else {
+    $teamDirectives = ""
+}
+
 $highest = 0
 if (Test-Path $specsDir) {
     Get-ChildItem -Path $specsDir -Directory | ForEach-Object {
@@ -106,6 +121,8 @@ if ($Json) {
         SPEC_FILE = $specFile
         FEATURE_NUM = $featureNum
         HAS_GIT = $hasGit
+        CONSTITUTION = $constitutionFile
+        TEAM_DIRECTIVES = $teamDirectives
     }
     $obj | ConvertTo-Json -Compress
 } else {
@@ -113,5 +130,15 @@ if ($Json) {
     Write-Output "SPEC_FILE: $specFile"
     Write-Output "FEATURE_NUM: $featureNum"
     Write-Output "HAS_GIT: $hasGit"
+    if ($constitutionFile) {
+        Write-Output "CONSTITUTION: $constitutionFile"
+    } else {
+        Write-Output "CONSTITUTION: (missing)"
+    }
+    if ($teamDirectives) {
+        Write-Output "TEAM_DIRECTIVES: $teamDirectives"
+    } else {
+        Write-Output "TEAM_DIRECTIVES: (missing)"
+    }
     Write-Output "SPECIFY_FEATURE environment variable set to: $branchName"
 }
diff --git a/scripts/powershell/prepare-levelup.ps1 b/scripts/powershell/prepare-levelup.ps1
new file mode 100644
index 0000000000..ff0dfd0b70
--- /dev/null
+++ b/scripts/powershell/prepare-levelup.ps1
@@ -0,0 +1,67 @@
+#!/usr/bin/env pwsh
+[CmdletBinding()]
+param(
+    [switch]$Json,
+    [switch]$Help
+)
+
+$ErrorActionPreference = 'Stop'
+
+if ($Help) {
+    Write-Output "Usage: ./prepare-levelup.ps1 [-Json] [-Help]"
+    Write-Output "  -Json    Output results in JSON format"
+    Write-Output "  -Help    Show this help message"
+    exit 0
+}
+
+. "$PSScriptRoot/common.ps1"
+
+$paths = Get-FeaturePathsEnv
+
+if (-not (Test-FeatureBranch -Branch $paths.CURRENT_BRANCH -HasGit $paths.HAS_GIT)) {
+    exit 1
+}
+
+$knowledgeRoot = $env:SPECIFY_TEAM_DIRECTIVES
+if (-not $knowledgeRoot) {
+    $knowledgeRoot = Join-Path $paths.REPO_ROOT '.specify/memory/team-ai-directives'
+}
+
+$knowledgeDrafts = $null
+if (Test-Path $knowledgeRoot -PathType Container) {
+    $knowledgeDrafts = Join-Path $knowledgeRoot 'drafts'
+    New-Item -ItemType Directory -Path $knowledgeDrafts -Force | Out-Null
+} else {
+    $knowledgeRoot = ''
+    $knowledgeDrafts = ''
+}
+
+if ($Json) {
+    $result = [PSCustomObject]@{
+        FEATURE_DIR = $paths.FEATURE_DIR
+        BRANCH = $paths.CURRENT_BRANCH
+        SPEC_FILE = $paths.FEATURE_SPEC
+        PLAN_FILE = $paths.IMPL_PLAN
+        TASKS_FILE = $paths.TASKS
+        RESEARCH_FILE = $paths.RESEARCH
+        QUICKSTART_FILE = $paths.QUICKSTART
+        KNOWLEDGE_ROOT = $knowledgeRoot
+        KNOWLEDGE_DRAFTS = $knowledgeDrafts
+    }
+    $result | ConvertTo-Json -Compress
+} else {
+    Write-Output "FEATURE_DIR: $($paths.FEATURE_DIR)"
+    Write-Output "BRANCH: $($paths.CURRENT_BRANCH)"
+    Write-Output "SPEC_FILE: $($paths.FEATURE_SPEC)"
+    Write-Output "PLAN_FILE: $($paths.IMPL_PLAN)"
+    Write-Output "TASKS_FILE: $($paths.TASKS)"
+    Write-Output "RESEARCH_FILE: $($paths.RESEARCH)"
+    Write-Output "QUICKSTART_FILE: $($paths.QUICKSTART)"
+    if ($knowledgeRoot) {
+        Write-Output "KNOWLEDGE_ROOT: $knowledgeRoot"
+        Write-Output "KNOWLEDGE_DRAFTS: $knowledgeDrafts"
+    } else {
+        Write-Output "KNOWLEDGE_ROOT: (missing)"
+        Write-Output "KNOWLEDGE_DRAFTS: (missing)"
+    }
+}
diff --git a/scripts/powershell/setup-plan.ps1 b/scripts/powershell/setup-plan.ps1
index d0ed582fa9..5d71c92768 100644
--- a/scripts/powershell/setup-plan.ps1
+++ b/scripts/powershell/setup-plan.ps1
@@ -42,6 +42,26 @@ if (Test-Path $template) {
     New-Item -ItemType File -Path $paths.IMPL_PLAN -Force | Out-Null
 }
 
+$constitutionFile = $env:SPECIFY_CONSTITUTION
+if (-not $constitutionFile) {
+    $constitutionFile = Join-Path $paths.REPO_ROOT '.specify/memory/constitution.md'
+}
+if (Test-Path $constitutionFile) {
+    $env:SPECIFY_CONSTITUTION = $constitutionFile
+} else {
+    $constitutionFile = ''
+}
+
+$teamDirectives = $env:SPECIFY_TEAM_DIRECTIVES
+if (-not $teamDirectives) {
+    $teamDirectives = Join-Path $paths.REPO_ROOT '.specify/memory/team-ai-directives'
+}
+if (Test-Path $teamDirectives) {
+    $env:SPECIFY_TEAM_DIRECTIVES = $teamDirectives
+} else {
+    $teamDirectives = ''
+}
+
 # Output results
 if ($Json) {
     $result = [PSCustomObject]@{
@@ -50,6 +70,8 @@ if ($Json) {
         SPECS_DIR = $paths.FEATURE_DIR
         BRANCH = $paths.CURRENT_BRANCH
         HAS_GIT = $paths.HAS_GIT
+        CONSTITUTION = $constitutionFile
+        TEAM_DIRECTIVES = $teamDirectives
     }
     $result | ConvertTo-Json -Compress
 } else {
@@ -58,4 +80,14 @@ if ($Json) {
     Write-Output "SPECS_DIR: $($paths.FEATURE_DIR)"
     Write-Output "BRANCH: $($paths.CURRENT_BRANCH)"
    Write-Output "HAS_GIT: $($paths.HAS_GIT)"
+    if ($constitutionFile) {
+        Write-Output "CONSTITUTION: $constitutionFile"
+    } else {
+        Write-Output "CONSTITUTION: (missing)"
+    }
+    if ($teamDirectives) {
+        Write-Output "TEAM_DIRECTIVES: $teamDirectives"
+    } else {
+        Write-Output "TEAM_DIRECTIVES: (missing)"
+    }
 }
diff --git a/src/specify_cli/__init__.py b/src/specify_cli/__init__.py
index 83d2fdf87f..cb495a6e91 100644
--- a/src/specify_cli/__init__.py
+++ b/src/specify_cli/__init__.py
@@ -53,6 +53,8 @@ ssl_context = truststore.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
 client = httpx.Client(verify=ssl_context)
 
+TEAM_DIRECTIVES_DIRNAME = "team-ai-directives"
+
 def _github_token(cli_token: str | None = None) -> str | None:
     """Return sanitized GitHub token (cli arg takes precedence) or None."""
     return ((cli_token or os.getenv("GH_TOKEN") or os.getenv("GITHUB_TOKEN") or "").strip()) or None
@@ -386,6 +388,61 @@ def check_tool(tool: str, install_hint: str) -> bool:
     return False
 
+def _run_git_command(args: list[str], cwd: Path | None = None, *, env: dict[str, str] | None = None) -> subprocess.CompletedProcess:
+    """Run a git command with optional working directory and environment overrides."""
+    cmd = ["git"]
+    if cwd is not None:
+        cmd.extend(["-C", str(cwd)])
+    cmd.extend(args)
+    return subprocess.run(cmd, check=True, capture_output=True, text=True, env=env)
+
+
+def sync_team_ai_directives(repo_url: str, project_root: Path, *, skip_tls: bool = False) -> str:
+    """Clone or update the team-ai-directives repository under .specify/memory.
+
+    Returns a short status string describing the action performed.
+    """
+    repo_url = (repo_url or "").strip()
+    if not repo_url:
+        raise ValueError("Team AI directives repository URL cannot be empty")
+
+    memory_root = project_root / ".specify" / "memory"
+    memory_root.mkdir(parents=True, exist_ok=True)
+    destination = memory_root / TEAM_DIRECTIVES_DIRNAME
+
+    git_env = os.environ.copy()
+    if skip_tls:
+        git_env["GIT_SSL_NO_VERIFY"] = "1"
+
+    try:
+        if destination.exists() and any(destination.iterdir()):
+            _run_git_command(["rev-parse", "--is-inside-work-tree"], cwd=destination, env=git_env)
+            try:
+                existing_remote = _run_git_command([
+                    "config",
+                    "--get",
+                    "remote.origin.url",
+                ], cwd=destination, env=git_env).stdout.strip()
+            except subprocess.CalledProcessError:
+                existing_remote = ""
+
+            if existing_remote and existing_remote != repo_url:
+                _run_git_command(["remote", "set-url", "origin", repo_url], cwd=destination, env=git_env)
+
+            _run_git_command(["pull", "--ff-only"], cwd=destination, env=git_env)
+            return "updated"
+
+        if destination.exists() and not any(destination.iterdir()):
+            shutil.rmtree(destination)
+
+        memory_root.mkdir(parents=True, exist_ok=True)
+        _run_git_command(["clone", repo_url, str(destination)], env=git_env)
+        return "cloned"
+    except subprocess.CalledProcessError as exc:
+        message = exc.stderr.strip() if exc.stderr else str(exc)
+        raise RuntimeError(f"Git operation failed: {message}") from exc
+
+
 def is_git_repo(path: Path = None) -> bool:
     """Check if the specified path is inside a git repository."""
     if path is None:
@@ -690,7 +747,6 @@ def download_and_extract_template(project_path: Path, ai_assistant: str, script_
     finally:
         if tracker:
             tracker.add("cleanup", "Remove temporary archive")
-        # Clean up downloaded ZIP file
         if zip_path.exists():
             zip_path.unlink()
         if tracker:
@@ -757,6 +813,7 @@ def init(
     skip_tls: bool = typer.Option(False, "--skip-tls", help="Skip SSL/TLS verification (not recommended)"),
     debug: bool = typer.Option(False, "--debug", help="Show verbose diagnostic output for network and extraction failures"),
     github_token: str = typer.Option(None, "--github-token", help="GitHub token to use for API requests (or set GH_TOKEN or GITHUB_TOKEN environment variable)"),
+    team_ai_directives: str = typer.Option(None, "--team-ai-directive", "--team-ai-directives", help="Clone or update a team-ai-directives repository into .specify/memory"),
 ):
     """
     Initialize a new Specify project from the latest template.
@@ -768,6 +825,7 @@ def init(
     4. Extract the template to a new project directory or current directory
     5. Initialize a fresh git repository (if not --no-git and no existing repo)
     6. Optionally set up AI assistant commands
+    7. Optionally clone or update a team-ai-directives repository for shared directives
 
     Examples:
         specify init my-project
@@ -785,6 +843,7 @@ def init(
         specify init --here --ai codex
         specify init --here
         specify init --here --force  # Skip confirmation when current directory not empty
+        specify init my-project --team-ai-directive https://github.com/my-org/team-ai-directives.git
     """
     # Show banner first
     show_banner()
@@ -830,7 +889,7 @@ def init(
         console.print()
         console.print(error_panel)
         raise typer.Exit(1)
-    
+
     # Create formatted setup info with column alignment
     current_dir = Path.cwd()
@@ -847,14 +906,23 @@ def init(
     console.print(Panel("\n".join(setup_lines), border_style="cyan", padding=(1, 2)))
 
-    # Check git only if we might need it (not --no-git)
-    # Only set to True if the user wants it and the tool is available
+    git_required_for_init = not no_git
+    git_required_for_directives = bool(team_ai_directives)
+    git_required = git_required_for_init or git_required_for_directives
+    git_available = True
+    should_init_git = False
-    if not no_git:
-        should_init_git = check_tool("git", "https://git-scm.com/downloads")
-        if not should_init_git:
+    if git_required:
+        git_available = check_tool("git", "https://git-scm.com/downloads")
+        if not git_available:
+            if git_required_for_directives:
+                console.print("[red]Error:[/red] Git is required to clone team-ai-directives. Install git or omit --team-ai-directive.")
+                raise typer.Exit(1)
             console.print("[yellow]Git not found - will skip repository initialization[/yellow]")
 
+    if git_available and git_required_for_init:
+        should_init_git = True
+
     # AI assistant selection
     if ai_assistant:
@@ -951,6 +1019,7 @@ def init(
             ("extracted-summary", "Extraction summary"),
             ("chmod", "Ensure scripts executable"),
             ("cleanup", "Cleanup"),
+            ("directives", "Sync team directives"),
             ("git", "Initialize git repository"),
             ("final", "Finalize")
         ]:
@@ -960,16 +1029,36 @@ def init(
     with Live(tracker.render(), console=console, refresh_per_second=8, transient=True) as live:
         tracker.attach_refresh(lambda: live.update(tracker.render()))
         try:
-            # Create a httpx client with verify based on skip_tls
             verify = not skip_tls
             local_ssl_context = ssl_context if verify else False
             local_client = httpx.Client(verify=local_ssl_context)
 
-            download_and_extract_template(project_path, selected_ai, selected_script, here, verbose=False, tracker=tracker, client=local_client, debug=debug, github_token=github_token)
+            download_and_extract_template(
+                project_path,
+                selected_ai,
+                selected_script,
+                here,
+                verbose=False,
+                tracker=tracker,
+                client=local_client,
+                debug=debug,
+                github_token=github_token,
+            )
 
             # Ensure scripts are executable (POSIX)
             ensure_executable_scripts(project_path, tracker=tracker)
 
+            if team_ai_directives and team_ai_directives.strip():
+                tracker.start("directives", "syncing")
+                try:
+                    directives_status = sync_team_ai_directives(team_ai_directives, project_path, skip_tls=skip_tls)
+                    tracker.complete("directives", directives_status)
+                except Exception as e:
+                    tracker.error("directives", str(e))
+                    raise
+            else:
+                tracker.skip("directives", "not provided")
+
             # Git step
             if not no_git:
                 tracker.start("git")
@@ -1065,6 +1154,7 @@ def init(
     steps_lines.append("   2.5 [cyan]/tasks[/] - Generate actionable tasks")
     steps_lines.append("   2.6 [cyan]/analyze[/] - Validate alignment & surface inconsistencies (read-only)")
     steps_lines.append("   2.7 [cyan]/implement[/] - Execute implementation")
+    steps_lines.append("   2.8 [cyan]/levelup[/] - Capture learnings & draft knowledge assets")
 
     steps_panel = Panel("\n".join(steps_lines), title="Next Steps", border_style="cyan", padding=(1,2))
     console.print()
diff --git a/templates/commands/implement.md b/templates/commands/implement.md
index ff2f1b699b..12822dd6c7 100644
--- a/templates/commands/implement.md
+++ b/templates/commands/implement.md
@@ -13,43 +13,51 @@ $ARGUMENTS
 
 1. Run `{SCRIPT}` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute.
 
-2. Load and analyze the implementation context:
+2. Verify feature context:
+   - Load `context.md` from FEATURE_DIR.
+   - STOP if the file is missing or still contains `[NEEDS INPUT]` markers; request updated mission brief, code paths, directives, research, and gateway status before continuing.
+
+3. Load and analyze the implementation context:
    - **REQUIRED**: Read tasks.md for the complete task list and execution plan
    - **REQUIRED**: Read plan.md for tech stack, architecture, and file structure
    - **IF EXISTS**: Read data-model.md for entities and relationships
    - **IF EXISTS**: Read contracts/ for API specifications and test requirements
    - **IF EXISTS**: Read research.md for technical decisions and constraints
    - **IF EXISTS**: Read quickstart.md for integration scenarios
+   - Extract the Execution Mode for each task (`[SYNC]` or `[ASYNC]`) and ensure every task is explicitly tagged. STOP and raise an error if any tasks lack a mode.
 
-3. Parse tasks.md structure and extract:
+4. Parse tasks.md structure and extract:
    - **Task phases**: Setup, Tests, Core, Integration, Polish
    - **Task dependencies**: Sequential vs parallel execution rules
    - **Task details**: ID, description, file paths, parallel markers [P]
    - **Execution flow**: Order and dependency requirements
 
-4. Execute implementation following the task plan:
+5. Execute implementation following the task plan:
    - **Phase-by-phase execution**: Complete each phase before moving to the next
    - **Respect dependencies**: Run sequential tasks in order, parallel tasks [P] can run together
    - **Follow TDD approach**: Execute test tasks before their corresponding implementation tasks
    - **File-based coordination**: Tasks affecting the same files must run sequentially
   - **Validation checkpoints**: Verify each phase completion before proceeding
+   - **Execution modes**:
+     * `[SYNC]` tasks → operate in tight feedback loops, narrate intent, request confirmation before and after significant changes, and capture micro-review notes.
+     * `[ASYNC]` tasks → may be executed autonomously but still require post-task summaries and macro-review validation against plan.md.
 
-5. Implementation execution rules:
+6. Implementation execution rules:
    - **Setup first**: Initialize project structure, dependencies, configuration
    - **Tests before code**: If you need to write tests for contracts, entities, and integration scenarios
    - **Core development**: Implement models, services, CLI commands, endpoints
    - **Integration work**: Database connections, middleware, logging, external services
    - **Polish and validation**: Unit tests, performance optimization, documentation
 
-6. Progress tracking and error handling:
+7. Progress tracking and error handling:
    - Report progress after each completed task
   - Halt execution if any non-parallel task fails
   - For parallel tasks [P], continue with successful tasks, report failed ones
   - Provide clear error messages with context for debugging
   - Suggest next steps if implementation cannot proceed
-   - **IMPORTANT** For completed tasks, make sure to mark the task off as [X] in the tasks file.
+   - **IMPORTANT** For completed tasks, mark the task off as `[X]` in tasks.md while preserving `[SYNC]/[ASYNC]` tags and [P] markers.
 
-7. Completion validation:
+8.
Completion validation: - Verify all required tasks are completed - Check that implemented features match the original specification - Validate that tests pass and coverage meets requirements diff --git a/templates/commands/levelup.md b/templates/commands/levelup.md new file mode 100644 index 0000000000..13ccd82661 --- /dev/null +++ b/templates/commands/levelup.md @@ -0,0 +1,65 @@ +--- +description: Capture learnings from a completed feature and draft a knowledge asset plus traceability summary. +scripts: + sh: scripts/bash/prepare-levelup.sh --json + ps: scripts/powershell/prepare-levelup.ps1 -Json +--- + +The user input to you can be provided directly by the agent or as a command argument - you **MUST** consider it before proceeding with the prompt (if not empty). + +User input: + +$ARGUMENTS + +1. Run `{SCRIPT}` from the repo root and parse JSON for `FEATURE_DIR`, `BRANCH`, `SPEC_FILE`, `PLAN_FILE`, `TASKS_FILE`, `RESEARCH_FILE`, `QUICKSTART_FILE`, `KNOWLEDGE_ROOT`, and `KNOWLEDGE_DRAFTS`. All file paths must be absolute. + - If any of `SPEC_FILE`, `PLAN_FILE`, or `TASKS_FILE` are missing, STOP and instruct the developer to complete Stages 1-3 before leveling up. + - Before proceeding, analyze `TASKS_FILE` and confirm **all tasks are marked `[X]`** and no execution status indicates `[IN PROGRESS]`, `[BLOCKED]`, or other incomplete markers. If any tasks remain open or unchecked, STOP and instruct the developer to finish `/implement` first. + - If `KNOWLEDGE_ROOT` or `KNOWLEDGE_DRAFTS` are empty, STOP and direct the developer to rerun `/init --team-ai-directive ...` so the shared directives repository is cloned locally. + +2. Load the implementation artifacts: + - Read `SPEC_FILE` (feature intent and acceptance criteria). + - Read `PLAN_FILE` (execution strategy and triage decisions). + - Read `TASKS_FILE` (completed task log, including `[SYNC]/[ASYNC]` execution modes and `[X]` markers). 
+ - IF EXISTS: Read `RESEARCH_FILE` (supporting decisions) and `QUICKSTART_FILE` (validation scenarios). + - Synthesize the key decisions, risks addressed, and noteworthy implementation patterns from these artifacts. + +3. Draft the knowledge asset: + - Determine a slug such as `{BRANCH}-levelup` and create a new markdown file under `KNOWLEDGE_DRAFTS/{slug}.md`. + - Capture: + * **Summary** of the feature and why the learning matters. + * **Reusable rule or best practice** distilled from the implementation. + * **Evidence links** back to repository files/commits and the originating issue (if available from user input). + * **Adoption guidance** (when to apply, prerequisites, caveats). + - Ensure the asset is self-contained, written in clear, prescriptive language, and references the feature branch/issue ID where relevant. + - Generate a micro-review checklist capturing which `[SYNC]` tasks were inspected and the outcome of their micro-reviews; embed this in the asset for compliance with Stage 3 documentation. + +4. Prepare review materials for the team: + - Generate a draft pull request description targeting the `team-ai-directives` repository. Include: + * Purpose of the new asset. + * Summary of analysis performed. + * Checklist of validations (spec, plan, tasks reviewed). + - Generate a draft "Trace Summary" comment for the originating issue tracker entry (`$ARGUMENTS` may specify the issue ID). Summaries should: + * Highlight key decisions. + * Link to the new knowledge asset file (relative path within the directives repo). + * Mention any follow-up actions. + +5. Present results for human approval: + - Output the path of the generated knowledge asset and its full content. + - Provide the draft pull request description and issue comment text. + - Ask the developer to confirm whether to proceed with publishing (Y/N). If "N", stop after summarizing the manual next steps (create branch, commit, open PR, comment). + +6. 
On developer approval, execute the publishing workflow automatically (mirroring the Stage 4 process): + - Verify the working tree at `KNOWLEDGE_ROOT` is clean. If not, report the pending changes and abort. + - Create a new branch `levelup/{slug}` in `KNOWLEDGE_ROOT` (reuse if already created in this session). + - Move/copy the asset from `KNOWLEDGE_DRAFTS/{slug}.md` into a permanent path (e.g., `knowledge/{slug}.md`) inside `KNOWLEDGE_ROOT`. + - Run `git add` for the new/updated files and commit with a message like `Add level-up asset for {BRANCH}`. + - If the repository has a configured `origin` remote and the `gh` CLI is available: + * Push the branch: `git push -u origin levelup/{slug}` + * Create a pull request via `gh pr create` populated with the drafted description (fall back to printing the command if `gh` is unavailable). + - If an issue identifier was provided, attempt to post the trace comment via `gh issue comment`; otherwise, print the comment text for manual posting. + - Report the exact commands executed and surface any failures so the developer can intervene manually. + +7. Summarize final status, including: + - Knowledge asset path and commit SHA (if created). + - Pull request URL or instructions for manual creation. + - Issue tracker comment status or manual instructions. diff --git a/templates/commands/plan.md b/templates/commands/plan.md index 32522c232d..8dffe1178c 100644 --- a/templates/commands/plan.md +++ b/templates/commands/plan.md @@ -13,18 +13,25 @@ $ARGUMENTS Given the implementation details provided as an argument, do this: -1. Run `{SCRIPT}` from the repo root and parse JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, BRANCH. All future file paths must be absolute. - - BEFORE proceeding, inspect FEATURE_SPEC for a `## Clarifications` section with at least one `Session` subheading. 
If missing or clearly ambiguous areas remain (vague adjectives, unresolved critical choices), PAUSE and instruct the user to run `/clarify` first to reduce rework. Only continue if: (a) Clarifications exist OR (b) an explicit user override is provided (e.g., "proceed without clarification"). Do not attempt to fabricate clarifications yourself. -2. Read and analyze the feature specification to understand: +1. Run `{SCRIPT}` from the repo root and parse JSON for `FEATURE_SPEC`, `IMPL_PLAN`, `SPECS_DIR`, `BRANCH`, `HAS_GIT`, `CONSTITUTION`, `TEAM_DIRECTIVES`, and `CONTEXT_FILE`. All future file paths must be absolute. + - BEFORE proceeding, inspect `FEATURE_SPEC` for a `## Clarifications` section with at least one `Session` subheading. If missing or clearly ambiguous areas remain (vague adjectives, unresolved critical choices), PAUSE and instruct the user to run `/clarify` first to reduce rework. Only continue if: (a) Clarifications exist OR (b) an explicit user override is provided (e.g., "proceed without clarification"). Do not attempt to fabricate clarifications yourself. + - If `CONSTITUTION` is empty or the file does not exist, STOP and instruct the developer to run `/constitution` (Stage 0) before continuing—Stage 2 requires established principles. + - If `TEAM_DIRECTIVES` is missing, warn the developer that shared directives cannot be referenced; note any prompts pointing to `@team/...` modules and request guidance. +2. Load and validate the feature context at `CONTEXT_FILE`: + - STOP immediately if the file is missing. + - Scan for any remaining `[NEEDS INPUT]` markers; instruct the developer to populate them before proceeding. + - Summarize key insights (mission brief, relevant code, directives, gateway status) for later reference. + +3. 
Read and analyze the feature specification to understand: - The feature requirements and user stories - Functional and non-functional requirements - Success criteria and acceptance criteria - Any technical constraints or dependencies mentioned -3. Read the constitution at `/memory/constitution.md` to understand constitutional requirements. +4. Read the constitution using the absolute path from `CONSTITUTION` to understand non-negotiable requirements and gates. -4. Execute the implementation plan template: - - Load `/templates/plan-template.md` (already copied to IMPL_PLAN path) +5. Execute the implementation plan template: + - Load `/.specify/templates/plan-template.md` (already copied to IMPL_PLAN path) - Set Input path to FEATURE_SPEC - Run the Execution Flow (main) function steps 1-9 - The template is self-contained and executable @@ -33,14 +40,16 @@ Given the implementation details provided as an argument, do this: * Phase 0 generates research.md * Phase 1 generates data-model.md, contracts/, quickstart.md * Phase 2 generates tasks.md + - If `TEAM_DIRECTIVES` was available, resolve any referenced modules (e.g., `@team/context_modules/...`) and integrate their guidance. - Incorporate user-provided details from arguments into Technical Context: {ARGS} + - Populate the "Triage Overview" table with preliminary `[SYNC]`/`[ASYNC]` suggestions per major step, updating the rationale as you complete each phase. - Update Progress Tracking as you complete each phase -5. Verify execution completed: +6. Verify execution completed: - Check Progress Tracking shows all phases complete - Ensure all required artifacts were generated - Confirm no ERROR states in execution -6. Report results with branch name, file paths, and generated artifacts. +7. Report results with branch name, file paths, generated artifacts, and a reminder that `context.md` must remain up to date for `/tasks` and `/implement`. 
Use absolute paths with the repository root for all file operations to avoid path issues. diff --git a/templates/commands/specify.md b/templates/commands/specify.md index 652c86a279..558ccee2e8 100644 --- a/templates/commands/specify.md +++ b/templates/commands/specify.md @@ -15,10 +15,19 @@ The text the user typed after `/specify` in the triggering message **is** the fe Given that feature description, do this: -1. Run the script `{SCRIPT}` from repo root and parse its JSON output for BRANCH_NAME and SPEC_FILE. All file paths must be absolute. - **IMPORTANT** You must only ever run this script once. The JSON is provided in the terminal as output - always refer to it to get the actual content you're looking for. -2. Load `templates/spec-template.md` to understand required sections. -3. Write the specification to SPEC_FILE using the template structure, replacing placeholders with concrete details derived from the feature description (arguments) while preserving section order and headings. -4. Report completion with branch name, spec file path, and readiness for the next phase. +1. Run the script `{SCRIPT}` from repo root and parse its JSON output for `BRANCH_NAME`, `SPEC_FILE`, `FEATURE_NUM`, `HAS_GIT`, `CONSTITUTION`, and `TEAM_DIRECTIVES`. All file paths must be absolute. + - **IMPORTANT**: Run this script exactly once. Reuse the JSON it prints for all subsequent steps. + - If `CONSTITUTION` is empty or missing, STOP and instruct the developer to run `/constitution` before proceeding—Stage 1 cannot continue without the foundational principles. + - If `TEAM_DIRECTIVES` is missing, warn the developer that shared directives are unavailable and note that `@team/...` references will not resolve. +2. Load `templates/spec-template.md` to understand required sections. Review `templates/context-template.md` so you can highlight which fields the developer must fill before planning. +3. 
Extract the canonical issue identifier: + - Scan `$ARGUMENTS` (and any referenced content) for `@issue-tracker` tokens and capture the latest `{ISSUE-ID}` reference. + - If no issue ID is present, leave the placeholder as `[NEEDS CLARIFICATION: issue reference not provided]` and surface a warning to the developer. + - If an issue ID is found, replace the `**Issue Tracker**` line in the template with `**Issue Tracker**: @issue-tracker {ISSUE-ID}` (preserve additional context if multiple IDs are relevant). +4. Read the constitution at `CONSTITUTION` and treat its non-negotiable principles as guardrails when drafting the specification. +5. When the directive references artifacts like `@team/context_modules/...`, resolve them to files beneath `TEAM_DIRECTIVES`. Load each referenced module to ground the specification; if a referenced file is absent, pause and ask the developer for guidance before continuing. +6. Write the specification to `SPEC_FILE` using the template structure, replacing placeholders with concrete details derived from the feature description (arguments) while preserving section order and headings. Ensure the `**Issue Tracker**` line is populated as described above. +7. Seed `context.md` in the feature directory (already created by the script) with any information you can auto-fill (issue IDs, summary snippets) and clearly call out remaining `[NEEDS INPUT]` markers the developer must resolve before running `/plan`. +8. Report completion with branch name, spec file path, linked issue ID (if any), the absolute path to `context.md`, and readiness for the next phase. Note: The script creates and checks out the new branch and initializes the spec file before writing. diff --git a/templates/commands/tasks.md b/templates/commands/tasks.md index eb0ef2b60b..b4f9dc288c 100644 --- a/templates/commands/tasks.md +++ b/templates/commands/tasks.md @@ -12,19 +12,23 @@ User input: $ARGUMENTS 1. 
Run `{SCRIPT}` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. -2. Load and analyze available design documents: +2. Load and validate `context.md` from the feature directory: + - STOP if the file is missing or contains `[NEEDS INPUT]` markers. + - Capture mission highlights, relevant code paths, directives, and gateway status for downstream reasoning. +3. Load and analyze available design documents: - Always read plan.md for tech stack and libraries - IF EXISTS: Read data-model.md for entities - IF EXISTS: Read contracts/ for API endpoints - IF EXISTS: Read research.md for technical decisions - IF EXISTS: Read quickstart.md for test scenarios + - Capture the finalized `[SYNC]`/`[ASYNC]` assignments from the plan's **Triage Overview** and apply them to generated tasks. Note: Not all projects have all documents. For example: - CLI tools might not have contracts/ - Simple libraries might not need data-model.md - Generate tasks based on what's available -3. Generate tasks following the template: +4. Generate tasks following the template: - Use `/templates/tasks-template.md` as the base - Replace example tasks with actual tasks based on: * **Setup tasks**: Project init, dependencies, linting @@ -32,16 +36,18 @@ $ARGUMENTS * **Core tasks**: One per entity, service, CLI command, endpoint * **Integration tasks**: DB connections, middleware, logging * **Polish tasks [P]**: Unit tests, performance, docs + - For every task, append the Execution Mode tag `[SYNC]` or `[ASYNC]` as dictated by the plan. Never invent a mode—ask the developer when absent. -4. Task generation rules: +5. 
Task generation rules: - Each contract file → contract test task marked [P] - Each entity in data-model → model creation task marked [P] - Each endpoint → implementation task (not parallel if shared files) - Each user story → integration test marked [P] - Different files = can be parallel [P] - Same file = sequential (no [P]) + - Preserve the Execution Mode from the plan so downstream tooling can enforce SYNC vs ASYNC workflows. -5. Order tasks by dependencies: +6. Order tasks by dependencies: - Setup before everything - Tests before implementation (TDD) - Models before services @@ -49,11 +55,11 @@ $ARGUMENTS - Core before integration - Everything before polish -6. Include parallel execution examples: +7. Include parallel execution examples: - Group [P] tasks that can run together - Show actual Task agent commands -7. Create FEATURE_DIR/tasks.md with: +8. Create FEATURE_DIR/tasks.md with: - Correct feature name from implementation plan - Numbered tasks (T001, T002, etc.) - Clear file paths for each task diff --git a/templates/plan-template.md b/templates/plan-template.md index e812b41260..48615ed8a5 100644 --- a/templates/plan-template.md +++ b/templates/plan-template.md @@ -39,6 +39,13 @@ scripts: ## Summary [Extract from feature spec: primary requirement + technical approach from research] +## Triage Overview +*For each major plan step below, assign an execution mode and rationale. 
Execution mode must be either `[SYNC]` (close human supervision) or `[ASYNC]` (delegated to an autonomous loop).* + +| Step | Execution Mode | Rationale | +|------|----------------|-----------| +| e.g., Phase 1 – API design | [ASYNC] | Well-defined schemas, low architectural risk | + ## Technical Context **Language/Version**: [e.g., Python 3.11, Swift 5.9, Rust 1.75 or NEEDS CLARIFICATION] **Primary Dependencies**: [e.g., FastAPI, UIKit, LLVM or NEEDS CLARIFICATION] @@ -171,6 +178,7 @@ ios/ or android/ - Each entity → model creation task [P] - Each user story → integration test task - Implementation tasks to make tests pass +- Carry forward the Execution Mode from the Triage Overview: annotate each generated task with `[SYNC]` or `[ASYNC]` based on the finalized plan decision. **Ordering Strategy**: - TDD order: Tests before implementation diff --git a/templates/tasks-template.md b/templates/tasks-template.md index b8a28fafd5..0fc4558c19 100644 --- a/templates/tasks-template.md +++ b/templates/tasks-template.md @@ -32,7 +32,8 @@ 9. Return: SUCCESS (tasks ready for execution) ``` -## Format: `[ID] [P?] Description` +## Format: `[ID] [MODE] [P?] 
Description` +- **[MODE]**: Execution state from the plan (`[SYNC]` or `[ASYNC]`) - **[P]**: Can run in parallel (different files, no dependencies) - Include exact file paths in descriptions @@ -43,38 +44,38 @@ - Paths shown below assume single project - adjust based on plan.md structure ## Phase 3.1: Setup -- [ ] T001 Create project structure per implementation plan -- [ ] T002 Initialize [language] project with [framework] dependencies -- [ ] T003 [P] Configure linting and formatting tools +- [ ] T001 [SYNC] Create project structure per implementation plan +- [ ] T002 [SYNC] Initialize [language] project with [framework] dependencies +- [ ] T003 [ASYNC] [P] Configure linting and formatting tools ## Phase 3.2: Tests First (TDD) ⚠️ MUST COMPLETE BEFORE 3.3 **CRITICAL: These tests MUST be written and MUST FAIL before ANY implementation** -- [ ] T004 [P] Contract test POST /api/users in tests/contract/test_users_post.py -- [ ] T005 [P] Contract test GET /api/users/{id} in tests/contract/test_users_get.py -- [ ] T006 [P] Integration test user registration in tests/integration/test_registration.py -- [ ] T007 [P] Integration test auth flow in tests/integration/test_auth.py +- [ ] T004 [SYNC] [P] Contract test POST /api/users in tests/contract/test_users_post.py +- [ ] T005 [SYNC] [P] Contract test GET /api/users/{id} in tests/contract/test_users_get.py +- [ ] T006 [SYNC] [P] Integration test user registration in tests/integration/test_registration.py +- [ ] T007 [SYNC] [P] Integration test auth flow in tests/integration/test_auth.py ## Phase 3.3: Core Implementation (ONLY after tests are failing) -- [ ] T008 [P] User model in src/models/user.py -- [ ] T009 [P] UserService CRUD in src/services/user_service.py -- [ ] T010 [P] CLI --create-user in src/cli/user_commands.py -- [ ] T011 POST /api/users endpoint -- [ ] T012 GET /api/users/{id} endpoint -- [ ] T013 Input validation -- [ ] T014 Error handling and logging +- [ ] T008 [ASYNC] [P] User model in src/models/user.py 
+- [ ] T009 [ASYNC] [P] UserService CRUD in src/services/user_service.py +- [ ] T010 [ASYNC] [P] CLI --create-user in src/cli/user_commands.py +- [ ] T011 [SYNC] POST /api/users endpoint +- [ ] T012 [SYNC] GET /api/users/{id} endpoint +- [ ] T013 [SYNC] Input validation +- [ ] T014 [SYNC] Error handling and logging ## Phase 3.4: Integration -- [ ] T015 Connect UserService to DB -- [ ] T016 Auth middleware -- [ ] T017 Request/response logging -- [ ] T018 CORS and security headers +- [ ] T015 [ASYNC] Connect UserService to DB +- [ ] T016 [SYNC] Auth middleware +- [ ] T017 [ASYNC] Request/response logging +- [ ] T018 [SYNC] CORS and security headers ## Phase 3.5: Polish -- [ ] T019 [P] Unit tests for validation in tests/unit/test_validation.py -- [ ] T020 Performance tests (<200ms) -- [ ] T021 [P] Update docs/api.md -- [ ] T022 Remove duplication -- [ ] T023 Run manual-testing.md +- [ ] T019 [ASYNC] [P] Unit tests for validation in tests/unit/test_validation.py +- [ ] T020 [SYNC] Performance tests (<200ms) +- [ ] T021 [ASYNC] [P] Update docs/api.md +- [ ] T022 [ASYNC] Remove duplication +- [ ] T023 [SYNC] Run manual-testing.md ## Dependencies - Tests (T004-T007) before implementation (T008-T014) @@ -93,6 +94,7 @@ Task: "Integration test auth in tests/integration/test_auth.py" ## Notes - [P] tasks = different files, no dependencies +- `[SYNC]` tasks require hands-on micro-review and pairing; `[ASYNC]` tasks can be delegated but still require macro-review before commit - Verify tests fail before implementing - Commit after each task - Avoid: vague tasks, same file conflicts diff --git a/tests/test_create_new_feature.py b/tests/test_create_new_feature.py new file mode 100644 index 0000000000..980cffe259 --- /dev/null +++ b/tests/test_create_new_feature.py @@ -0,0 +1,51 @@ +import json +import shutil +import subprocess +from pathlib import Path + + +def test_create_new_feature_outputs_context_paths(tmp_path): + repo_root = tmp_path / "repo" + script_dir = repo_root / 
"scripts" / "bash" + template_dir = repo_root / ".specify" / "templates" + memory_dir = repo_root / ".specify" / "memory" + + script_dir.mkdir(parents=True) + template_dir.mkdir(parents=True) + memory_dir.mkdir(parents=True) + + project_root = Path(__file__).resolve().parent.parent + top_level_root = project_root.parent + + shutil.copy(project_root / "scripts" / "bash" / "create-new-feature.sh", script_dir / "create-new-feature.sh") + shutil.copy(top_level_root / ".specify" / "templates" / "spec-template.md", template_dir / "spec-template.md") + + constitution_path = memory_dir / "constitution.md" + constitution_path.write_text("Principles") + + team_root = memory_dir / "team-ai-directives" + (team_root / "context_modules").mkdir(parents=True) + (team_root / "context_modules" / "principles.md").write_text("Rule") + + script_path = script_dir / "create-new-feature.sh" + + result = subprocess.run( + ["bash", str(script_path), "--json", "Add stage one feature"], + cwd=repo_root, + check=True, + capture_output=True, + text=True, + ) + + data = json.loads(result.stdout.strip()) + + spec_file = Path(data["SPEC_FILE"]) + assert spec_file.exists() + context_file = spec_file.parent / "context.md" + assert context_file.exists() + context_text = context_file.read_text() + assert "[NEEDS INPUT]" in context_text + assert "Feature Context" in context_text + assert data["CONSTITUTION"] == str(constitution_path) + assert data["TEAM_DIRECTIVES"] == str(team_root) + assert data["HAS_GIT"] == "false" diff --git a/tests/test_prepare_levelup.py b/tests/test_prepare_levelup.py new file mode 100644 index 0000000000..3cde26fd58 --- /dev/null +++ b/tests/test_prepare_levelup.py @@ -0,0 +1,57 @@ +import json +import shutil +import subprocess +from pathlib import Path + + +def test_prepare_levelup_outputs_context(tmp_path, monkeypatch): + repo_root = tmp_path / "repo" + (repo_root / "scripts" / "bash").mkdir(parents=True) + (repo_root / ".specify" / "templates").mkdir(parents=True) + 
(repo_root / ".specify" / "memory").mkdir(parents=True) + + project_root = Path(__file__).resolve().parent.parent + top_level_root = project_root.parent + + # Copy scripts + shutil.copy(project_root / "scripts" / "bash" / "prepare-levelup.sh", repo_root / "scripts" / "bash" / "prepare-levelup.sh") + shutil.copy(project_root / "scripts" / "bash" / "common.sh", repo_root / "scripts" / "bash" / "common.sh") + + # Constitution and directives + memory_root = repo_root / ".specify" / "memory" + (memory_root / "team-ai-directives").mkdir(parents=True) + (memory_root / "team-ai-directives" / "context_modules").mkdir(parents=True) + (memory_root / "team-ai-directives" / "drafts").mkdir(parents=True) + (memory_root / "constitution.md").write_text("Principles") + + # Feature structure + feature_dir = repo_root / "specs" / "001-levelup-test" + feature_dir.mkdir(parents=True) + (feature_dir / "spec.md").write_text("# Spec") + (feature_dir / "plan.md").write_text("# Plan") + (feature_dir / "tasks.md").write_text("# Tasks") + (feature_dir / "research.md").write_text("# Research") + (feature_dir / "quickstart.md").write_text("# Quickstart") + (feature_dir / "context.md").write_text("# Feature Context\n- filled") + + monkeypatch.setenv("SPECIFY_FEATURE", "001-levelup-test") + + subprocess.run(["git", "init"], cwd=repo_root, check=True, capture_output=True, text=True) + + result = subprocess.run( + ["bash", str(repo_root / "scripts" / "bash" / "prepare-levelup.sh"), "--json"], + cwd=repo_root, + check=True, + capture_output=True, + text=True, + ) + + json_line = next((line for line in result.stdout.splitlines()[::-1] if line.strip().startswith("{")), "") + assert json_line, "Expected JSON output" + data = json.loads(json_line) + + assert data["SPEC_FILE"].endswith("specs/001-levelup-test/spec.md") + assert data["PLAN_FILE"].endswith("specs/001-levelup-test/plan.md") + assert data["TASKS_FILE"].endswith("specs/001-levelup-test/tasks.md") + assert 
data["KNOWLEDGE_ROOT"].endswith(".specify/memory/team-ai-directives") + assert data["KNOWLEDGE_DRAFTS"].endswith(".specify/memory/team-ai-directives/drafts") diff --git a/tests/test_setup_plan.py b/tests/test_setup_plan.py new file mode 100644 index 0000000000..9050f8a208 --- /dev/null +++ b/tests/test_setup_plan.py @@ -0,0 +1,61 @@ +import json +import shutil +import subprocess +from pathlib import Path + + +def test_setup_plan_outputs_context_paths(tmp_path, monkeypatch): + repo_root = tmp_path / "repo" + script_dir = repo_root / "scripts" / "bash" + script_dir.mkdir(parents=True) + + specify_dir = repo_root / ".specify" + templates_dir = specify_dir / "templates" + memory_dir = specify_dir / "memory" + templates_dir.mkdir(parents=True) + memory_dir.mkdir(parents=True) + + # Copy required scripts and template + project_root = Path(__file__).resolve().parent.parent + shutil.copy(project_root / "scripts" / "bash" / "setup-plan.sh", script_dir / "setup-plan.sh") + shutil.copy(project_root / "scripts" / "bash" / "common.sh", script_dir / "common.sh") + shutil.copy(Path(__file__).resolve().parent.parent.parent / ".specify" / "templates" / "plan-template.md", templates_dir / "plan-template.md") + + # Seed constitution and team directives + constitution = memory_dir / "constitution.md" + constitution.write_text("Principles") + team_directives = memory_dir / "team-ai-directives" + (team_directives / "context_modules").mkdir(parents=True) + + # Seed feature spec directory + feature_dir = repo_root / "specs" / "001-test-feature" + feature_dir.mkdir(parents=True) + (feature_dir / "spec.md").write_text("# Spec") + (feature_dir / "context.md").write_text("""# Feature Context\n- filled""") + + # Prefer SPECIFY_FEATURE to avoid git dependency + monkeypatch.setenv("SPECIFY_FEATURE", "001-test-feature") + + subprocess.run( + ["git", "init"], + cwd=repo_root, + check=True, + capture_output=True, + text=True, + ) + + result = subprocess.run( + ["bash", str(script_dir / 
"setup-plan.sh"), "--json"], + cwd=repo_root, + check=True, + capture_output=True, + text=True, + ) + + json_line = next((line for line in result.stdout.splitlines()[::-1] if line.strip().startswith("{")), "") + assert json_line, "Expected JSON output from setup-plan script" + data = json.loads(json_line) + assert data["FEATURE_SPEC"].endswith("specs/001-test-feature/spec.md") + assert data["CONSTITUTION"] == str(constitution) + assert data["TEAM_DIRECTIVES"] == str(team_directives) + assert data["CONTEXT_FILE"] == str(feature_dir / "context.md") diff --git a/tests/test_team_directives.py b/tests/test_team_directives.py new file mode 100644 index 0000000000..aa52fc8818 --- /dev/null +++ b/tests/test_team_directives.py @@ -0,0 +1,83 @@ +import subprocess + +import pytest + +from specify_cli import sync_team_ai_directives, TEAM_DIRECTIVES_DIRNAME + + +def _completed(stdout: str = "", stderr: str = ""): + return subprocess.CompletedProcess(args=[], returncode=0, stdout=stdout, stderr=stderr) + + +def test_sync_clones_when_missing(tmp_path, monkeypatch): + calls = [] + + def fake_run(cmd, check, capture_output, text, env=None): + calls.append((cmd, env)) + return _completed() + + monkeypatch.setattr(subprocess, "run", fake_run) + + status = sync_team_ai_directives("https://example.com/repo.git", tmp_path, skip_tls=True) + + assert status == "cloned" + memory_root = tmp_path / ".specify" / "memory" + assert memory_root.exists() + assert calls[0][0][:2] == ["git", "clone"] + assert calls[0][0][2] == "https://example.com/repo.git" + assert calls[0][1]["GIT_SSL_NO_VERIFY"] == "1" + + +def test_sync_updates_existing_repo(tmp_path, monkeypatch): + destination = tmp_path / ".specify" / "memory" / TEAM_DIRECTIVES_DIRNAME + (destination / ".git").mkdir(parents=True) + + commands = [] + + def fake_run(cmd, check, capture_output, text, env=None): + commands.append(cmd) + if "config" in cmd: + return _completed("https://example.com/repo.git\n") + return _completed() + + 
monkeypatch.setattr(subprocess, "run", fake_run) + + status = sync_team_ai_directives("https://example.com/repo.git", tmp_path) + + assert status == "updated" + assert any(item[3] == "pull" for item in commands if len(item) > 3) + assert commands[0][:4] == ["git", "-C", str(destination), "rev-parse"] + + +def test_sync_resets_remote_when_url_changes(tmp_path, monkeypatch): + destination = tmp_path / ".specify" / "memory" / TEAM_DIRECTIVES_DIRNAME + (destination / ".git").mkdir(parents=True) + + commands = [] + + def fake_run(cmd, check, capture_output, text, env=None): + commands.append(cmd) + if "config" in cmd: + return _completed("https://old.example.com/repo.git\n") + return _completed() + + monkeypatch.setattr(subprocess, "run", fake_run) + + sync_team_ai_directives("https://new.example.com/repo.git", tmp_path) + + assert any( + item[:5] == ["git", "-C", str(destination), "remote", "set-url"] + for item in commands + ) + + +def test_sync_raises_on_git_failure(tmp_path, monkeypatch): + def fake_run(cmd, check, capture_output, text, env=None): + raise subprocess.CalledProcessError(returncode=1, cmd=cmd, stderr="fatal: error") + + monkeypatch.setattr(subprocess, "run", fake_run) + + with pytest.raises(RuntimeError) as exc: + sync_team_ai_directives("https://example.com/repo.git", tmp_path) + + assert "fatal: error" in str(exc.value) From b30e3a8584c7d312d26a44704b9595bdaa27e512 Mon Sep 17 00:00:00 2001 From: Lior Kanfi Date: Wed, 1 Oct 2025 17:14:56 +0300 Subject: [PATCH 02/95] update download template repo --- src/specify_cli/__init__.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/specify_cli/__init__.py b/src/specify_cli/__init__.py index ac87e3eea8..2695e364c2 100644 --- a/src/specify_cli/__init__.py +++ b/src/specify_cli/__init__.py @@ -491,8 +491,8 @@ def init_git_repo(project_path: Path, quiet: bool = False) -> bool: def download_template_from_github(ai_assistant: str, download_dir: Path, *, script_type: str = "sh", verbose: 
bool = True, show_progress: bool = True, client: httpx.Client = None, debug: bool = False, github_token: str = None) -> Tuple[Path, dict]: - repo_owner = "github" - repo_name = "spec-kit" + repo_owner = "tikalk" + repo_name = "agentic-sdlc-spec-kit" if client is None: client = httpx.Client(verify=ssl_context) From 76559133c4ae9c0325ebe646e762ff0c6f3e4cde Mon Sep 17 00:00:00 2001 From: Lior Kanfi Date: Wed, 1 Oct 2025 17:24:29 +0300 Subject: [PATCH 03/95] add agentic-sdlc prefix --- .../scripts/create-github-release.sh | 46 +++++++++---------- .../scripts/create-release-packages.sh | 6 +-- AGENTS.md | 4 +- docs/docfx.json | 8 ++-- src/specify_cli/__init__.py | 2 +- 5 files changed, 33 insertions(+), 33 deletions(-) diff --git a/.github/workflows/scripts/create-github-release.sh b/.github/workflows/scripts/create-github-release.sh index 0257520f57..d4c6d1a65b 100644 --- a/.github/workflows/scripts/create-github-release.sh +++ b/.github/workflows/scripts/create-github-release.sh @@ -16,27 +16,27 @@ VERSION="$1" VERSION_NO_V=${VERSION#v} gh release create "$VERSION" \ - .genreleases/spec-kit-template-copilot-sh-"$VERSION".zip \ - .genreleases/spec-kit-template-copilot-ps-"$VERSION".zip \ - .genreleases/spec-kit-template-claude-sh-"$VERSION".zip \ - .genreleases/spec-kit-template-claude-ps-"$VERSION".zip \ - .genreleases/spec-kit-template-gemini-sh-"$VERSION".zip \ - .genreleases/spec-kit-template-gemini-ps-"$VERSION".zip \ - .genreleases/spec-kit-template-cursor-sh-"$VERSION".zip \ - .genreleases/spec-kit-template-cursor-ps-"$VERSION".zip \ - .genreleases/spec-kit-template-opencode-sh-"$VERSION".zip \ - .genreleases/spec-kit-template-opencode-ps-"$VERSION".zip \ - .genreleases/spec-kit-template-qwen-sh-"$VERSION".zip \ - .genreleases/spec-kit-template-qwen-ps-"$VERSION".zip \ - .genreleases/spec-kit-template-windsurf-sh-"$VERSION".zip \ - .genreleases/spec-kit-template-windsurf-ps-"$VERSION".zip \ - .genreleases/spec-kit-template-codex-sh-"$VERSION".zip \ - 
.genreleases/spec-kit-template-codex-ps-"$VERSION".zip \ - .genreleases/spec-kit-template-kilocode-sh-"$VERSION".zip \ - .genreleases/spec-kit-template-kilocode-ps-"$VERSION".zip \ - .genreleases/spec-kit-template-auggie-sh-"$VERSION".zip \ - .genreleases/spec-kit-template-auggie-ps-"$VERSION".zip \ - .genreleases/spec-kit-template-roo-sh-"$VERSION".zip \ - .genreleases/spec-kit-template-roo-ps-"$VERSION".zip \ - --title "Spec Kit Templates - $VERSION_NO_V" \ + .genreleases/agentic-sdlc-spec-kit-template-copilot-sh-"$VERSION".zip \ + .genreleases/agentic-sdlc-spec-kit-template-copilot-ps-"$VERSION".zip \ + .genreleases/agentic-sdlc-spec-kit-template-claude-sh-"$VERSION".zip \ + .genreleases/agentic-sdlc-spec-kit-template-claude-ps-"$VERSION".zip \ + .genreleases/agentic-sdlc-spec-kit-template-gemini-sh-"$VERSION".zip \ + .genreleases/agentic-sdlc-spec-kit-template-gemini-ps-"$VERSION".zip \ + .genreleases/agentic-sdlc-spec-kit-template-cursor-sh-"$VERSION".zip \ + .genreleases/agentic-sdlc-spec-kit-template-cursor-ps-"$VERSION".zip \ + .genreleases/agentic-sdlc-spec-kit-template-opencode-sh-"$VERSION".zip \ + .genreleases/agentic-sdlc-spec-kit-template-opencode-ps-"$VERSION".zip \ + .genreleases/agentic-sdlc-spec-kit-template-qwen-sh-"$VERSION".zip \ + .genreleases/agentic-sdlc-spec-kit-template-qwen-ps-"$VERSION".zip \ + .genreleases/agentic-sdlc-spec-kit-template-windsurf-sh-"$VERSION".zip \ + .genreleases/agentic-sdlc-spec-kit-template-windsurf-ps-"$VERSION".zip \ + .genreleases/agentic-sdlc-spec-kit-template-codex-sh-"$VERSION".zip \ + .genreleases/agentic-sdlc-spec-kit-template-codex-ps-"$VERSION".zip \ + .genreleases/agentic-sdlc-spec-kit-template-kilocode-sh-"$VERSION".zip \ + .genreleases/agentic-sdlc-spec-kit-template-kilocode-ps-"$VERSION".zip \ + .genreleases/agentic-sdlc-spec-kit-template-auggie-sh-"$VERSION".zip \ + .genreleases/agentic-sdlc-spec-kit-template-auggie-ps-"$VERSION".zip \ + .genreleases/agentic-sdlc-spec-kit-template-roo-sh-"$VERSION".zip 
\ + .genreleases/agentic-sdlc-spec-kit-template-roo-ps-"$VERSION".zip \ + --title "Agentic SDLC Spec Kit Templates - $VERSION_NO_V" \ --notes-file release_notes.md \ No newline at end of file diff --git a/.github/workflows/scripts/create-release-packages.sh b/.github/workflows/scripts/create-release-packages.sh index 1a12e55823..f30014e673 100755 --- a/.github/workflows/scripts/create-release-packages.sh +++ b/.github/workflows/scripts/create-release-packages.sh @@ -173,8 +173,8 @@ build_variant() { mkdir -p "$base_dir/.roo/commands" generate_commands roo md "\$ARGUMENTS" "$base_dir/.roo/commands" "$script" ;; esac - ( cd "$base_dir" && zip -r "../spec-kit-template-${agent}-${script}-${NEW_VERSION}.zip" . ) - echo "Created $GENRELEASES_DIR/spec-kit-template-${agent}-${script}-${NEW_VERSION}.zip" + ( cd "$base_dir" && zip -r "../agentic-sdlc-spec-kit-template-${agent}-${script}-${NEW_VERSION}.zip" . ) + echo "Created $GENRELEASES_DIR/agentic-sdlc-spec-kit-template-${agent}-${script}-${NEW_VERSION}.zip" } # Determine agent list @@ -225,4 +225,4 @@ for agent in "${AGENT_LIST[@]}"; do done echo "Archives in $GENRELEASES_DIR:" -ls -1 "$GENRELEASES_DIR"/spec-kit-template-*-"${NEW_VERSION}".zip +ls -1 "$GENRELEASES_DIR"/agentic-sdlc-spec-kit-template-*-"${NEW_VERSION}".zip diff --git a/AGENTS.md b/AGENTS.md index 59b9956681..e31a58971c 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -119,8 +119,8 @@ Modify `.github/workflows/scripts/create-github-release.sh` to include the new a ```bash gh release create "$VERSION" \ # ... existing packages ... 
- .genreleases/spec-kit-template-windsurf-sh-"$VERSION".zip \ - .genreleases/spec-kit-template-windsurf-ps-"$VERSION".zip \ + .genreleases/agentic-sdlc-spec-kit-template-windsurf-sh-"$VERSION".zip \ + .genreleases/agentic-sdlc-spec-kit-template-windsurf-ps-"$VERSION".zip \ # Add new agent packages here ``` diff --git a/docs/docfx.json b/docs/docfx.json index c59dedbe2e..2f4b8287be 100644 --- a/docs/docfx.json +++ b/docs/docfx.json @@ -56,13 +56,13 @@ "cleanupCacheHistory": false, "disableGitFeatures": false, "globalMetadata": { - "_appTitle": "Spec Kit Documentation", - "_appName": "Spec Kit", - "_appFooter": "Spec Kit - A specification-driven development toolkit", + "_appTitle": "Agentic SDLC Spec Kit Documentation", + "_appName": "Agentic SDLC Spec Kit", + "_appFooter": "Agentic SDLC Spec Kit - A specification-driven development toolkit", "_enableSearch": true, "_disableContribution": false, "_gitContribute": { - "repo": "https://github.com/github/spec-kit", + "repo": "https://github.com/tikalk/agentic-sdlc-spec-kit", "branch": "main" } } diff --git a/src/specify_cli/__init__.py b/src/specify_cli/__init__.py index 2695e364c2..c53bf219ca 100644 --- a/src/specify_cli/__init__.py +++ b/src/specify_cli/__init__.py @@ -524,7 +524,7 @@ def download_template_from_github(ai_assistant: str, download_dir: Path, *, scri # Find the template asset for the specified AI assistant assets = release_data.get("assets", []) - pattern = f"spec-kit-template-{ai_assistant}-{script_type}" + pattern = f"agentic-sdlc-spec-kit-template-{ai_assistant}-{script_type}" matching_assets = [ asset for asset in assets if pattern in asset["name"] and asset["name"].endswith(".zip") From b90ed1659e308ae2e13001f3225a648a01c96943 Mon Sep 17 00:00:00 2001 From: Lior Kanfi Date: Fri, 3 Oct 2025 14:18:37 +0300 Subject: [PATCH 04/95] support gateway_url --- scripts/bash/common.sh | 30 +++++++++++++++++++ src/specify_cli/__init__.py | 59 +++++++++++++++++++++++++++++++++++++ 2 files changed, 89 
insertions(+) diff --git a/scripts/bash/common.sh b/scripts/bash/common.sh index aebebbc54d..1bfb8bdb9a 100644 --- a/scripts/bash/common.sh +++ b/scripts/bash/common.sh @@ -1,6 +1,35 @@ #!/usr/bin/env bash # Common functions and variables for all scripts +# Load gateway configuration and export helper environment variables +load_gateway_config() { + local repo_root="$1" + local config_dir="$repo_root/.specify/config" + local env_file="$config_dir/gateway.env" + + if [[ -f "$env_file" ]]; then + # shellcheck disable=SC1090 + source "$env_file" + fi + + if [[ -n "${SPECIFY_GATEWAY_URL:-}" ]]; then + export SPECIFY_GATEWAY_URL + export SPECIFY_GATEWAY_ACTIVE="true" + [[ -z "${ANTHROPIC_BASE_URL:-}" ]] && export ANTHROPIC_BASE_URL="$SPECIFY_GATEWAY_URL" + [[ -z "${GEMINI_BASE_URL:-}" ]] && export GEMINI_BASE_URL="$SPECIFY_GATEWAY_URL" + [[ -z "${OPENAI_BASE_URL:-}" ]] && export OPENAI_BASE_URL="$SPECIFY_GATEWAY_URL" + else + export SPECIFY_GATEWAY_ACTIVE="false" + if [[ -z "${SPECIFY_SUPPRESS_GATEWAY_WARNING:-}" ]]; then + echo "[specify] Warning: Gateway URL not configured. Set SPECIFY_GATEWAY_URL in .specify/config/gateway.env." 
>&2 + fi + fi + + if [[ -n "${SPECIFY_GATEWAY_TOKEN:-}" ]]; then + export SPECIFY_GATEWAY_TOKEN + fi +} + # Get repository root, with fallback for non-git repositories get_repo_root() { if git rev-parse --show-toplevel >/dev/null 2>&1; then @@ -85,6 +114,7 @@ get_feature_dir() { echo "$1/specs/$2"; } get_feature_paths() { local repo_root=$(get_repo_root) + load_gateway_config "$repo_root" local current_branch=$(get_current_branch) local has_git_repo="false" diff --git a/src/specify_cli/__init__.py b/src/specify_cli/__init__.py index c53bf219ca..4c03ed5ec6 100644 --- a/src/specify_cli/__init__.py +++ b/src/specify_cli/__init__.py @@ -759,6 +759,53 @@ def download_and_extract_template(project_path: Path, ai_assistant: str, script_ return project_path +def ensure_gateway_config( + project_path: Path, + selected_ai: str, + *, + tracker: StepTracker | None = None, + gateway_url: str | None = None, + gateway_token: str | None = None, + suppress_warning: bool | None = None, +) -> None: + """Create gateway config template and optionally auto-write base URL env guidance.""" + config_dir = project_path / ".specify" / "config" + config_dir.mkdir(parents=True, exist_ok=True) + env_path = config_dir / "gateway.env" + + was_existing = env_path.exists() + + if was_existing and not any([gateway_url, gateway_token, suppress_warning]): + return + + lines = [ + "# Central LLM gateway configuration", + "# Populate SPECIFY_GATEWAY_URL with your proxy endpoint.", + "# Populate SPECIFY_GATEWAY_TOKEN if authentication is required.", + f"SPECIFY_GATEWAY_URL={gateway_url or ''}", + f"SPECIFY_GATEWAY_TOKEN={gateway_token or ''}", + "# Set SPECIFY_SUPPRESS_GATEWAY_WARNING=true to silence CLI warnings.", + "SPECIFY_SUPPRESS_GATEWAY_WARNING=" + ("true" if suppress_warning else ""), + ] + + # Assistant guidance comments + assistant_comments = { + "claude": "# Claude Code uses ANTHROPIC_BASE_URL; the CLI will export it automatically when SPECIFY_GATEWAY_URL is set.", + "gemini": "# Gemini CLI 
uses GEMINI_BASE_URL; the CLI will export it automatically when SPECIFY_GATEWAY_URL is set.", + "qwen": "# Qwen/Codex style CLIs use OPENAI_BASE_URL; the CLI will export it automatically when SPECIFY_GATEWAY_URL is set.", + "opencode": "# OpenCode can reference the gateway via {env:SPECIFY_GATEWAY_URL} in opencode.json.", + } + + if selected_ai in assistant_comments: + lines.append(assistant_comments[selected_ai]) + + env_path.write_text("\n".join(lines) + "\n") + + if tracker: + tracker.add("gateway", "Create gateway configuration") + tracker.complete("gateway", "updated" if was_existing else "scaffolded template") + + def ensure_executable_scripts(project_path: Path, tracker: StepTracker | None = None) -> None: """Ensure POSIX .sh scripts under .specify/scripts (recursively) have execute bits (no-op on Windows).""" if os.name == "nt": @@ -816,6 +863,9 @@ def init( debug: bool = typer.Option(False, "--debug", help="Show verbose diagnostic output for network and extraction failures"), github_token: str = typer.Option(None, "--github-token", help="GitHub token to use for API requests (or set GH_TOKEN or GITHUB_TOKEN environment variable)"), team_ai_directives: str = typer.Option(None, "--team-ai-directive", "--team-ai-directives", help="Clone or update a team-ai-directives repository into .specify/memory"), + gateway_url: str = typer.Option(None, "--gateway-url", help="Optional central LLM gateway base URL (populates .specify/config/gateway.env)"), + gateway_token: str = typer.Option(None, "--gateway-token", help="Optional token used when calling the central LLM gateway"), + gateway_suppress_warning: bool = typer.Option(False, "--gateway-suppress-warning", help="Suppress gateway missing warnings for this project"), ): """ Initialize a new Specify project from the latest template. 
@@ -848,6 +898,7 @@ def init( specify init --here specify init --here --force # Skip confirmation when current directory not empty specify init my-project --team-ai-directive https://github.com/my-org/team-ai-directives.git + specify init my-project --gateway-url https://llm-proxy.internal --gateway-token $TOKEN """ # Show banner first show_banner() @@ -1056,6 +1107,14 @@ def init( # Ensure scripts are executable (POSIX) ensure_executable_scripts(project_path, tracker=tracker) + ensure_gateway_config( + project_path, + selected_ai, + tracker=tracker, + gateway_url=gateway_url, + gateway_token=gateway_token, + suppress_warning=gateway_suppress_warning, + ) if team_ai_directives and team_ai_directives.strip(): tracker.start("directives", "syncing") From 03e49a73c275feb251491b46ecc2177ba94dca7d Mon Sep 17 00:00:00 2001 From: Lior Kanfi Date: Fri, 3 Oct 2025 17:16:09 +0300 Subject: [PATCH 05/95] add support to local team-ai-directive --- scripts/bash/common.sh | 29 +++++++++++++++++++++++++ scripts/bash/create-new-feature.sh | 14 +++++++++++- scripts/bash/setup-plan.sh | 11 +++++++++- src/specify_cli/__init__.py | 34 ++++++++++++++++++++++++------ tests/test_team_directives.py | 16 ++++++++++++-- 5 files changed, 93 insertions(+), 11 deletions(-) diff --git a/scripts/bash/common.sh b/scripts/bash/common.sh index 1bfb8bdb9a..16d74e8da0 100644 --- a/scripts/bash/common.sh +++ b/scripts/bash/common.sh @@ -1,6 +1,9 @@ #!/usr/bin/env bash # Common functions and variables for all scripts +# Shared constants +TEAM_DIRECTIVES_DIRNAME="team-ai-directives" + # Load gateway configuration and export helper environment variables load_gateway_config() { local repo_root="$1" @@ -30,6 +33,30 @@ load_gateway_config() { fi } +load_team_directives_config() { + local repo_root="$1" + if [[ -n "${SPECIFY_TEAM_DIRECTIVES:-}" ]]; then + return + fi + + local config_file="$repo_root/.specify/config/team_directives.path" + if [[ -f "$config_file" ]]; then + local path + path=$(cat 
"$config_file" 2>/dev/null) + if [[ -n "$path" && -d "$path" ]]; then + export SPECIFY_TEAM_DIRECTIVES="$path" + return + else + echo "[specify] Warning: team directives path '$path' from $config_file is unavailable." >&2 + fi + fi + + local default_dir="$repo_root/.specify/memory/$TEAM_DIRECTIVES_DIRNAME" + if [[ -d "$default_dir" ]]; then + export SPECIFY_TEAM_DIRECTIVES="$default_dir" + fi +} + # Get repository root, with fallback for non-git repositories get_repo_root() { if git rev-parse --show-toplevel >/dev/null 2>&1; then @@ -115,6 +142,7 @@ get_feature_dir() { echo "$1/specs/$2"; } get_feature_paths() { local repo_root=$(get_repo_root) load_gateway_config "$repo_root" + load_team_directives_config "$repo_root" local current_branch=$(get_current_branch) local has_git_repo="false" @@ -137,6 +165,7 @@ DATA_MODEL='$feature_dir/data-model.md' QUICKSTART='$feature_dir/quickstart.md' CONTEXT='$feature_dir/context.md' CONTRACTS_DIR='$feature_dir/contracts' +TEAM_DIRECTIVES='${SPECIFY_TEAM_DIRECTIVES:-}' EOF } diff --git a/scripts/bash/create-new-feature.sh b/scripts/bash/create-new-feature.sh index 5130c9fa11..1bfb9d4e16 100755 --- a/scripts/bash/create-new-feature.sh +++ b/scripts/bash/create-new-feature.sh @@ -62,7 +62,19 @@ else CONSTITUTION_FILE="" fi -TEAM_DIRECTIVES_DIR="$REPO_ROOT/.specify/memory/$TEAM_DIRECTIVES_DIRNAME" +if [ -z "$SPECIFY_TEAM_DIRECTIVES" ]; then + CONFIG_TEAM_FILE="$REPO_ROOT/.specify/config/team_directives.path" + if [ -f "$CONFIG_TEAM_FILE" ]; then + CONFIG_TEAM_PATH=$(cat "$CONFIG_TEAM_FILE") + if [ -d "$CONFIG_TEAM_PATH" ]; then + export SPECIFY_TEAM_DIRECTIVES="$CONFIG_TEAM_PATH" + else + >&2 echo "[specify] Warning: team directives path '$CONFIG_TEAM_PATH' from $CONFIG_TEAM_FILE is missing." 
+ fi + fi +fi + +TEAM_DIRECTIVES_DIR="${SPECIFY_TEAM_DIRECTIVES:-$REPO_ROOT/.specify/memory/$TEAM_DIRECTIVES_DIRNAME}" if [ -d "$TEAM_DIRECTIVES_DIR" ]; then export SPECIFY_TEAM_DIRECTIVES="$TEAM_DIRECTIVES_DIR" else diff --git a/scripts/bash/setup-plan.sh b/scripts/bash/setup-plan.sh index a71b2a337c..e05ededf04 100755 --- a/scripts/bash/setup-plan.sh +++ b/scripts/bash/setup-plan.sh @@ -33,6 +33,12 @@ eval $(get_feature_paths) # Check if we're on a proper feature branch (only for git repos) check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1 +# Resolve team directives path if provided +if [[ -n "$TEAM_DIRECTIVES" && ! -d "$TEAM_DIRECTIVES" ]]; then + echo "ERROR: TEAM_DIRECTIVES path $TEAM_DIRECTIVES is not accessible." >&2 + exit 1 +fi + # Ensure the feature directory exists mkdir -p "$FEATURE_DIR" @@ -71,7 +77,10 @@ else CONSTITUTION_FILE="" fi -TEAM_DIRECTIVES_DIR="${SPECIFY_TEAM_DIRECTIVES:-}" +TEAM_DIRECTIVES_DIR="${TEAM_DIRECTIVES:-}" +if [[ -z "$TEAM_DIRECTIVES_DIR" ]]; then + TEAM_DIRECTIVES_DIR="${SPECIFY_TEAM_DIRECTIVES:-}" +fi if [[ -z "$TEAM_DIRECTIVES_DIR" ]]; then TEAM_DIRECTIVES_DIR="$REPO_ROOT/.specify/memory/team-ai-directives" fi diff --git a/src/specify_cli/__init__.py b/src/specify_cli/__init__.py index 4c03ed5ec6..63b61e69e9 100644 --- a/src/specify_cli/__init__.py +++ b/src/specify_cli/__init__.py @@ -399,15 +399,21 @@ def _run_git_command(args: list[str], cwd: Path | None = None, *, env: dict[str, return subprocess.run(cmd, check=True, capture_output=True, text=True, env=env) -def sync_team_ai_directives(repo_url: str, project_root: Path, *, skip_tls: bool = False) -> str: - """Clone or update the team-ai-directives repository under .specify/memory. +def sync_team_ai_directives(repo_url: str, project_root: Path, *, skip_tls: bool = False) -> tuple[str, Path]: + """Clone or update the team-ai-directives repository. - Returns a short status string describing the action performed. 
+ If repo_url is a local path, return it directly and skip cloning. + Returns a tuple of (status, path_to_use). """ repo_url = (repo_url or "").strip() if not repo_url: raise ValueError("Team AI directives repository URL cannot be empty") + # Detect local path + potential_path = Path(repo_url).expanduser() + if potential_path.exists() and potential_path.is_dir(): + return ("local", potential_path.resolve()) + memory_root = project_root / ".specify" / "memory" memory_root.mkdir(parents=True, exist_ok=True) destination = memory_root / TEAM_DIRECTIVES_DIRNAME @@ -432,14 +438,14 @@ def sync_team_ai_directives(repo_url: str, project_root: Path, *, skip_tls: bool _run_git_command(["remote", "set-url", "origin", repo_url], cwd=destination, env=git_env) _run_git_command(["pull", "--ff-only"], cwd=destination, env=git_env) - return "updated" + return ("updated", destination) if destination.exists() and not any(destination.iterdir()): shutil.rmtree(destination) memory_root.mkdir(parents=True, exist_ok=True) _run_git_command(["clone", repo_url, str(destination)], env=git_env) - return "cloned" + return ("cloned", destination) except subprocess.CalledProcessError as exc: message = exc.stderr.strip() if exc.stderr else str(exc) raise RuntimeError(f"Git operation failed: {message}") from exc @@ -1116,11 +1122,13 @@ def init( suppress_warning=gateway_suppress_warning, ) + resolved_team_directives: Path | None = None if team_ai_directives and team_ai_directives.strip(): tracker.start("directives", "syncing") try: - directives_status = sync_team_ai_directives(team_ai_directives, project_path, skip_tls=skip_tls) - tracker.complete("directives", directives_status) + status, resolved_path = sync_team_ai_directives(team_ai_directives, project_path, skip_tls=skip_tls) + resolved_team_directives = resolved_path + tracker.complete("directives", status) except Exception as e: tracker.error("directives", str(e)) raise @@ -1165,6 +1173,18 @@ def init( # Final static tree (ensures finished 
state visible after Live context ends) console.print(tracker.render()) console.print("\n[bold green]Project ready.[/bold green]") + + # Persist team directives path if available + if resolved_team_directives is None: + default_dir = project_path / ".specify" / "memory" / TEAM_DIRECTIVES_DIRNAME + if default_dir.exists(): + resolved_team_directives = default_dir + + if resolved_team_directives is not None: + os.environ["SPECIFY_TEAM_DIRECTIVES"] = str(resolved_team_directives) + config_dir = project_path / ".specify" / "config" + config_dir.mkdir(parents=True, exist_ok=True) + (config_dir / "team_directives.path").write_text(str(resolved_team_directives)) # Agent folder security notice agent_folder_map = { diff --git a/tests/test_team_directives.py b/tests/test_team_directives.py index aa52fc8818..8982082e32 100644 --- a/tests/test_team_directives.py +++ b/tests/test_team_directives.py @@ -18,9 +18,10 @@ def fake_run(cmd, check, capture_output, text, env=None): monkeypatch.setattr(subprocess, "run", fake_run) - status = sync_team_ai_directives("https://example.com/repo.git", tmp_path, skip_tls=True) + status, path = sync_team_ai_directives("https://example.com/repo.git", tmp_path, skip_tls=True) assert status == "cloned" + assert path == tmp_path / ".specify" / "memory" / TEAM_DIRECTIVES_DIRNAME memory_root = tmp_path / ".specify" / "memory" assert memory_root.exists() assert calls[0][0][:2] == ["git", "clone"] @@ -42,9 +43,10 @@ def fake_run(cmd, check, capture_output, text, env=None): monkeypatch.setattr(subprocess, "run", fake_run) - status = sync_team_ai_directives("https://example.com/repo.git", tmp_path) + status, path = sync_team_ai_directives("https://example.com/repo.git", tmp_path) assert status == "updated" + assert path == destination assert any(item[3] == "pull" for item in commands if len(item) > 3) assert commands[0][:4] == ["git", "-C", str(destination), "rev-parse"] @@ -81,3 +83,13 @@ def fake_run(cmd, check, capture_output, text, env=None): 
sync_team_ai_directives("https://example.com/repo.git", tmp_path) assert "fatal: error" in str(exc.value) + + +def test_sync_returns_local_path_when_given_directory(tmp_path): + local_repo = tmp_path / "team-ai-directives" + local_repo.mkdir() + + status, path = sync_team_ai_directives(str(local_repo), tmp_path) + + assert status == "local" + assert path == local_repo From d22b02180e5fc1bbb6ab085754a91f0be18d24c9 Mon Sep 17 00:00:00 2001 From: Lior Kanfi Date: Fri, 3 Oct 2025 18:00:10 +0300 Subject: [PATCH 06/95] orange theme --- .roo/commands/analyze.md | 101 +++ .roo/commands/clarify.md | 158 ++++ .roo/commands/constitution.md | 73 ++ .roo/commands/implement.md | 64 ++ .roo/commands/levelup.md | 62 ++ .roo/commands/plan.md | 52 ++ .roo/commands/specify.md | 30 + .roo/commands/tasks.md | 68 ++ .specify/config/gateway.env | 7 + .specify/memory/constitution.md | 50 ++ .specify/scripts/bash/check-prerequisites.sh | 178 +++++ .specify/scripts/bash/common.sh | 173 +++++ .specify/scripts/bash/create-new-feature.sh | 178 +++++ .specify/scripts/bash/prepare-levelup.sh | 65 ++ .specify/scripts/bash/setup-plan.sh | 114 +++ .specify/scripts/bash/update-agent-context.sh | 719 ++++++++++++++++++ .specify/templates/agent-file-template.md | 23 + .specify/templates/plan-template.md | 227 ++++++ .specify/templates/spec-template.md | 116 +++ .specify/templates/tasks-template.md | 129 ++++ src/specify_cli/__init__.py | 152 ++-- 21 files changed, 2671 insertions(+), 68 deletions(-) create mode 100644 .roo/commands/analyze.md create mode 100644 .roo/commands/clarify.md create mode 100644 .roo/commands/constitution.md create mode 100644 .roo/commands/implement.md create mode 100644 .roo/commands/levelup.md create mode 100644 .roo/commands/plan.md create mode 100644 .roo/commands/specify.md create mode 100644 .roo/commands/tasks.md create mode 100644 .specify/config/gateway.env create mode 100644 .specify/memory/constitution.md create mode 100755 
.specify/scripts/bash/check-prerequisites.sh create mode 100755 .specify/scripts/bash/common.sh create mode 100755 .specify/scripts/bash/create-new-feature.sh create mode 100755 .specify/scripts/bash/prepare-levelup.sh create mode 100755 .specify/scripts/bash/setup-plan.sh create mode 100755 .specify/scripts/bash/update-agent-context.sh create mode 100644 .specify/templates/agent-file-template.md create mode 100644 .specify/templates/plan-template.md create mode 100644 .specify/templates/spec-template.md create mode 100644 .specify/templates/tasks-template.md diff --git a/.roo/commands/analyze.md b/.roo/commands/analyze.md new file mode 100644 index 0000000000..f4c1a7bd97 --- /dev/null +++ b/.roo/commands/analyze.md @@ -0,0 +1,101 @@ +--- +description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation. +--- + +The user input to you can be provided directly by the agent or as a command argument - you **MUST** consider it before proceeding with the prompt (if not empty). + +User input: + +$ARGUMENTS + +Goal: Identify inconsistencies, duplications, ambiguities, and underspecified items across the three core artifacts (`spec.md`, `plan.md`, `tasks.md`) before implementation. This command MUST run only after `/tasks` has successfully produced a complete `tasks.md`. + +STRICTLY READ-ONLY: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (user must explicitly approve before any follow-up editing commands would be invoked manually). + +Constitution Authority: The project constitution (`.specify/memory/constitution.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks—not dilution, reinterpretation, or silent ignoring of the principle. 
If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/analyze`. + +Execution steps: + +1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` once from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS. Derive absolute paths: + - SPEC = FEATURE_DIR/spec.md + - PLAN = FEATURE_DIR/plan.md + - TASKS = FEATURE_DIR/tasks.md + Abort with an error message if any required file is missing (instruct the user to run missing prerequisite command). + +2. Load artifacts: + - Parse spec.md sections: Overview/Context, Functional Requirements, Non-Functional Requirements, User Stories, Edge Cases (if present). + - Parse plan.md: Architecture/stack choices, Data Model references, Phases, Technical constraints. + - Parse tasks.md: Task IDs, descriptions, phase grouping, parallel markers [P], referenced file paths. + - Load constitution `.specify/memory/constitution.md` for principle validation. + +3. Build internal semantic models: + - Requirements inventory: Each functional + non-functional requirement with a stable key (derive slug based on imperative phrase; e.g., "User can upload file" -> `user-can-upload-file`). + - User story/action inventory. + - Task coverage mapping: Map each task to one or more requirements or stories (inference by keyword / explicit reference patterns like IDs or key phrases). + - Constitution rule set: Extract principle names and any MUST/SHOULD normative statements. + +4. Detection passes: + A. Duplication detection: + - Identify near-duplicate requirements. Mark lower-quality phrasing for consolidation. + B. Ambiguity detection: + - Flag vague adjectives (fast, scalable, secure, intuitive, robust) lacking measurable criteria. + - Flag unresolved placeholders (TODO, TKTK, ???, , etc.). + C. Underspecification: + - Requirements with verbs but missing object or measurable outcome. + - User stories missing acceptance criteria alignment. 
+ - Tasks referencing files or components not defined in spec/plan. + D. Constitution alignment: + - Any requirement or plan element conflicting with a MUST principle. + - Missing mandated sections or quality gates from constitution. + E. Coverage gaps: + - Requirements with zero associated tasks. + - Tasks with no mapped requirement/story. + - Non-functional requirements not reflected in tasks (e.g., performance, security). + F. Inconsistency: + - Terminology drift (same concept named differently across files). + - Data entities referenced in plan but absent in spec (or vice versa). + - Task ordering contradictions (e.g., integration tasks before foundational setup tasks without dependency note). + - Conflicting requirements (e.g., one requires to use Next.js while other says to use Vue as the framework). + +5. Severity assignment heuristic: + - CRITICAL: Violates constitution MUST, missing core spec artifact, or requirement with zero coverage that blocks baseline functionality. + - HIGH: Duplicate or conflicting requirement, ambiguous security/performance attribute, untestable acceptance criterion. + - MEDIUM: Terminology drift, missing non-functional task coverage, underspecified edge case. + - LOW: Style/wording improvements, minor redundancy not affecting execution order. + +6. Produce a Markdown report (no file writes) with sections: + + ### Specification Analysis Report + | ID | Category | Severity | Location(s) | Summary | Recommendation | + |----|----------|----------|-------------|---------|----------------| + | A1 | Duplication | HIGH | spec.md:L120-134 | Two similar requirements ... | Merge phrasing; keep clearer version | + (Add one row per finding; generate stable IDs prefixed by category initial.) + + Additional subsections: + - Coverage Summary Table: + | Requirement Key | Has Task? 
| Task IDs | Notes | + - Constitution Alignment Issues (if any) + - Unmapped Tasks (if any) + - Metrics: + * Total Requirements + * Total Tasks + * Coverage % (requirements with >=1 task) + * Ambiguity Count + * Duplication Count + * Critical Issues Count + +7. At end of report, output a concise Next Actions block: + - If CRITICAL issues exist: Recommend resolving before `/implement`. + - If only LOW/MEDIUM: User may proceed, but provide improvement suggestions. + - Provide explicit command suggestions: e.g., "Run /specify with refinement", "Run /plan to adjust architecture", "Manually edit tasks.md to add coverage for 'performance-metrics'". + +8. Ask the user: "Would you like me to suggest concrete remediation edits for the top N issues?" (Do NOT apply them automatically.) + +Behavior rules: +- NEVER modify files. +- NEVER hallucinate missing sections—if absent, report them. +- KEEP findings deterministic: if rerun without changes, produce consistent IDs and counts. +- LIMIT total findings in the main table to 50; aggregate remainder in a summarized overflow note. +- If zero issues found, emit a success report with coverage statistics and proceed recommendation. + +Context: $ARGUMENTS diff --git a/.roo/commands/clarify.md b/.roo/commands/clarify.md new file mode 100644 index 0000000000..26ff530bd1 --- /dev/null +++ b/.roo/commands/clarify.md @@ -0,0 +1,158 @@ +--- +description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec. +--- + +The user input to you can be provided directly by the agent or as a command argument - you **MUST** consider it before proceeding with the prompt (if not empty). + +User input: + +$ARGUMENTS + +Goal: Detect and reduce ambiguity or missing decision points in the active feature specification and record the clarifications directly in the spec file. 
+ +Note: This clarification workflow is expected to run (and be completed) BEFORE invoking `/plan`. If the user explicitly states they are skipping clarification (e.g., exploratory spike), you may proceed, but must warn that downstream rework risk increases. + +Execution steps: + +1. Run `.specify/scripts/bash/check-prerequisites.sh --json --paths-only` from repo root **once** (combined `--json --paths-only` mode / `-Json -PathsOnly`). Parse minimal JSON payload fields: + - `FEATURE_DIR` + - `FEATURE_SPEC` + - (Optionally capture `IMPL_PLAN`, `TASKS` for future chained flows.) + - If JSON parsing fails, abort and instruct user to re-run `/specify` or verify feature branch environment. + +2. Load the current spec file. Perform a structured ambiguity & coverage scan using this taxonomy. For each category, mark status: Clear / Partial / Missing. Produce an internal coverage map used for prioritization (do not output raw map unless no questions will be asked). + + Functional Scope & Behavior: + - Core user goals & success criteria + - Explicit out-of-scope declarations + - User roles / personas differentiation + + Domain & Data Model: + - Entities, attributes, relationships + - Identity & uniqueness rules + - Lifecycle/state transitions + - Data volume / scale assumptions + + Interaction & UX Flow: + - Critical user journeys / sequences + - Error/empty/loading states + - Accessibility or localization notes + + Non-Functional Quality Attributes: + - Performance (latency, throughput targets) + - Scalability (horizontal/vertical, limits) + - Reliability & availability (uptime, recovery expectations) + - Observability (logging, metrics, tracing signals) + - Security & privacy (authN/Z, data protection, threat assumptions) + - Compliance / regulatory constraints (if any) + + Integration & External Dependencies: + - External services/APIs and failure modes + - Data import/export formats + - Protocol/versioning assumptions + + Edge Cases & Failure Handling: + - Negative 
scenarios + - Rate limiting / throttling + - Conflict resolution (e.g., concurrent edits) + + Constraints & Tradeoffs: + - Technical constraints (language, storage, hosting) + - Explicit tradeoffs or rejected alternatives + + Terminology & Consistency: + - Canonical glossary terms + - Avoided synonyms / deprecated terms + + Completion Signals: + - Acceptance criteria testability + - Measurable Definition of Done style indicators + + Misc / Placeholders: + - TODO markers / unresolved decisions + - Ambiguous adjectives ("robust", "intuitive") lacking quantification + + For each category with Partial or Missing status, add a candidate question opportunity unless: + - Clarification would not materially change implementation or validation strategy + - Information is better deferred to planning phase (note internally) + +3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints: + - Maximum of 5 total questions across the whole session. + - Each question must be answerable with EITHER: + * A short multiple‑choice selection (2–5 distinct, mutually exclusive options), OR + * A one-word / short‑phrase answer (explicitly constrain: "Answer in <=5 words"). + - Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation. + - Ensure category coverage balance: attempt to cover the highest impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved. + - Exclude questions already answered, trivial stylistic preferences, or plan-level execution details (unless blocking correctness). + - Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests. + - If more than 5 categories remain unresolved, select the top 5 by (Impact * Uncertainty) heuristic. + +4. 
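The "(Impact * Uncertainty)" selection in step 3 can be sketched as a scored sort; the 1-5 scoring scale and example questions below are hypothetical, since the template leaves the scale to the agent:

```python
# Sketch: select the top clarification questions by Impact * Uncertainty.
# Scores (1-5 here) are an assumed scale, not prescribed by the template.

def top_questions(candidates, limit=5):
    """candidates: list of (question, impact, uncertainty) tuples."""
    # Python's sort is stable, so equal scores keep their original order.
    ranked = sorted(candidates, key=lambda c: c[1] * c[2], reverse=True)
    return [question for question, _, _ in ranked[:limit]]

candidates = [
    ("Which auth model applies?", 5, 4),
    ("Preferred date format?", 1, 2),
    ("Expected peak concurrent users?", 4, 5),
    ("Tab vs space indentation?", 1, 1),
    ("Data retention period?", 4, 3),
    ("Error copy tone?", 2, 2),
]
# The lowest-scoring stylistic question is dropped once the cap of 5 is hit.
print(top_questions(candidates))
```

This mirrors the constraints above: a hard cap of 5, and low-impact stylistic questions losing out to high-impact unresolved areas such as security posture or scale.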
Sequential questioning loop (interactive): + - Present EXACTLY ONE question at a time. + - For multiple‑choice questions render options as a Markdown table: + + | Option | Description | + |--------|-------------| + | A |