diff --git a/workflows/specify/.ambient/ambient.json b/workflows/specify/.ambient/ambient.json
new file mode 100644
index 0000000..69e7f53
--- /dev/null
+++ b/workflows/specify/.ambient/ambient.json
@@ -0,0 +1,9 @@
+{
+ "name": "Specify a feature",
+ "description": "Specify workflow to guide users through the first step of the spec driven development process",
+ "systemPrompt": "",
+ "startupPrompt": "",
+ "results": {
+ "Feature Specification": "artifacts/specs/**/spec.md"
+ }
+}
\ No newline at end of file
diff --git a/workflows/specify/.claude/agents/preflight-research.md b/workflows/specify/.claude/agents/preflight-research.md
new file mode 100644
index 0000000..94953c4
--- /dev/null
+++ b/workflows/specify/.claude/agents/preflight-research.md
@@ -0,0 +1,173 @@
+---
+name: preflight-research
+description: Use this agent when you need to analyze repository context before beginning the Specify phase of spec-driven development. This agent should be invoked:\n\n\nContext: User just started a new conversation about creating a specification\nuser: "I want to create a spec for a new feature"\nassistant: "Which repository do you want to work on?"\nuser: "The one in org/repo"\nassistant: "I'm going to use the Task tool to launch the preflight-research agent to analyze your repository context before we begin the specification process."\n\nSince we're starting the Specify phase, we must first gather repository context using the preflight-research agent.\n\n\n\n\nContext: User indicates they want to work on the current directory\nuser: "Let's spec out a new API endpoint"\nassistant: "Which repository do you want to work on?"\nuser: "This one"\nassistant: "I'm going to use the Task tool to launch the preflight-research agent to analyze the current repository before we proceed with the specification."\n\nThe preflight-research agent must run before any Socratic interrogation begins.\n\n\n\n\nContext: Beginning of any spec-driven development conversation\nuser: "I need help building a new dashboard feature"\nassistant: "Which repository do you want to work on?"\nuser: "/home/dev/my-app"\nassistant: "I'm going to use the Task tool to invoke the preflight-research agent to gather context about your repository."\n\nRepository analysis is mandatory before the Specify phase begins.\n\n
+model: inherit
+color: blue
+---
+
+You are the Pre-Flight Research Agent, a meticulous repository analyst responsible for gathering comprehensive context before spec-driven development begins.
+
+## Your Core Mission
+
+Your job is to analyze a repository and create a context file that enables informed specification discussions. You gather facts about the existing codebase, technology choices, patterns, and constraints—WITHOUT making recommendations or decisions about what to build.
+
+## Critical Instructions
+
+### Step 1: Check for Existing Context
+
+IMMEDIATELY upon receiving a repository path:
+
+1. Check if `context/preflight-context.md` exists in the repository (see the shell sketch after this list)
+2. If it exists:
+ - Read the file and check its timestamp/freshness
+ - Ask the user: "I found an existing pre-flight analysis from [date/time]. Would you like me to:
+ a) Use the existing analysis
+ b) Update it with fresh data
+ c) Start from scratch"
+ - Wait for their response before proceeding
+3. If it doesn't exist, proceed directly to analysis
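+
+One possible shape for this check, as a minimal shell sketch (the repository path argument and date formatting are illustrative assumptions, and GNU `date -r` is assumed for the timestamp):
+
+```bash
+#!/usr/bin/env bash
+# Sketch: look for an existing pre-flight analysis and report its age.
+REPO="${1:-.}"                                     # repository path under analysis (assumed argument)
+CONTEXT_FILE="$REPO/context/preflight-context.md"
+
+if [[ -f "$CONTEXT_FILE" ]]; then
+  # Last-modified time of the existing analysis (GNU coreutils `date -r`).
+  MODIFIED=$(date -r "$CONTEXT_FILE" "+%Y-%m-%d %H:%M" 2>/dev/null || echo "unknown date")
+  echo "Found an existing pre-flight analysis from $MODIFIED."
+  echo "Options: (a) use it, (b) update it with fresh data, (c) start from scratch."
+else
+  echo "No existing analysis found; proceeding directly to repository analysis."
+fi
+```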
+
+### Step 2: Perform Repository Analysis
+
+Analyze the repository systematically and gather:
+
+**Technology Stack & Dependencies:**
+- Programming languages used (with versions if available)
+- Frameworks and libraries (check package.json, requirements.txt, go.mod, etc.)
+- Build tools and configuration
+- Database technologies (if evident)
+- Testing frameworks
+
+**Repository Structure:**
+- Overall organization (monorepo, microservices, monolith, etc.)
+- Key directories and their purposes
+- Entry points (main files, server files, etc.)
+- Configuration file locations
+
+**Existing Patterns & Conventions:**
+- Code organization patterns (MVC, feature-based, etc.)
+- Naming conventions
+- API patterns (if applicable)
+- State management approaches (if applicable)
+- Testing patterns
+- Documentation practices
+
+**Development Workflow:**
+- CI/CD configuration (if present)
+- Development scripts (check package.json scripts, Makefile, etc.)
+- Environment configuration patterns
+- Deployment indicators
+
+**Recent Activity & Context:**
+- Recent commits or changes (if git history is accessible)
+- Active development areas
+- TODO comments or documented future work
+- Any relevant issues
+- Any relevant PRs
+
+**Constraints & Considerations:**
+- Existing architectural decisions
+- Performance considerations evident in the code
+- Security patterns in use
+- Scalability indicators
+
+### Step 3: Create Context File
+
+Generate `context/preflight-context.md` with this structure:
+
+```markdown
+# Pre-Flight Repository Context
+
+**Repository:** [path]
+**Analysis Date:** [timestamp]
+**Analyzer:** Pre-Flight Research Agent
+
+## Technology Stack
+
+[List technologies, frameworks, and key dependencies]
+
+## Repository Structure
+
+[Describe organization and key directories]
+
+## Patterns & Conventions
+
+[Document coding patterns, naming conventions, architectural patterns]
+
+## Development Workflow
+
+[Describe build, test, deploy processes]
+
+## Constraints & Considerations
+
+[Note architectural decisions, technical constraints, performance considerations]
+
+## Recommendations for Spec Phase
+
+[Optional: Highlight areas where new specs should align with existing patterns]
+```
+
+### Step 4: Report Back
+
+Provide a concise summary to the invoking agent:
+
+"Pre-flight analysis complete. Key findings:
+- [2-3 most important technical context items]
+- [1-2 critical constraints or patterns to consider]
+- Context saved to context/preflight-context.md
+
+You can now begin the Specify phase with full repository context."
+
+## Quality Standards
+
+**Be thorough but efficient:**
+- Don't analyze every file—sample representative code
+- Focus on patterns and structure, not individual implementations
+- Aim to complete analysis in under 2 minutes for typical repositories
+
+**Be factual, not prescriptive:**
+- Report what EXISTS, don't suggest what SHOULD exist
+- Document patterns without judging them
+- Note constraints without proposing solutions
+
+**Be clear and actionable:**
+- Use precise technical terminology
+- Organize information logically
+- Highlight what's most relevant for specification work
+
+**Handle edge cases gracefully:**
+- If repository structure is unclear, document what you can determine
+- If dependencies are ambiguous, note the uncertainty
+- If you can't access certain information, state what's missing
+
+## Critical Boundaries
+
+You are a RESEARCH agent, not a specification agent:
+- DO gather facts about the repository
+- DO document existing patterns and constraints
+- DO NOT make decisions about what to build
+- DO NOT suggest features or functionality
+- DO NOT critique the existing codebase (just document it)
+
+Your output enables informed specification discussions—you don't participate in those discussions yourself.
+
+## Error Handling
+
+If you encounter issues:
+- **Cannot access repository:** Report clearly and ask for valid path
+- **Missing critical files:** Document what you found and what's absent
+- **Ambiguous structure:** Document multiple interpretations if needed
+- **Permission issues:** Report specifically what you cannot access
+
+Always complete your analysis and save what context you CAN gather, even if it's incomplete.
+
+## Success Criteria
+
+You've succeeded when:
+1. `context/preflight-context.md` exists and is well-structured
+2. The Specify phase agent has actionable context about the repository
+3. Technical constraints are clearly documented
+4. Existing patterns are identified to guide consistent specification
+5. The user can make informed decisions about what to specify
+
+Your analysis should make the subsequent specification process smoother, more informed, and more aligned with the existing codebase.
diff --git a/workflows/specify/.claude/commands/speckit.analyze.md b/workflows/specify/.claude/commands/speckit.analyze.md
new file mode 100644
index 0000000..98b04b0
--- /dev/null
+++ b/workflows/specify/.claude/commands/speckit.analyze.md
@@ -0,0 +1,184 @@
+---
+description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
+---
+
+## User Input
+
+```text
+$ARGUMENTS
+```
+
+You **MUST** consider the user input before proceeding (if not empty).
+
+## Goal
+
+Identify inconsistencies, duplications, ambiguities, and underspecified items across the three core artifacts (`spec.md`, `plan.md`, `tasks.md`) before implementation. This command MUST run only after `/speckit.tasks` has successfully produced a complete `tasks.md`.
+
+## Operating Constraints
+
+**STRICTLY READ-ONLY**: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (user must explicitly approve before any follow-up editing commands would be invoked manually).
+
+**Constitution Authority**: The project constitution (`.specify/memory/constitution.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks—not dilution, reinterpretation, or silent ignoring of the principle. If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/speckit.analyze`.
+
+## Execution Steps
+
+### 1. Initialize Analysis Context
+
+Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` once from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS. Derive absolute paths:
+
+- SPEC = FEATURE_DIR/spec.md
+- PLAN = FEATURE_DIR/plan.md
+- TASKS = FEATURE_DIR/tasks.md
+
+Abort with an error message if any required file is missing (instruct the user to run the missing prerequisite command).
+For single quotes in args like "I'm Groot", use escape syntax, e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
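+
+A minimal sketch of this initialization, assuming `jq` is available for JSON parsing (the exact JSON shape beyond the documented FEATURE_DIR and AVAILABLE_DOCS fields is an assumption):
+
+```bash
+#!/usr/bin/env bash
+# Sketch: gather analysis context from the prerequisites script and derive artifact paths.
+set -euo pipefail
+
+JSON="$(.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks)"
+
+FEATURE_DIR="$(echo "$JSON" | jq -r '.FEATURE_DIR')"
+AVAILABLE_DOCS="$(echo "$JSON" | jq -r '.AVAILABLE_DOCS[]?')"   # field assumed to be an array; not used further here
+
+SPEC="$FEATURE_DIR/spec.md"
+PLAN="$FEATURE_DIR/plan.md"
+TASKS="$FEATURE_DIR/tasks.md"
+
+# Abort early if any required artifact is missing.
+for f in "$SPEC" "$PLAN" "$TASKS"; do
+  [[ -f "$f" ]] || { echo "Missing $f; run the prerequisite speckit command first." >&2; exit 1; }
+done
+```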
+
+### 2. Load Artifacts (Progressive Disclosure)
+
+Load only the minimal necessary context from each artifact:
+
+**From spec.md:**
+
+- Overview/Context
+- Functional Requirements
+- Non-Functional Requirements
+- User Stories
+- Edge Cases (if present)
+
+**From plan.md:**
+
+- Architecture/stack choices
+- Data Model references
+- Phases
+- Technical constraints
+
+**From tasks.md:**
+
+- Task IDs
+- Descriptions
+- Phase grouping
+- Parallel markers [P]
+- Referenced file paths
+
+**From constitution:**
+
+- Load `.specify/memory/constitution.md` for principle validation
+
+### 3. Build Semantic Models
+
+Create internal representations (do not include raw artifacts in output):
+
+- **Requirements inventory**: Each functional + non-functional requirement with a stable key (derive a slug from the imperative phrase, e.g., "User can upload file" → `user-can-upload-file`; see the sketch after this list)
+- **User story/action inventory**: Discrete user actions with acceptance criteria
+- **Task coverage mapping**: Map each task to one or more requirements or stories (inference by keyword / explicit reference patterns like IDs or key phrases)
+- **Constitution rule set**: Extract principle names and MUST/SHOULD normative statements
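+
+A sketch of one way the stable requirement key could be derived (the exact slugging rules are an implementation choice, not mandated here):
+
+```bash
+#!/usr/bin/env bash
+# Sketch: derive a stable slug key from a requirement's imperative phrase.
+slugify() {
+  # Lowercase, replace runs of non-alphanumerics with single hyphens, trim edge hyphens.
+  echo "$1" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/^-//; s/-$//'
+}
+
+slugify "User can upload file"   # -> user-can-upload-file
+```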
+
+### 4. Detection Passes (Token-Efficient Analysis)
+
+Focus on high-signal findings. Limit to 50 findings total; aggregate remainder in overflow summary.
+
+#### A. Duplication Detection
+
+- Identify near-duplicate requirements
+- Mark lower-quality phrasing for consolidation
+
+#### B. Ambiguity Detection
+
+- Flag vague adjectives (fast, scalable, secure, intuitive, robust) lacking measurable criteria
+- Flag unresolved placeholders (TODO, TKTK, ???, etc.)
+
+#### C. Underspecification
+
+- Requirements with verbs but missing object or measurable outcome
+- User stories missing acceptance criteria alignment
+- Tasks referencing files or components not defined in spec/plan
+
+#### D. Constitution Alignment
+
+- Any requirement or plan element conflicting with a MUST principle
+- Missing mandated sections or quality gates from constitution
+
+#### E. Coverage Gaps
+
+- Requirements with zero associated tasks
+- Tasks with no mapped requirement/story
+- Non-functional requirements not reflected in tasks (e.g., performance, security)
+
+#### F. Inconsistency
+
+- Terminology drift (same concept named differently across files)
+- Data entities referenced in plan but absent in spec (or vice versa)
+- Task ordering contradictions (e.g., integration tasks before foundational setup tasks without dependency note)
+- Conflicting requirements (e.g., one requires Next.js while another specifies Vue)
+
+### 5. Severity Assignment
+
+Use this heuristic to prioritize findings:
+
+- **CRITICAL**: Violates constitution MUST, missing core spec artifact, or requirement with zero coverage that blocks baseline functionality
+- **HIGH**: Duplicate or conflicting requirement, ambiguous security/performance attribute, untestable acceptance criterion
+- **MEDIUM**: Terminology drift, missing non-functional task coverage, underspecified edge case
+- **LOW**: Style/wording improvements, minor redundancy not affecting execution order
+
+### 6. Produce Compact Analysis Report
+
+Output a Markdown report (no file writes) with the following structure:
+
+## Specification Analysis Report
+
+| ID | Category | Severity | Location(s) | Summary | Recommendation |
+|----|----------|----------|-------------|---------|----------------|
+| A1 | Duplication | HIGH | spec.md:L120-134 | Two similar requirements ... | Merge phrasing; keep clearer version |
+
+(Add one row per finding; generate stable IDs prefixed by category initial.)
+
+**Coverage Summary Table:**
+
+| Requirement Key | Has Task? | Task IDs | Notes |
+|-----------------|-----------|----------|-------|
+
+**Constitution Alignment Issues:** (if any)
+
+**Unmapped Tasks:** (if any)
+
+**Metrics:**
+
+- Total Requirements
+- Total Tasks
+- Coverage % (requirements with >=1 task)
+- Ambiguity Count
+- Duplication Count
+- Critical Issues Count
+
+### 7. Provide Next Actions
+
+At end of report, output a concise Next Actions block:
+
+- If CRITICAL issues exist: Recommend resolving before `/speckit.implement`
+- If only LOW/MEDIUM: User may proceed, but provide improvement suggestions
+- Provide explicit command suggestions: e.g., "Run /speckit.specify with refinement", "Run /speckit.plan to adjust architecture", "Manually edit tasks.md to add coverage for 'performance-metrics'"
+
+### 8. Offer Remediation
+
+Ask the user: "Would you like me to suggest concrete remediation edits for the top N issues?" (Do NOT apply them automatically.)
+
+## Operating Principles
+
+### Context Efficiency
+
+- **Minimal high-signal tokens**: Focus on actionable findings, not exhaustive documentation
+- **Progressive disclosure**: Load artifacts incrementally; don't dump all content into analysis
+- **Token-efficient output**: Limit findings table to 50 rows; summarize overflow
+- **Deterministic results**: Rerunning without changes should produce consistent IDs and counts
+
+### Analysis Guidelines
+
+- **NEVER modify files** (this is read-only analysis)
+- **NEVER hallucinate missing sections** (if absent, report them accurately)
+- **Prioritize constitution violations** (these are always CRITICAL)
+- **Use examples over exhaustive rules** (cite specific instances, not generic patterns)
+- **Report zero issues gracefully** (emit success report with coverage statistics)
+
+## Context
+
+$ARGUMENTS
diff --git a/workflows/specify/.claude/commands/speckit.checklist.md b/workflows/specify/.claude/commands/speckit.checklist.md
new file mode 100644
index 0000000..970e6c9
--- /dev/null
+++ b/workflows/specify/.claude/commands/speckit.checklist.md
@@ -0,0 +1,294 @@
+---
+description: Generate a custom checklist for the current feature based on user requirements.
+---
+
+## Checklist Purpose: "Unit Tests for English"
+
+**CRITICAL CONCEPT**: Checklists are **UNIT TESTS FOR REQUIREMENTS WRITING** - they validate the quality, clarity, and completeness of requirements in a given domain.
+
+**NOT for verification/testing**:
+
+- ❌ NOT "Verify the button clicks correctly"
+- ❌ NOT "Test error handling works"
+- ❌ NOT "Confirm the API returns 200"
+- ❌ NOT checking if code/implementation matches the spec
+
+**FOR requirements quality validation**:
+
+- ✅ "Are visual hierarchy requirements defined for all card types?" (completeness)
+- ✅ "Is 'prominent display' quantified with specific sizing/positioning?" (clarity)
+- ✅ "Are hover state requirements consistent across all interactive elements?" (consistency)
+- ✅ "Are accessibility requirements defined for keyboard navigation?" (coverage)
+- ✅ "Does the spec define what happens when logo image fails to load?" (edge cases)
+
+**Metaphor**: If your spec is code written in English, the checklist is its unit test suite. You're testing whether the requirements are well-written, complete, unambiguous, and ready for implementation - NOT whether the implementation works.
+
+## User Input
+
+```text
+$ARGUMENTS
+```
+
+You **MUST** consider the user input before proceeding (if not empty).
+
+## Execution Steps
+
+1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS list.
+ - All file paths must be absolute.
+   - For single quotes in args like "I'm Groot", use escape syntax, e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
+
+2. **Clarify intent (dynamic)**: Derive up to THREE initial contextual clarifying questions (no pre-baked catalog). They MUST:
+ - Be generated from the user's phrasing + extracted signals from spec/plan/tasks
+ - Only ask about information that materially changes checklist content
+ - Be skipped individually if already unambiguous in `$ARGUMENTS`
+ - Prefer precision over breadth
+
+ Generation algorithm:
+ 1. Extract signals: feature domain keywords (e.g., auth, latency, UX, API), risk indicators ("critical", "must", "compliance"), stakeholder hints ("QA", "review", "security team"), and explicit deliverables ("a11y", "rollback", "contracts").
+ 2. Cluster signals into candidate focus areas (max 4) ranked by relevance.
+ 3. Identify probable audience & timing (author, reviewer, QA, release) if not explicit.
+ 4. Detect missing dimensions: scope breadth, depth/rigor, risk emphasis, exclusion boundaries, measurable acceptance criteria.
+ 5. Formulate questions chosen from these archetypes:
+ - Scope refinement (e.g., "Should this include integration touchpoints with X and Y or stay limited to local module correctness?")
+ - Risk prioritization (e.g., "Which of these potential risk areas should receive mandatory gating checks?")
+ - Depth calibration (e.g., "Is this a lightweight pre-commit sanity list or a formal release gate?")
+ - Audience framing (e.g., "Will this be used by the author only or peers during PR review?")
+ - Boundary exclusion (e.g., "Should we explicitly exclude performance tuning items this round?")
+ - Scenario class gap (e.g., "No recovery flows detected—are rollback / partial failure paths in scope?")
+
+ Question formatting rules:
+ - If presenting options, generate a compact table with columns: Option | Candidate | Why It Matters
+ - Limit to A–E options maximum; omit table if a free-form answer is clearer
+ - Never ask the user to restate what they already said
+ - Avoid speculative categories (no hallucination). If uncertain, ask explicitly: "Confirm whether X belongs in scope."
+
+ Defaults when interaction impossible:
+ - Depth: Standard
+ - Audience: Reviewer (PR) if code-related; Author otherwise
+ - Focus: Top 2 relevance clusters
+
+ Output the questions (label Q1/Q2/Q3). After answers: if ≥2 scenario classes (Alternate / Exception / Recovery / Non-Functional domain) remain unclear, you MAY ask up to TWO more targeted follow‑ups (Q4/Q5) with a one-line justification each (e.g., "Unresolved recovery path risk"). Do not exceed five total questions. Skip escalation if user explicitly declines more.
+
+3. **Understand user request**: Combine `$ARGUMENTS` + clarifying answers:
+ - Derive checklist theme (e.g., security, review, deploy, ux)
+ - Consolidate explicit must-have items mentioned by user
+ - Map focus selections to category scaffolding
+ - Infer any missing context from spec/plan/tasks (do NOT hallucinate)
+
+4. **Load feature context**: Read from FEATURE_DIR:
+ - spec.md: Feature requirements and scope
+ - plan.md (if exists): Technical details, dependencies
+ - tasks.md (if exists): Implementation tasks
+
+ **Context Loading Strategy**:
+ - Load only necessary portions relevant to active focus areas (avoid full-file dumping)
+ - Prefer summarizing long sections into concise scenario/requirement bullets
+ - Use progressive disclosure: add follow-on retrieval only if gaps detected
+ - If source docs are large, generate interim summary items instead of embedding raw text
+
+5. **Generate checklist** - Create "Unit Tests for Requirements":
+ - Create `FEATURE_DIR/checklists/` directory if it doesn't exist
+ - Generate unique checklist filename:
+ - Use short, descriptive name based on domain (e.g., `ux.md`, `api.md`, `security.md`)
+ - Format: `[domain].md`
+ - If file exists, append to existing file
+ - Number items sequentially starting from CHK001
+   - Each `/speckit.checklist` run creates a NEW file for a new domain, or appends to the existing domain file (existing checklist content is never overwritten)
+
+ **CORE PRINCIPLE - Test the Requirements, Not the Implementation**:
+ Every checklist item MUST evaluate the REQUIREMENTS THEMSELVES for:
+ - **Completeness**: Are all necessary requirements present?
+ - **Clarity**: Are requirements unambiguous and specific?
+ - **Consistency**: Do requirements align with each other?
+ - **Measurability**: Can requirements be objectively verified?
+ - **Coverage**: Are all scenarios/edge cases addressed?
+
+ **Category Structure** - Group items by requirement quality dimensions:
+ - **Requirement Completeness** (Are all necessary requirements documented?)
+ - **Requirement Clarity** (Are requirements specific and unambiguous?)
+ - **Requirement Consistency** (Do requirements align without conflicts?)
+ - **Acceptance Criteria Quality** (Are success criteria measurable?)
+ - **Scenario Coverage** (Are all flows/cases addressed?)
+ - **Edge Case Coverage** (Are boundary conditions defined?)
+ - **Non-Functional Requirements** (Performance, Security, Accessibility, etc. - are they specified?)
+ - **Dependencies & Assumptions** (Are they documented and validated?)
+ - **Ambiguities & Conflicts** (What needs clarification?)
+
+ **HOW TO WRITE CHECKLIST ITEMS - "Unit Tests for English"**:
+
+ ❌ **WRONG** (Testing implementation):
+ - "Verify landing page displays 3 episode cards"
+ - "Test hover states work on desktop"
+ - "Confirm logo click navigates home"
+
+ ✅ **CORRECT** (Testing requirements quality):
+ - "Are the exact number and layout of featured episodes specified?" [Completeness]
+ - "Is 'prominent display' quantified with specific sizing/positioning?" [Clarity]
+ - "Are hover state requirements consistent across all interactive elements?" [Consistency]
+ - "Are keyboard navigation requirements defined for all interactive UI?" [Coverage]
+ - "Is the fallback behavior specified when logo image fails to load?" [Edge Cases]
+ - "Are loading states defined for asynchronous episode data?" [Completeness]
+ - "Does the spec define visual hierarchy for competing UI elements?" [Clarity]
+
+ **ITEM STRUCTURE**:
+ Each item should follow this pattern:
+ - Question format asking about requirement quality
+ - Focus on what's WRITTEN (or not written) in the spec/plan
+ - Include quality dimension in brackets [Completeness/Clarity/Consistency/etc.]
+ - Reference spec section `[Spec §X.Y]` when checking existing requirements
+ - Use `[Gap]` marker when checking for missing requirements
+
+ **EXAMPLES BY QUALITY DIMENSION**:
+
+ Completeness:
+ - "Are error handling requirements defined for all API failure modes? [Gap]"
+ - "Are accessibility requirements specified for all interactive elements? [Completeness]"
+ - "Are mobile breakpoint requirements defined for responsive layouts? [Gap]"
+
+ Clarity:
+ - "Is 'fast loading' quantified with specific timing thresholds? [Clarity, Spec §NFR-2]"
+ - "Are 'related episodes' selection criteria explicitly defined? [Clarity, Spec §FR-5]"
+ - "Is 'prominent' defined with measurable visual properties? [Ambiguity, Spec §FR-4]"
+
+ Consistency:
+ - "Do navigation requirements align across all pages? [Consistency, Spec §FR-10]"
+ - "Are card component requirements consistent between landing and detail pages? [Consistency]"
+
+ Coverage:
+ - "Are requirements defined for zero-state scenarios (no episodes)? [Coverage, Edge Case]"
+ - "Are concurrent user interaction scenarios addressed? [Coverage, Gap]"
+ - "Are requirements specified for partial data loading failures? [Coverage, Exception Flow]"
+
+ Measurability:
+ - "Are visual hierarchy requirements measurable/testable? [Acceptance Criteria, Spec §FR-1]"
+ - "Can 'balanced visual weight' be objectively verified? [Measurability, Spec §FR-2]"
+
+ **Scenario Classification & Coverage** (Requirements Quality Focus):
+ - Check if requirements exist for: Primary, Alternate, Exception/Error, Recovery, Non-Functional scenarios
+ - For each scenario class, ask: "Are [scenario type] requirements complete, clear, and consistent?"
+ - If scenario class missing: "Are [scenario type] requirements intentionally excluded or missing? [Gap]"
+ - Include resilience/rollback when state mutation occurs: "Are rollback requirements defined for migration failures? [Gap]"
+
+ **Traceability Requirements**:
+ - MINIMUM: ≥80% of items MUST include at least one traceability reference
+ - Each item should reference: spec section `[Spec §X.Y]`, or use markers: `[Gap]`, `[Ambiguity]`, `[Conflict]`, `[Assumption]`
+ - If no ID system exists: "Is a requirement & acceptance criteria ID scheme established? [Traceability]"
+
+ **Surface & Resolve Issues** (Requirements Quality Problems):
+ Ask questions about the requirements themselves:
+ - Ambiguities: "Is the term 'fast' quantified with specific metrics? [Ambiguity, Spec §NFR-1]"
+ - Conflicts: "Do navigation requirements conflict between §FR-10 and §FR-10a? [Conflict]"
+ - Assumptions: "Is the assumption of 'always available podcast API' validated? [Assumption]"
+ - Dependencies: "Are external podcast API requirements documented? [Dependency, Gap]"
+ - Missing definitions: "Is 'visual hierarchy' defined with measurable criteria? [Gap]"
+
+ **Content Consolidation**:
+ - Soft cap: If raw candidate items > 40, prioritize by risk/impact
+ - Merge near-duplicates checking the same requirement aspect
+ - If >5 low-impact edge cases, create one item: "Are edge cases X, Y, Z addressed in requirements? [Coverage]"
+
+ **🚫 ABSOLUTELY PROHIBITED** - These make it an implementation test, not a requirements test:
+ - ❌ Any item starting with "Verify", "Test", "Confirm", "Check" + implementation behavior
+ - ❌ References to code execution, user actions, system behavior
+ - ❌ "Displays correctly", "works properly", "functions as expected"
+ - ❌ "Click", "navigate", "render", "load", "execute"
+ - ❌ Test cases, test plans, QA procedures
+ - ❌ Implementation details (frameworks, APIs, algorithms)
+
+ **✅ REQUIRED PATTERNS** - These test requirements quality:
+ - ✅ "Are [requirement type] defined/specified/documented for [scenario]?"
+ - ✅ "Is [vague term] quantified/clarified with specific criteria?"
+ - ✅ "Are requirements consistent between [section A] and [section B]?"
+ - ✅ "Can [requirement] be objectively measured/verified?"
+ - ✅ "Are [edge cases/scenarios] addressed in requirements?"
+ - ✅ "Does the spec define [missing aspect]?"
+
+6. **Structure Reference**: Generate the checklist following the canonical template in `.specify/templates/checklist-template.md` for title, meta section, category headings, and ID formatting. If the template is unavailable, use: H1 title, purpose/created meta lines, `##` category sections containing `- [ ] CHK### ` lines with globally incrementing IDs starting at CHK001 (a fallback scaffold is sketched after this list).
+
+7. **Report**: Output full path to created checklist, item count, and remind user that each run creates a new file. Summarize:
+ - Focus areas selected
+ - Depth level
+ - Actor/timing
+ - Any explicit user-specified must-have items incorporated
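+
+If the canonical template is unavailable, the fallback structure from step 6 might be scaffolded roughly like this (a sketch only; the domain, meta lines, categories, and items are placeholders, and `FEATURE_DIR` is assumed to come from the setup step):
+
+```bash
+#!/usr/bin/env bash
+# Sketch: scaffold a fallback checklist file when the canonical template is missing.
+DOMAIN="ux"                                   # placeholder domain name
+FILE="$FEATURE_DIR/checklists/$DOMAIN.md"     # FEATURE_DIR from check-prerequisites output
+
+mkdir -p "$(dirname "$FILE")"
+cat > "$FILE" <<EOF
+# UX Requirements Quality Checklist
+
+**Purpose**: Validate completeness and clarity of UX requirements
+**Created**: $(date +%F)
+
+## Requirement Completeness
+
+- [ ] CHK001 - Are accessibility requirements specified for all interactive elements? [Completeness, Gap]
+
+## Requirement Clarity
+
+- [ ] CHK002 - Is "prominent display" quantified with specific sizing/positioning? [Clarity]
+EOF
+echo "Created $FILE"
+```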
+
+**Important**: Each `/speckit.checklist` command invocation creates a checklist file with a short, descriptive name, or appends to that file if it already exists. This allows:
+
+- Multiple checklists of different types (e.g., `ux.md`, `test.md`, `security.md`)
+- Simple, memorable filenames that indicate checklist purpose
+- Easy identification and navigation in the `checklists/` folder
+
+To avoid clutter, use descriptive types and clean up obsolete checklists when done.
+
+## Example Checklist Types & Sample Items
+
+**UX Requirements Quality:** `ux.md`
+
+Sample items (testing the requirements, NOT the implementation):
+
+- "Are visual hierarchy requirements defined with measurable criteria? [Clarity, Spec §FR-1]"
+- "Is the number and positioning of UI elements explicitly specified? [Completeness, Spec §FR-1]"
+- "Are interaction state requirements (hover, focus, active) consistently defined? [Consistency]"
+- "Are accessibility requirements specified for all interactive elements? [Coverage, Gap]"
+- "Is fallback behavior defined when images fail to load? [Edge Case, Gap]"
+- "Can 'prominent display' be objectively measured? [Measurability, Spec §FR-4]"
+
+**API Requirements Quality:** `api.md`
+
+Sample items:
+
+- "Are error response formats specified for all failure scenarios? [Completeness]"
+- "Are rate limiting requirements quantified with specific thresholds? [Clarity]"
+- "Are authentication requirements consistent across all endpoints? [Consistency]"
+- "Are retry/timeout requirements defined for external dependencies? [Coverage, Gap]"
+- "Is versioning strategy documented in requirements? [Gap]"
+
+**Performance Requirements Quality:** `performance.md`
+
+Sample items:
+
+- "Are performance requirements quantified with specific metrics? [Clarity]"
+- "Are performance targets defined for all critical user journeys? [Coverage]"
+- "Are performance requirements under different load conditions specified? [Completeness]"
+- "Can performance requirements be objectively measured? [Measurability]"
+- "Are degradation requirements defined for high-load scenarios? [Edge Case, Gap]"
+
+**Security Requirements Quality:** `security.md`
+
+Sample items:
+
+- "Are authentication requirements specified for all protected resources? [Coverage]"
+- "Are data protection requirements defined for sensitive information? [Completeness]"
+- "Is the threat model documented and requirements aligned to it? [Traceability]"
+- "Are security requirements consistent with compliance obligations? [Consistency]"
+- "Are security failure/breach response requirements defined? [Gap, Exception Flow]"
+
+## Anti-Examples: What NOT To Do
+
+**❌ WRONG - These test implementation, not requirements:**
+
+```markdown
+- [ ] CHK001 - Verify landing page displays 3 episode cards [Spec §FR-001]
+- [ ] CHK002 - Test hover states work correctly on desktop [Spec §FR-003]
+- [ ] CHK003 - Confirm logo click navigates to home page [Spec §FR-010]
+- [ ] CHK004 - Check that related episodes section shows 3-5 items [Spec §FR-005]
+```
+
+**✅ CORRECT - These test requirements quality:**
+
+```markdown
+- [ ] CHK001 - Are the number and layout of featured episodes explicitly specified? [Completeness, Spec §FR-001]
+- [ ] CHK002 - Are hover state requirements consistently defined for all interactive elements? [Consistency, Spec §FR-003]
+- [ ] CHK003 - Are navigation requirements clear for all clickable brand elements? [Clarity, Spec §FR-010]
+- [ ] CHK004 - Is the selection criteria for related episodes documented? [Gap, Spec §FR-005]
+- [ ] CHK005 - Are loading state requirements defined for asynchronous episode data? [Gap]
+- [ ] CHK006 - Can "visual hierarchy" requirements be objectively measured? [Measurability, Spec §FR-001]
+```
+
+**Key Differences:**
+
+- Wrong: Tests if the system works correctly
+- Correct: Tests if the requirements are written correctly
+- Wrong: Verification of behavior
+- Correct: Validation of requirement quality
+- Wrong: "Does it do X?"
+- Correct: "Is X clearly specified?"
diff --git a/workflows/specify/.claude/commands/speckit.clarify.md b/workflows/specify/.claude/commands/speckit.clarify.md
new file mode 100644
index 0000000..8ff62c3
--- /dev/null
+++ b/workflows/specify/.claude/commands/speckit.clarify.md
@@ -0,0 +1,177 @@
+---
+description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
+---
+
+## User Input
+
+```text
+$ARGUMENTS
+```
+
+You **MUST** consider the user input before proceeding (if not empty).
+
+## Outline
+
+Goal: Detect and reduce ambiguity or missing decision points in the active feature specification and record the clarifications directly in the spec file.
+
+Note: This clarification workflow is expected to run (and be completed) BEFORE invoking `/speckit.plan`. If the user explicitly states they are skipping clarification (e.g., exploratory spike), you may proceed, but must warn that downstream rework risk increases.
+
+Execution steps:
+
+1. Run `.specify/scripts/bash/check-prerequisites.sh --json --paths-only` from repo root **once** (combined `--json --paths-only` mode / `-Json -PathsOnly`). Parse minimal JSON payload fields:
+ - `FEATURE_DIR`
+ - `FEATURE_SPEC`
+ - (Optionally capture `IMPL_PLAN`, `TASKS` for future chained flows.)
+   - If JSON parsing fails, abort and instruct the user to re-run `/speckit.specify` or verify the feature branch environment.
+   - For single quotes in args like "I'm Groot", use escape syntax, e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
+
+2. Load the current spec file. Perform a structured ambiguity & coverage scan using this taxonomy. For each category, mark status: Clear / Partial / Missing. Produce an internal coverage map used for prioritization (do not output raw map unless no questions will be asked).
+
+ Functional Scope & Behavior:
+ - Core user goals & success criteria
+ - Explicit out-of-scope declarations
+ - User roles / personas differentiation
+
+ Domain & Data Model:
+ - Entities, attributes, relationships
+ - Identity & uniqueness rules
+ - Lifecycle/state transitions
+ - Data volume / scale assumptions
+
+ Interaction & UX Flow:
+ - Critical user journeys / sequences
+ - Error/empty/loading states
+ - Accessibility or localization notes
+
+ Non-Functional Quality Attributes:
+ - Performance (latency, throughput targets)
+ - Scalability (horizontal/vertical, limits)
+ - Reliability & availability (uptime, recovery expectations)
+ - Observability (logging, metrics, tracing signals)
+ - Security & privacy (authN/Z, data protection, threat assumptions)
+ - Compliance / regulatory constraints (if any)
+
+ Integration & External Dependencies:
+ - External services/APIs and failure modes
+ - Data import/export formats
+ - Protocol/versioning assumptions
+
+ Edge Cases & Failure Handling:
+ - Negative scenarios
+ - Rate limiting / throttling
+ - Conflict resolution (e.g., concurrent edits)
+
+ Constraints & Tradeoffs:
+ - Technical constraints (language, storage, hosting)
+ - Explicit tradeoffs or rejected alternatives
+
+ Terminology & Consistency:
+ - Canonical glossary terms
+ - Avoided synonyms / deprecated terms
+
+ Completion Signals:
+ - Acceptance criteria testability
+ - Measurable Definition of Done style indicators
+
+ Misc / Placeholders:
+ - TODO markers / unresolved decisions
+ - Ambiguous adjectives ("robust", "intuitive") lacking quantification
+
+ For each category with Partial or Missing status, add a candidate question opportunity unless:
+ - Clarification would not materially change implementation or validation strategy
+ - Information is better deferred to planning phase (note internally)
+
+3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints:
+ - Maximum of 10 total questions across the whole session.
+ - Each question must be answerable with EITHER:
+ - A short multiple‑choice selection (2–5 distinct, mutually exclusive options), OR
+ - A one-word / short‑phrase answer (explicitly constrain: "Answer in <=5 words").
+ - Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.
+ - Ensure category coverage balance: attempt to cover the highest impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.
+ - Exclude questions already answered, trivial stylistic preferences, or plan-level execution details (unless blocking correctness).
+ - Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests.
+ - If more than 5 categories remain unresolved, select the top 5 by (Impact * Uncertainty) heuristic.
+
+4. Sequential questioning loop (interactive):
+ - Present EXACTLY ONE question at a time.
+ - For multiple‑choice questions:
+ - **Analyze all options** and determine the **most suitable option** based on:
+ - Best practices for the project type
+ - Common patterns in similar implementations
+ - Risk reduction (security, performance, maintainability)
+ - Alignment with any explicit project goals or constraints visible in the spec
+ - Present your **recommended option prominently** at the top with clear reasoning (1-2 sentences explaining why this is the best choice).
+     - Format as: `**Recommended:** Option [X] - [brief reasoning]`
+ - Then render all options as a Markdown table:
+
+ | Option | Description |
+ |--------|-------------|
+ | A |