giuseppe-trisciuoglio/developer-kit

Comprehensive developer toolkit providing reusable skills for Java/Spring Boot, TypeScript/NestJS/React/Next.js, Python, PHP, AWS CloudFormation, AI/RAG, DevOps, and more.


plugins/developer-kit-specs/commands/specs.spec-to-tasks.md

description: Converts functional specifications into executable and trackable tasks. Use when transforming a spec from devkit.brainstorm into a task list. Output: docs/specs/[id]/YYYY-MM-DD--feature-name--tasks.md plus individual task files
argument-hint: [ --lang=java|spring|typescript|nestjs|react|python|php|general ] [ --spec="spec-folder" ]
allowed-tools: Task, Read, Write, Edit, Bash, Grep, Glob, TodoWrite, AskUserQuestion
model: inherit

Specification to Tasks

Converts a functional specification into a list of executable, trackable tasks. This is the bridge between WHAT (specification) and HOW (implementation).

Overview

This command reads a functional specification generated by /specs:brainstorm and converts it into atomic, executable tasks.

Input: docs/specs/[id]/YYYY-MM-DD--feature-name.md

Output:

  • Task list: docs/specs/[id]/YYYY-MM-DD--feature-name--tasks.md
  • Individual tasks: docs/specs/[id]/tasks/TASK-XXX.md

Task Structure

Each task includes:

  • Title: Descriptive name for the task
  • Description: Functional description of what to implement
  • Acceptance Criteria: Testable conditions for completion
  • Dependencies: Other tasks that must complete first (if any)
  • Implementation Command: Pre-filled command to execute this task

Workflow Position

Idea → Functional Specification (brainstorm) → Architecture & Ontology Definition (this: Phase 1.5) → Tasks (this) → Implementation (task-implementation) → Review (task-review) → Code Cleanup (code-cleanup) → Done

Task Count Limit

CRITICAL: If task decomposition produces more than 15 implementation tasks, the specification is too large for a single implementation cycle and MUST be rejected:

  1. Detect oversized spec: After Phase 4 (Task Decomposition), count implementation tasks (excluding e2e and cleanup tasks)

  2. If > 15 tasks:

    • STOP task generation immediately
    • Inform the user with this message:
    Specification Too Large
    
    This specification would generate X implementation tasks, which exceeds the maximum of 15.
    The scope is too large for a single implementation cycle.
    
    Recommended action:
    1. Return to /specs:brainstorm
    2. Split your idea into 2 or more smaller, focused specifications
    3. Run /specs:spec-to-tasks for each specification separately
    
    Example split strategy:
    - Spec Part 1: Core functionality (must-have for initial release)
    - Spec Part 2: Extended features (phase 2 or nice-to-have)
    - Spec Part 3: Additional capabilities (future iterations)
    
    This ensures each specification has a clear functional scope and manageable implementation scope.
    • Use AskUserQuestion to offer options:
      • Options:
        • "Return to /specs:brainstorm to split this idea" (recommended)
        • "Continue anyway with this large specification" (not recommended - proceed at user risk)
    • If user chooses "Return to brainstorm": abort task generation and suggest running /specs:brainstorm with the split idea
    • If user chooses "Continue anyway": proceed with warning logged in summary
  3. If <= 15 tasks: Proceed normally with task generation
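
The gate above can be sketched as a small helper. This is a minimal sketch, assuming tasks are represented as dicts with a `kind` field distinguishing implementation tasks from e2e and cleanup tasks (a hypothetical schema, not defined by this command):

```python
# Hypothetical task representation: dicts with a "kind" field of
# "implementation", "e2e", or "cleanup". Only implementation tasks
# count toward the limit.
MAX_IMPLEMENTATION_TASKS = 15

def check_task_count(tasks):
    """Return (ok, count); ok is False when the spec must be rejected."""
    count = sum(1 for t in tasks if t.get("kind") == "implementation")
    return count <= MAX_IMPLEMENTATION_TASKS, count

tasks = [{"kind": "implementation"}] * 17 + [{"kind": "e2e"}, {"kind": "cleanup"}]
ok, count = check_task_count(tasks)
if not ok:
    print(f"Specification Too Large: {count} implementation tasks exceeds the maximum of 15.")
```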

Usage

# Basic usage - specify spec file or folder
/specs:spec-to-tasks docs/specs/001-hotel-search-aggregation/
/specs:spec-to-tasks docs/specs/001-hotel-search-aggregation/2026-03-07--hotel-search-aggregation.md

# With language specification
/specs:spec-to-tasks --lang=spring docs/specs/001-user-auth/
/specs:spec-to-tasks --lang=typescript docs/specs/001-user-auth/
/specs:spec-to-tasks --lang=nestjs docs/specs/001-user-auth/
/specs:spec-to-tasks --lang=react docs/specs/001-user-auth/
/specs:spec-to-tasks --lang=python docs/specs/001-user-auth/
/specs:spec-to-tasks --lang=general docs/specs/001-user-auth/

Arguments

| Argument | Required | Description |
|----------|----------|-------------|
| --lang | Recommended | Target language/framework: java, spring, typescript, nestjs, react, python, php, general. Required for codebase analysis and technical task generation |
| spec-file | No | Path to spec file or spec folder (e.g., docs/specs/001-feature-name/, docs/specs/001-feature-name/2026-03-07--feature-name.md, or legacy *-specs.md) |

Current Context

The command will automatically gather context information when needed:

  • Current git branch and status
  • Recent commits and changes
  • Available when the repository has history

You are converting a functional specification into executable tasks. Follow a systematic approach: analyze requirements, identify dependencies, generate atomic tasks, and create a trackable task list.

Core Principles

  • Atomic tasks: Each task should be implementable in a single focused effort
  • Clear dependencies: Explicitly state which tasks depend on others
  • Testable criteria: Each task must have clear acceptance criteria
  • Technical detail: Tasks include technical context from codebase analysis
  • Codebase-aware: Tasks reference existing patterns, APIs, and structures in the project
  • Use TodoWrite: Track all progress throughout
  • No time estimates: DO NOT provide or request time estimates
  • Test instructions: Each task MUST include explicit, detailed instructions on what to test, specifying the test types (unit, integration) and the behaviors to verify. Tasks must NEVER include test code.
  • Mandatory final tasks: Every spec MUST end with (1) an e2e test task and (2) a code cleanup task
  • Spec size limit: If > 15 implementation tasks, reject and recommend returning to brainstorm

Phase 1: Specification Analysis

Goal: Read and understand the functional specification

Input: $ARGUMENTS (spec file or folder path)

Actions:

  1. Create todo list with all phases

  2. Parse $ARGUMENTS to extract:

    • --lang parameter (language/framework for implementation)
    • spec-path (path to spec file or folder)
  3. Determine the spec folder:

    • If a file path is provided: use its parent directory and that file as the specification
    • If a folder path is provided: use the folder directly
    • Resolve the specification file with this priority:
      1. YYYY-MM-DD--feature-name.md
      2. the only dated spec-like markdown file in the folder excluding --tasks.md, decision-log.md, traceability-matrix.md, user-request.md, and brainstorming-notes.md
  4. Read the resolved functional specification file

  5. CRITICAL: Look for user context files in the spec folder:

    • Search for these specific files that contain the original user request:
      • user-request.md - Original user request (from brainstorming - PRIMARY)
      • brainstorming-notes.md - Notes from brainstorming session (SECONDARY)
    • Read these files and incorporate their content into your analysis
    • These files are created by the brainstorming command and contain critical context
  6. Extract the spec ID from folder name (e.g., 001-hotel-search-aggregation)

  7. Verify the specification exists and is valid

  8. If file not found:

    • First auto-detect from the folder using the rules above
    • Only ask the user if multiple plausible spec files exist or none can be resolved
  9. Quality Pre-Check (Soft Gate):

    • Check for section ## Clarifications in the spec file (added by spec-review)
    • Search for vague terms: grep for "suitable|efficient|robust|fast|intuitive"
    • If Clarifications section is missing AND vague terms are found:
      • Warning: "Spec not reviewed. Vague terms detected: [list terms found]"
      • Ask via AskUserQuestion:
        • Options:
          • "Continue anyway" (proceed at user risk)
          • "Run spec-review first" (recommended: /devkit.spec-review docs/specs/[id]/)
    • If Clarifications section exists OR no vague terms found: proceed without warning

Phase 1.5: Architecture & Ontology Definition

Goal: Ensure the project-level architecture and ontology documents exist and are consistent before generating tasks. This phase bridges the gap between WHAT (functional specification) and HOW (technical tasks).

Context: The architecture and ontology documents live at the docs/specs/ level (shared across all specifications):

  • docs/specs/architecture.md — Formalizes technological and infrastructural choices
  • docs/specs/ontology.md — Establishes common domain language (Ubiquitous Language)

Step 1: Architecture Definition (docs/specs/architecture.md)

  1. Check if docs/specs/architecture.md exists:

    • If the file does NOT exist:

      1. Inform the user: "No project architecture document found. Before generating tasks, we need to define the project architecture."

      2. Use AskUserQuestion to gather architecture information through targeted questions:

        Question 1 — Software Stack:

        What is the primary technology stack for this project?
        • Options (adapt based on --lang parameter if provided):
          • "Java / Spring Boot"
          • "TypeScript / NestJS"
          • "TypeScript / React"
          • "Python / Django or FastAPI"
          • "PHP / Laravel or Symfony"
          • Freeform: allow custom answer

        Question 2 — Data Architecture:

        What database and data management approach does the project use?
        • Options:
          • "PostgreSQL (relational)"
          • "MySQL (relational)"
          • "MongoDB (document-based)"
          • "Multiple databases (polyglot persistence)"
          • Freeform: allow custom answer

        Question 3 — Infrastructure:

        What hosting and infrastructure approach is used?
        • Options:
          • "AWS (EC2, ECS, Lambda, etc.)"
          • "Docker / Docker Compose (local or self-hosted)"
          • "Kubernetes"
          • "Serverless (AWS Lambda, GCP Cloud Functions)"
          • "Not yet decided"
          • Freeform: allow custom answer
      3. Create docs/specs/architecture.md using the gathered information:

        # Project Architecture
        
        **Created**: [current date YYYY-MM-DD]
        **Last Updated**: [current date YYYY-MM-DD]
        
        ## Software Stack
        
        | Component | Technology | Notes |
        |-----------|-----------|-------|
        | Language | [e.g., TypeScript] | [version if known] |
        | Framework | [e.g., NestJS] | [version if known] |
        | Key Libraries | [e.g., Drizzle ORM, Passport] | |
        
        ## Data Architecture
        
        | Component | Technology | Notes |
        |-----------|-----------|-------|
        | Primary Database | [e.g., PostgreSQL] | |
        | Caching | [e.g., Redis, none] | |
        | ORM / Data Access | [e.g., Drizzle, Hibernate] | |
        | Migrations | [e.g., Flyway, Drizzle Kit] | |
        
        ## Infrastructure
        
        | Component | Technology | Notes |
        |-----------|-----------|-------|
        | Hosting | [e.g., AWS ECS] | |
        | CI/CD | [e.g., GitHub Actions] | |
        | Containerization | [e.g., Docker] | |
        | Orchestration | [e.g., Kubernetes, none] | |
        
        ## Architecture Decisions
        
        > Significant modifications to this architecture document must be tracked
        > via **ADR (Architecture Decision Records)** using the `adr-drafting` skill.
        >
        > ADR location: `docs/architecture/adr/` (or project-specific convention)
      4. Log the creation and present to the user for final confirmation

    • If the file ALREADY exists:

      1. Read docs/specs/architecture.md
      2. Load the architecture context into memory for use in task generation (Phase 4)
      3. Briefly summarize what was loaded:
        Loaded project architecture:
        - Stack: [language/framework]
        - Database: [database]
        - Infrastructure: [hosting]
      4. Check for conflicts: If the --lang parameter conflicts with the architecture document (e.g., --lang=spring but architecture says TypeScript), warn the user via AskUserQuestion:
        • "The --lang parameter ([lang]) doesn't match the architecture document ([architecture stack]). Which should I use?"
        • Options: "Use --lang parameter", "Use architecture document", "Update architecture document"

Step 2: Ontology Definition (docs/specs/ontology.md)

  1. Check if docs/specs/ontology.md exists:

    • If the file does NOT exist:

      1. Extract domain terms from the specification loaded in Phase 1
      2. Use AskUserQuestion to present identified terms and gather additional ones:
        I identified the following domain terms from the specification:
        - [Term 1]: [proposed definition]
        - [Term 2]: [proposed definition]
        - ...
        
        Should I create the project ontology with these terms? You can also add or adjust terms.
        • Options:
          • "Yes, create with these terms" (recommended)
          • "Yes, but let me adjust the terms first"
          • "Skip ontology creation for now"
      3. If confirmed, create docs/specs/ontology.md:
        # Project Ontology — Ubiquitous Language
        
        **Created**: [current date YYYY-MM-DD]
        **Last Updated**: [current date YYYY-MM-DD]
        
        ## Domain Glossary
        
        | Term | Definition | Bounded Context |
        |------|-----------|-----------------|
        | [Term 1] | [Definition] | [Context where this term applies] |
        | [Term 2] | [Definition] | [Context where this term applies] |
        
        ## Bounded Contexts
        
        | Context | Description | Key Terms |
        |---------|-------------|-----------|
        | [Context 1] | [Description of this bounded context] | [Key terms] |
        
        ## Conceptual Mapping
        
        [Relationships between key domain entities]
    • If the file ALREADY exists:

      1. Read docs/specs/ontology.md
      2. Load the ontology context into memory for use in task generation
      3. Extract domain terms from the current specification
      4. Compare against existing glossary entries
      5. If NEW terms are identified:
        • Append them to the Domain Glossary table
        • Update the Last Updated date
        • Inform the user of the additions
      6. If no new terms: continue silently

Step 3: Context Summary

After both documents are processed, produce a brief summary:

Architecture & Ontology Context:
- Architecture: [loaded/created] — [stack summary]
- Ontology: [loaded/created/skipped] — [N terms in glossary]
- Both documents will inform task generation in Phase 4.

Phase 2: Requirement Extraction

Goal: Extract and organize requirements from the specification

Actions:

  1. Analyze the specification for:

    • User stories and use cases
    • Business rules
    • Acceptance criteria
    • Integration requirements
  2. CRITICAL: Include technical requirements from context files:

    • Review user-request.md and brainstorming-notes.md found in Phase 1
    • Extract technical details that were provided during brainstorming:
      • Architecture patterns (message queues, caching, async processing)
      • Specific technologies mentioned (Redis, RabbitMQ, specific databases)
      • Integration patterns and external services
      • Any implementation hints or technical constraints
    • These technical requirements must be reflected in the task decomposition
    • Example: If context mentions "RabbitMQ queues", there must be tasks for queue configuration and consumers
  3. Group related requirements:

    • Identify naturally atomic units of work
    • Note dependencies between requirements
    • Prioritize based on dependencies (what can be done first)
  4. CRITICAL: Verify against original user request:

    • Using the user-request.md content already read in Phase 1
    • Compare the extracted requirements against the original user request
    • Identify any requirements that are mentioned in the original request but NOT captured in the specification
    • If discrepancies found, use AskUserQuestion to present them to the user:
      • "I found requirements from your original request that may not be in the specification:"
        • [list missing requirements]
      • Options:
        • "Add missing requirements to task list" (recommended)
        • "Regenerate specification to include them"
        • "Continue with current specification"
  5. Present the extracted requirements structure (including technical requirements) to user for confirmation

  6. Assign unique REQ-IDs to each extracted requirement:

    • For each requirement identified (from user stories, business rules, acceptance criteria), assign a unique identifier: REQ-001, REQ-002, etc.
    • Document REQ-IDs in a requirements list for later traceability:
      REQ-001: [User story / requirement text]
      REQ-002: [Business rule / requirement text]
      REQ-003: [Acceptance criterion / requirement text]
      ...
    • These REQ-IDs will be used in the Traceability Matrix to map requirements → tasks → tests → code
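
As a minimal sketch of step 6, REQ-IDs can be assigned by enumeration, with an empty traceability row per requirement to be filled in later (the names here are illustrative, not part of the command):

```python
def assign_req_ids(requirements):
    """Map each extracted requirement to a sequential REQ-XXX identifier."""
    return {f"REQ-{i:03d}": text for i, text in enumerate(requirements, start=1)}

reqs = assign_req_ids([
    "User can register with a unique email",
    "Passwords are stored hashed",
])
# Each REQ-ID later maps to tasks, tests, and code in the Traceability Matrix.
matrix = {req_id: {"tasks": [], "tests": [], "code": []} for req_id in reqs}
```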

Phase 2.5: Check/Load Knowledge Graph

Goal: Check if cached codebase analysis exists from previous runs

Prerequisite: Requires --lang parameter and spec folder path

Actions:

  1. Check for existing Knowledge Graph:

    • Look for knowledge-graph.json in the spec folder
    • If file doesn't exist, skip to Phase 3 (Codebase Analysis)
  2. If Knowledge Graph exists:

    • Read metadata.updated_at timestamp
    • Calculate age: current_time - updated_at
    • Load and summarize key findings:
      • Count of patterns discovered
      • Count of components (controllers, services, repositories)
      • Count of APIs (internal/external)
      • Technology stack identified
  3. Present summary to user:

    Found cached codebase analysis from X days ago:
    - Y architectural patterns (Repository, Service Layer, etc.)
    - Z components (N controllers, M services, K repositories)
    - Q API endpoints documented
    - Technology stack: [framework] [version]
    
    The analysis is [fresh/getting stale/old].
  4. Choose reuse strategy automatically unless the case is borderline:

    • If KG is < 7 days old: use cached analysis automatically
    • If KG is > 30 days old: re-explore automatically
    • If KG is 7-30 days old: ask user via AskUserQuestion whether to reuse or refresh
  5. Based on the chosen strategy:

    • Use cached: Load full KG into context, skip Phase 3, proceed to Phase 4
    • Re-explore: Proceed to Phase 3 (Codebase Analysis)
  6. If using cached KG:

    • Query KG for all patterns: query knowledge-graph [spec-folder] patterns
    • Query KG for all components: query knowledge-graph [spec-folder] components
    • Query KG for all APIs: query knowledge-graph [spec-folder] apis
    • Store results in context for Phase 4 (Task Decomposition)
    • Note: "Proceeding with cached analysis from X days ago"
  7. Check and Load Global Knowledge Graph (if exists):

    • Look for docs/specs/.global-knowledge-graph.json
    • If file exists:
      • Extract project-level patterns that apply across all specs
      • Load patterns.architectural and patterns.conventions
      • Note: "Also loaded N global patterns from project-level analysis"
      • Use global patterns as supplementary context (project conventions to consider)

Phase 3: Codebase Analysis

Goal: Understand existing codebase to generate technically accurate tasks

Prerequisite: This phase requires --lang parameter to select appropriate agents

Actions:

  1. Based on --lang parameter, select appropriate codebase exploration agent:
| Language | Agent |
|----------|-------|
| java / spring | developer-kit-java:java-software-architect-review |
| typescript / nestjs | developer-kit-typescript:typescript-software-architect-review |
| react | developer-kit-typescript:react-software-architect-review |
| python | developer-kit-python:python-software-architect-expert |
| php | developer-kit-php:php-software-architect-expert |
| general | developer-kit:general-code-explorer |
  2. Launch the agent with a language-specific prompt to explore the codebase:

For java / spring:

Explore the Java/Spring Boot codebase to understand:

1. **Project Structure**:
   - Package organization (domain-driven, layered, etc.)
   - Build configuration (Maven/Gradle, pom.xml/build.gradle)
   - Main application class and entry points

2. **Spring Patterns**:
   - Spring Data JPA repositories and entity mapping
   - Spring Security configuration and auth patterns
   - REST controller conventions (@RestController, @RequestMapping)
   - Service layer patterns (@Service, transaction management)
   - Configuration properties (@ConfigurationProperties)

3. **Data Layer**:
   - Entity/DTO patterns
   - Database migrations (Flyway, Liquibase)
   - ORM patterns (Hibernate)

4. **Testing Patterns**:
   - Test directory structure
   - Testing conventions (JUnit 5, Mockito)
   - Integration test setup

Provide a summary that will inform task generation with Spring-specific context.

For typescript / nestjs:

Explore the TypeScript/NestJS codebase to understand:

1. **Project Structure**:
   - Module organization
   - TypeScript configuration (tsconfig.json)
   - NestJS module structure

2. **NestJS Patterns**:
   - Controller conventions (@Controller, @Get, @Post, etc.)
   - Service layer patterns (@Injectable, providers)
   - Module organization (@Module)
   - Dependency injection setup
   - Guards and interceptors

3. **Data Access**:
   - ORM usage (TypeORM, Drizzle, Prisma)
   - Repository patterns
   - Database migrations

4. **Testing Patterns**:
   - Jest configuration
   - Unit vs integration test structure

Provide a summary that will inform task generation with NestJS-specific context.

For react:

Explore the React codebase to understand:

1. **Project Structure**:
   - App organization (Next.js, Remix, or CRA/Vite)
   - Routing structure
   - Component directory layout

2. **React Patterns**:
   - Component patterns (functional, hooks)
   - State management (Context, Redux, Zustand, etc.)
   - API communication (React Query, SWR, fetch)
   - Form handling patterns

3. **Styling**:
   - CSS approach (CSS modules, Tailwind, styled-components)
   - Component library usage

4. **Testing Patterns**:
   - Testing library (Jest, Vitest, React Testing Library)
   - Component testing conventions

Provide a summary that will inform task generation with React-specific context.

For python:

Explore the Python codebase to understand:

1. **Project Structure**:
   - Package organization
   - requirements.txt, setup.py, or pyproject.toml
   - Entry points (main.py, __main__.py)

2. **Python Patterns**:
   - Web framework (Django, FastAPI, Flask)
   - Data models (SQLAlchemy, Pydantic, Django ORM)
   - API patterns (REST, GraphQL)
   - Authentication patterns

3. **Testing Patterns**:
   - pytest configuration
   - Test directory structure
   - Mocking conventions

Provide a summary that will inform task generation with Python-specific context.

For php:

Explore the PHP codebase to understand:

1. **Project Structure**:
   - Composer-based project organization
   - Laravel directory structure or custom MVC

2. **PHP Patterns**:
   - Framework conventions (Laravel, Symfony)
   - ORM usage (Eloquent, Doctrine)
   - Controller patterns
   - Routing and middleware

3. **Testing Patterns**:
   - PHPUnit configuration
   - Feature vs unit test structure

Provide a summary that will inform task generation with PHP-specific context.

For general:

Explore the codebase to understand:

1. **Project Structure**:
   - Main directories and their purpose
   - Configuration files (package.json, pom.xml, requirements.txt, etc.)
   - Entry points and main modules

2. **Existing Patterns**:
   - Data models/schemas used
   - API patterns (REST, GraphQL, etc.)
   - Authentication/authorization patterns
   - Database access patterns (ORM, raw queries, etc.)
   - Error handling patterns
   - Logging and monitoring approaches

3. **Technology Stack**:
   - Frameworks and libraries used
   - Database systems
   - External service integrations
   - Build and deployment tools

4. **Integration Points**:
   - Existing APIs the new feature must integrate with
   - Shared utilities or helper functions
   - Common components or services
   - Configuration management

5. **Code Organization**:
   - Layered architecture (if any)
   - Module boundaries
   - Dependency injection patterns
   - Testing patterns and conventions

Provide a comprehensive summary that will inform task generation.
  3. Collect and synthesize the codebase analysis
  4. Document key findings that will influence task generation:
    • Existing patterns to follow
    • APIs to integrate with
    • Shared components to use
    • Conventions to respect

Phase 3.5: Update Knowledge Graph

Goal: Persist agent discoveries into the Knowledge Graph for future reuse

Prerequisite: Phase 3 (Codebase Analysis) must have completed

Actions:

  1. Extract structured findings from agent analysis:

    • Parse the agent's comprehensive analysis output
    • Map findings to KG schema sections:
      • patterns.architectural: Design patterns discovered (Repository, Service Layer, etc.)
      • patterns.conventions: Coding conventions (naming, testing, etc.)
      • components: Code components identified (controllers, services, repositories, entities)
      • apis.internal: REST endpoints and API structure
      • apis.external: External service integrations
      • integration_points: Database, cache, message queues, etc.
  2. Construct KG update object:

    {
      "metadata": {
        "spec_id": "[extracted from folder]",
        "feature_name": "[extracted from folder]",
        "updated_at": "[current ISO timestamp]",
        "analysis_sources": [
          {
            "agent": "[agent-type-used]",
            "timestamp": "[current ISO timestamp]",
            "focus": "codebase analysis for task generation"
          }
        ]
      },
      "codebase_context": {
        "project_structure": { /* from agent analysis */ },
        "technology_stack": { /* from agent analysis */ }
      },
      "patterns": {
        "architectural": [ /* patterns discovered */ ],
        "conventions": [ /* conventions identified */ ]
      },
      "components": {
        "controllers": [ /* controllers found */ ],
        "services": [ /* services found */ ],
        "repositories": [ /* repositories found */ ],
        "entities": [ /* entities found */ ],
        "dtos": [ /* DTOs found */ ]
      },
      "apis": {
        "internal": [ /* endpoints discovered */ ],
        "external": [ /* external integrations */ ]
      },
      "integration_points": [ /* databases, caches, etc. */ ]
    }
  3. Update Knowledge Graph using spec-quality command:

    • Call: /specs:spec-quality [spec-folder] --update-kg-only
    • The spec-quality command will:
      • Create/update knowledge-graph.json with discovered patterns
      • Document components, APIs, and integration points
      • Update metadata.updated_at and metadata.analysis_sources
      • Generate summary report of changes
  4. Log and report:

    Knowledge Graph updated via spec-quality:
    - X architectural patterns documented
    - Y coding conventions identified
    - Z components catalogued (N controllers, M services, K repositories)
    - Q API endpoints documented
    - R integration points mapped
    
    Saved to: docs/specs/[ID]/knowledge-graph.json
  5. Verify update:

    • Read back the updated KG to confirm write succeeded
    • Check that metadata was updated correctly
    • If write failed, log warning but continue (non-blocking)

Note: If user chose to use cached KG in Phase 2.5, skip this phase and proceed directly to Phase 4.
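
Step 5's non-blocking verification can be sketched like this; the file name matches the document, while the helper name and the success criterion are assumptions:

```python
import json
import tempfile
from pathlib import Path

def write_and_verify_kg(spec_folder: Path, kg: dict) -> bool:
    """Write knowledge-graph.json, read it back, and confirm metadata.
    A failed write is logged as a warning but never blocks task generation."""
    path = spec_folder / "knowledge-graph.json"
    try:
        path.write_text(json.dumps(kg, indent=2))
        readback = json.loads(path.read_text())
        return "updated_at" in readback.get("metadata", {})
    except OSError:
        print(f"Warning: could not update {path}; continuing without KG")
        return False

with tempfile.TemporaryDirectory() as d:
    ok = write_and_verify_kg(Path(d), {"metadata": {"updated_at": "2026-03-07T00:00:00Z"}})
```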


Phase 4: Technical Task Decomposition

Goal: Break down requirements into atomic, executable tasks

Actions:

  1. If Knowledge Graph context is available (from Phase 2.5 cached or Phase 3.5 updated):
    • Review KG patterns: Architectural patterns to follow in each task
    • Review KG components: Existing components to reuse or integrate with
    • Review KG APIs: Internal/external APIs relevant to tasks
    • Review KG conventions: Naming, testing, and coding standards
    • Use KG context to enrich "Technical Context" section of each task
    • Example: "Follow existing Repository Pattern - extend JpaRepository"
    • Example: "Integrate with existing HotelService.searchHotels() method"

1.1. If Architecture context is available (from Phase 1.5):

  • Use the technology stack to inform implementation details in each task
  • Ensure tasks reference the correct frameworks, libraries, and patterns from docs/specs/architecture.md
  • If tasks require new infrastructure components not in the architecture document, flag them for ADR tracking using the adr-drafting skill
  • Example: "Use NestJS module pattern as defined in architecture.md"
  • Example: "Follow PostgreSQL with Drizzle ORM as specified in architecture"

1.2. If Ontology context is available (from Phase 1.5):

  • Use domain terms from docs/specs/ontology.md consistently in task titles, descriptions, and acceptance criteria
  • Ensure task descriptions use the canonical term from the glossary (avoid synonyms not defined in the ontology)
  • If a task introduces NEW domain concepts not in the ontology, add them to docs/specs/ontology.md and update the Last Updated date
  • Example: If ontology defines "Reservation" (not "Booking"), use "Reservation" in all task descriptions
  2. For each requirement group, create one or more tasks:

    • Each task should be implementable in 1-2 hours max
    • Tasks should have clear, testable completion criteria
    • Avoid tasks that span multiple user stories
  3. For each task, define:

    • Title: Concise, descriptive name (e.g., "User login functionality")
    • Description: What the task covers functionally
    • Acceptance Criteria: 2-4 testable conditions
    • Definition of Ready (DoR): Clear preconditions for starting (dependencies complete, technical context understood, blockers resolved)
    • Definition of Done (DoD): Clear completion conditions covering implementation, tests, and task handoff
    • Dependencies: List task IDs this depends on (if any)
  4. Map dependencies explicitly:

    • Identify which tasks must complete before others can start
    • CRITICAL: List ALL dependencies explicitly for each task BEFORE generating files
    • For each task, document: "This task depends on: [TASK-ID-1, TASK-ID-2] (or 'None')"
    • Identify potential circular dependencies (Task A depends on B, B depends on A)
    • Order tasks accordingly
  5. Validate dependencies before generating files:

    • Present the dependency structure in a clear table format:
    | Task ID | Title | Dependencies |
    |---------|-------|--------------|
    | TASK-001 | [Title] | None |
    | TASK-002 | [Title] | TASK-001 |
    | TASK-003 | [Title] | TASK-001, TASK-002 |
    | ... | ... | ... |
    • If there are circular dependencies, high coupling, or unclear ordering, use AskUserQuestion to confirm a fix
    • Otherwise proceed directly and include the dependency table in the generated summary
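
The circular-dependency check above can be sketched with Kahn's topological sort. The sketch assumes every listed dependency is itself a known task ID; `deps` maps each task to the tasks it depends on:

```python
from collections import deque

def has_cycle(deps: dict[str, list[str]]) -> bool:
    """Return True if the task dependency graph contains a cycle."""
    indegree = {task: len(prereqs) for task, prereqs in deps.items()}
    dependents = {task: [] for task in deps}
    for task, prereqs in deps.items():
        for prereq in prereqs:
            dependents[prereq].append(task)
    ready = deque(task for task, n in indegree.items() if n == 0)
    finished = 0
    while ready:
        done = ready.popleft()
        finished += 1
        for task in dependents[done]:
            indegree[task] -= 1
            if indegree[task] == 0:
                ready.append(task)
    # If some task never became ready, its prerequisites form a cycle.
    return finished != len(deps)
```

For example, `has_cycle({"TASK-001": [], "TASK-002": ["TASK-001"]})` is False, while two tasks that depend on each other yield True.
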
  6. Identify Test Requirements for Each Task: For each identified task, you MUST precisely define what needs to be tested. This analysis will guide the generation of the "Test Instructions" section in the task file.

    • Analyze involved classes/components: For each file that the task will create or modify, determine its complexity level and testing importance.

      • High Priority (Mandatory Tests): Classes with business logic, state management, validation rules, complex calculations, algorithms, interactions with external services (API, database). Examples: Service, UseCase, Controller/Handler, Validator, complex Entities.
      • Medium Priority (Recommended Tests): Utility classes, helpers, simple data transformers, repositories (if not auto-generated).
      • Low Priority (Optional Tests): Simple DTOs, POCOs, configurations.
    • Define behaviors to test: For each high-priority component, list specific test scenarios. Do not generate code, but describe the behavior.

      • Unit Tests: Verify the behavior of a single unit in isolation.
        • Example (user registration task): "Test that the register(userData) method in UserService calls UserRepository.save() only if the email is unique and valid."
        • Example (price calculation task): "Test that the calculateTotal(price, tax, discount) function returns the correct value for valid inputs, for zero taxes, and for maximum discounts."
      • Integration Tests: Verify the interaction between multiple components (e.g., controller, service, database).
        • Example (user registration task): "Test that a POST request to the /api/register endpoint with valid data saves a new user in the database and returns status 201."
        • Example (payment integration task): "Test that the complete 'checkout' flow correctly calls the mock payment gateway and handles a success response."
    • Suggest test files to create: For each source file requiring tests, indicate the corresponding test file according to language conventions.

      • Java: UserService.java → UserServiceTest.java
      • TypeScript/NestJS: user.service.ts → user.service.spec.ts
      • Python: user_service.py → test_user_service.py
    • Link Tests to Acceptance Criteria: Ensure that for each functional acceptance criterion, there is at least one test scenario that verifies it. This step is critical for guaranteeing traceability.

  6. Present task structure to the user only if major restructuring, optional tasks, or scope gaps were detected. Otherwise generate the files directly and summarize the resulting plan.

  7. CRITICAL: Add Mandatory Final Tasks — After generating all implementation tasks, ALWAYS add these two final tasks:

    TASK-N-1: End-to-End (e2e) Test Task (where N is the final task number)

    • Title: "End-to-End Testing for [Feature Name]"
    • Description: Comprehensive e2e testing of the entire feature workflow from the user's perspective
    • Dependencies: ALL previous implementation tasks (TASK-001 through TASK-N-2)
    • Purpose: Validate that all implemented components work together correctly in a real-world scenario
    • Test Instructions must include:
      • Complete user journey through the feature (happy path)
      • Error handling and edge cases across multiple components
      • Data persistence and retrieval end-to-end
      • Integration with external systems (if applicable)
      • Performance benchmarks (if specified in requirements)
    • Files to Create:
      • [test-dir]/[feature-name].e2e.spec.ts (TypeScript/NestJS)
      • [test-dir]/[feature-name].e2e.test.tsx (React)
      • [test-dir]/[FeatureName]E2ETest.java (Java/Spring)
      • [test-dir]/test_[feature_name]_e2e.py (Python)
    • Implementation Command: /specs:task-implementation --lang=[language] --task="docs/specs/[id]/tasks/TASK-N-1.md"

    TASK-N: Code Cleanup & Workspace Hygiene Task (FINAL task)

    • Title: "Code Cleanup & Workspace Hygiene for [Feature Name]"
    • Description: Final cleanup using specs-code-cleanup skill to remove dead code, debug logs, and temporary files
    • Dependencies: TASK-N-1 (e2e test task)
    • Purpose: Ensure production-ready code quality and clean workspace
    • Must perform:
      • Remove all debug logs (console.log, System.out.println, print() statements)
      • Remove debug comments (// DEBUG:, /* TODO: remove */)
      • Remove temporary files created during development
      • Remove unused imports and optimize imports
      • Run code formatter (Spotless for Java, Prettier for TypeScript, Black for Python)
      • Fix indentation and line length (>120 chars)
      • Remove obviously safe dead code (unused private methods, unreachable code)
      • Verify documentation headers are complete
      • Run final linting and tests
    • Implementation Command: /specs:code-cleanup --lang=[language] --task="docs/specs/[id]/tasks/TASK-N.md"
    • CRITICAL: This task MUST use the specs-code-cleanup skill. Reference the skill documentation for exact procedures.
  8. Verify task count and spec size:

    • Count total implementation tasks (excluding e2e and cleanup tasks)
    • If > 15 implementation tasks: Trigger rejection logic (see Task Count Limit section)
      • STOP task generation
      • Present warning message to user
      • Offer to return to brainstorm or continue anyway
    • If <= 15 tasks: Proceed with task file generation
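The dependency mapping and validation in steps 3 and 4 can be sketched with Python's standard-library `graphlib` (3.9+). This is a minimal illustration, not part of the command itself; the dict shape and task IDs are hypothetical:

```python
from graphlib import TopologicalSorter, CycleError

def validate_dependencies(tasks: dict[str, list[str]]) -> list[str]:
    """Return tasks in a valid execution order, or raise on a problem.

    `tasks` maps each task ID to the IDs it depends on, mirroring the
    "This task depends on: [...]" lines documented in step 3.
    """
    # Catch references to task IDs that were never defined.
    for task, deps in tasks.items():
        for dep in deps:
            if dep not in tasks:
                raise ValueError(f"{task} depends on unknown task {dep}")
    try:
        # static_order() emits every task after all of its dependencies.
        return list(TopologicalSorter(tasks).static_order())
    except CycleError as err:
        raise ValueError(f"Circular dependency detected: {err.args[1]}") from err

order = validate_dependencies({
    "TASK-001": [],
    "TASK-002": ["TASK-001"],
    "TASK-003": ["TASK-001", "TASK-002"],
})
```

A cycle such as TASK-A → TASK-B → TASK-A surfaces as a `ValueError` naming the tasks involved, which is exactly the situation step 4 escalates to AskUserQuestion.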

Phase 5: Task List Generation

Goal: Generate the task list markdown file and individual task files with technical details

Actions:

  1. Generate a unique task ID for each task (e.g., TASK-001, TASK-002)

  2. Extract feature name from folder (remove ID prefix, e.g., 001-hotel-search-aggregation → hotel-search-aggregation)

  3. Create tasks directory: docs/specs/[id]/tasks/

  4. For each task, create an individual task file with technical details from codebase analysis:

    IMPORTANT: Always include test files in "Files to Create" section for any class that contains business logic, state management, validation, or complex behavior. Test files should be listed alongside source files with clear descriptions of what to test (e.g., "test state transitions", "test validation logic").

---
id: TASK-XXX
title: "[Task Title]"
spec: [resolved spec file path]
lang: [java|spring|typescript|nestjs|react|python|general]
status: pending
dependencies: [TASK-YYY if applicable]
---

# TASK-XXX: [Task Title]

**Functional Description**: [Functional description of what this task covers]

## Acceptance Criteria

- [ ] [Functional criterion 1]
- [ ] [Functional criterion 2]
- [ ] [Functional criterion 3 if needed]

## Definition of Ready (DoR)

Before starting this task, ensure:
- [ ] Dependencies are completed or explicitly marked as not required.
- [ ] Technical context, patterns, and integration points are understood.
- [ ] Files to create/modify are identified and accessible.
- [ ] Required tooling, commands, and local prerequisites are available.
- [ ] Open questions or blockers have been resolved.

## Technical Context (from Codebase Analysis)

- **Existing Patterns to Follow**: [patterns from codebase analysis]
- **APIs to Integrate With**: [existing APIs or services]
- **Shared Components**: [existing utilities, services, or modules to use]
- **Conventions**: [coding conventions, naming, structure, framework-specific patterns]
- **Architecture Reference**: [relevant entries from docs/specs/architecture.md — stack, data layer, infrastructure]
- **Domain Terms**: [relevant terms from docs/specs/ontology.md — use canonical names consistently]

## Implementation Details (File names only, no code)

**Files to Create**:
- `[path/source/1]` - [brief description of its purpose]
- `[path/source/2]` - [brief description of its purpose]
- `[path/test/1]` - [e.g., user.service.spec.ts]
- `[path/test/2]` - [e.g., user.controller.integration.spec.ts]

**Files to Modify** (if applicable):
- `[path/existing/1]` - [what modifications are needed]

## Test Instructions

This section describes **what** to test, not **how** to implement test code.

**1. Mandatory Unit Tests:**
   - `[Source Class/File Name 1]`:
     - [ ] Verify that [method/unit] correctly handles [success scenario].
     - [ ] Verify that [method/unit] throws an exception/error when [error scenario].
     - [ ] Verify that the [specific business rule] logic works as described in the specification.
   - `[Source Class/File Name 2]`:
     - [ ] Test validation of [specific field] with valid, invalid, and borderline values.

**2. Mandatory Integration Tests:**
   - `[Flow/Component Name]`:
     - [ ] Verify that the `[API endpoint]` endpoint with valid data correctly interacts with the database and returns the expected response (e.g., status 201, correct body).
     - [ ] Verify that a call to the `[API endpoint]` endpoint with invalid data **does not** modify the database state and returns an appropriate error (e.g., status 400).

**3. Edge Cases and Error Conditions to Test:**
   - [ ] Send missing or malformed data.
   - [ ] Simulate timeout or failure of an external service.
   - [ ] Test race conditions (if relevant, e.g., double booking).
   - [ ] Test with high data loads or boundary values (e.g., maximum length strings).

**Test Acceptance Criteria**:
   - [ ] All tests described above are implemented and pass.
   - [ ] Test coverage for classes with business logic is >= 80%.

## Definition of Done (DoD)

This task is complete when:
- [ ] Functional description is implemented end-to-end.
- [ ] All acceptance criteria are met with evidence in code or tests.
- [ ] Tests in this task are implemented or updated and passing.
- [ ] Required files are created or modified following the documented technical context.
- [ ] Any handoff expectations for dependent tasks are documented.

**Dependencies**: [TASK-YYY if applicable, otherwise "None"]

**Implementation Command**:
/specs:task-implementation --lang=[language] --task="docs/specs/[id]/tasks/TASK-XXX.md"
  5. Create the task list index file: docs/specs/[id]/YYYY-MM-DD--feature-name--tasks.md
# Task List: [Feature Name]

**Specification**: [resolved spec file path]
**Generated**: [current date]
**Language**: [language]

## Codebase Analysis Summary

- **Project Structure**: [summary from codebase analysis]
- **Key Patterns**: [patterns identified]
- **Integration Points**: [APIs/services to integrate with]

## Task Index

| Task ID | Title | Technical Focus | Status | Dependencies |
|---------|-------|-----------------|--------|--------------|
| [TASK-001](tasks/TASK-001.md) | Task title | [files/components] | [ ] | - |
| [TASK-002](tasks/TASK-002.md) | Task title | [files/components] | [ ] | TASK-001 |
| ... | ... | ... | ... | ... |
| [TASK-N-1](tasks/TASK-N-1.md) | [E2E] End-to-End Testing | [e2e test files] | [ ] | TASK-001, TASK-002, ... |
| [TASK-N](tasks/TASK-N.md) | [CLEANUP] Code Cleanup & Hygiene | [all modified files] | [ ] | TASK-N-1 |

**Legend**:
- [E2E] = End-to-end test task (validates entire feature workflow)
- [CLEANUP] = Code cleanup task (uses specs-code-cleanup skill)

## Tasks

Each task has its own detailed file with technical context:
- [TASK-001](tasks/TASK-001.md): Task title
- [TASK-002](tasks/TASK-002.md): Task title
- ...
- [TASK-N-1](tasks/TASK-N-1.md): End-to-End Testing (validates entire feature)
- [TASK-N](tasks/TASK-N.md): Code Cleanup & Workspace Hygiene (final cleanup)

## Task Type Summary

- **Implementation Tasks** (TASK-001 to TASK-N-2): Core feature implementation
- **E2E Test Task** (TASK-N-1): End-to-end testing of complete workflow
- **Cleanup Task** (TASK-N): Final code quality and hygiene cleanup
  6. Save all files (including traceability-matrix.md from Phase 5.5)

Phase 5.5: Traceability Matrix Generation

Goal: Generate traceability matrix mapping requirements to tasks

Prerequisite: Phase 2 (Requirement Extraction with REQ-IDs) and Phase 4 (Technical Task Decomposition) completed

Actions:

  1. Map REQ-IDs to tasks:

    • For each REQ-ID assigned in Phase 2, identify which TASK-XXX covers it
    • A single requirement may be covered by multiple tasks
    • A single task may cover multiple requirements
  2. Generate traceability matrix file: Create docs/specs/[id]/traceability-matrix.md:

    # Traceability Matrix: [Feature Name]
    
    **Spec**: [resolved spec file path]
    **Generated**: YYYY-MM-DD
    **Last Updated**: YYYY-MM-DD
    
    ## Coverage Summary
    
    - **Requirements**: N total
    - **Covered by Tasks**: N/N (100%)
    - **With Tests**: N/N (X%)
    - **Implemented**: N/N (X%)
    
    ## Matrix
    
    | REQ ID | Requirement | Task(s) | Test Files | Code Files | Status |
    |--------|-------------|---------|------------|------------|--------|
    | REQ-001 | User can search by destination | TASK-001, TASK-003 | - | - | Pending |
    | REQ-002 | Results paginated | TASK-005 | - | - | Pending |
  3. Initialize matrix columns:

    • REQ ID: Identifier from Phase 2
    • Requirement: Brief description (first 50 chars)
    • Task(s): Comma-separated TASK-XXX list that cover this requirement
    • Test Files: Leave empty "-" (will be filled by task-review)
    • Code Files: Leave empty "-" (will be filled by task-review)
    • Status: "Pending" until implementation, then "Implemented" after task-review
  4. Calculate coverage summary:

    • Total requirements count (from REQ-IDs assigned)
    • Count requirements covered by at least one task (should be 100%)
    • Report coverage percentage in summary section

Phase 6: Review and Confirmation

Goal: Verify the task list quality

Actions:

  1. Present the generated task structure to the user:

    • Task list index: docs/specs/[id]/YYYY-MM-DD--feature-name--tasks.md
    • Individual tasks: docs/specs/[id]/tasks/TASK-XXX.md
  2. Ask for confirmation via AskUserQuestion:

    • Option A: Task list looks good, proceed to summary
    • Option B: Modify specific tasks (specify which)
    • Option C: Regenerate with different decomposition
  3. If modifications needed, return to Phase 3


Phase 7: Summary

Goal: Document what was accomplished

Actions:

  1. Mark all todos complete

  2. Summarize:

    • Specification Used: Path to input specification
    • Architecture: Loaded or created docs/specs/architecture.md — [stack summary]
    • Ontology: Loaded, created, or skipped docs/specs/ontology.md — [N terms]
    • Codebase Analyzed: Yes (language: [language])
    • Key Findings: [patterns, integration points, conventions]
    • Tasks Generated: Number of tasks created (breakdown: X implementation, 1 e2e test, 1 cleanup)
    • Dependency Structure: Brief overview of task dependencies
    • Spec Size Status: [If >15 tasks were detected: "WARNING: Spec exceeds 15-task limit. User chose to continue anyway" OR "Aborted: User returned to brainstorm to split specification"]
    • Output Files:
      • Task list: docs/specs/[id]/YYYY-MM-DD--feature-name--tasks.md
      • Individual tasks: docs/specs/[id]/tasks/TASK-XXX.md (with technical context)
      • E2E test task: docs/specs/[id]/tasks/TASK-N-1.md (depends on all implementation tasks)
      • Cleanup task: docs/specs/[id]/tasks/TASK-N.md (depends on e2e test task, uses specs-code-cleanup skill)
    • Next Step: Execute tasks using devkit.task-implementation command
    • Task Execution Order:
      1. Implementation tasks (TASK-001 to TASK-N-2) — in dependency order
      2. E2E test task (TASK-N-1) — validates entire feature workflow
      3. Cleanup task (TASK-N) — final code quality and hygiene cleanup
  3. Provide example commands for implementing tasks:

# Example: Implement a specific task
/specs:task-implementation --lang=[language] --task="docs/specs/001-feature-name/tasks/TASK-001.md"

# Example: List available tasks in the folder
ls docs/specs/001-feature-name/tasks/

Task Dependencies

When tasks have dependencies, the workflow is:

  1. Implement tasks with no dependencies first
  2. After completing a task, check if dependent tasks can now proceed
  3. Use the task list to track progress
  4. Each task file contains its dependencies in the frontmatter
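Steps 1 and 2 of this workflow amount to repeatedly asking "which tasks are unblocked now?". A minimal sketch, where the dependency dict mirrors the frontmatter `dependencies` field and the helper name is hypothetical:

```python
def ready_tasks(dependencies: dict[str, list[str]], done: set[str]) -> list[str]:
    """Return tasks that are not yet done and whose dependencies are all complete."""
    return [
        task
        for task, deps in dependencies.items()
        if task not in done and all(dep in done for dep in deps)
    ]

# Illustrative graph: TASK-002 and TASK-003 both wait on TASK-001.
deps = {
    "TASK-001": [],
    "TASK-002": ["TASK-001"],
    "TASK-003": ["TASK-001"],
    "TASK-004": ["TASK-002"],
}
```

Calling `ready_tasks(deps, set())` yields only the root task; after marking TASK-001 done, both TASK-002 and TASK-003 become eligible in the same pass.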

Example Dependency Flow

docs/specs/001-user-auth/tasks/TASK-001.md (no dependencies)
    ↓
docs/specs/001-user-auth/tasks/TASK-002.md (depends on TASK-001)
    ↓
docs/specs/001-user-auth/tasks/TASK-003.md (depends on TASK-001)
    ↓
docs/specs/001-user-auth/tasks/TASK-004.md (depends on TASK-002)
    ↓
docs/specs/001-user-auth/tasks/TASK-005.md (depends on TASK-003, TASK-004) [Implementation]
    ↓
docs/specs/001-user-auth/tasks/TASK-006.md (E2E Tests - depends on TASK-001 to TASK-005)
    ↓
docs/specs/001-user-auth/tasks/TASK-007.md (Cleanup - depends on TASK-006)

Language Parameter Effects

The --lang parameter affects only the Implementation Command in each task file:

| Language   | Implementation Command |
|------------|------------------------|
| java       | devkit.task-implementation --lang=java --task="docs/specs/[id]/tasks/TASK-XXX.md" |
| spring     | devkit.task-implementation --lang=spring --task="docs/specs/[id]/tasks/TASK-XXX.md" |
| typescript | devkit.task-implementation --lang=typescript --task="docs/specs/[id]/tasks/TASK-XXX.md" |
| nestjs     | devkit.task-implementation --lang=nestjs --task="docs/specs/[id]/tasks/TASK-XXX.md" |
| react      | devkit.task-implementation --lang=react --task="docs/specs/[id]/tasks/TASK-XXX.md" |
| python     | devkit.task-implementation --lang=python --task="docs/specs/[id]/tasks/TASK-XXX.md" |
| php        | devkit.task-implementation --lang=php --task="docs/specs/[id]/tasks/TASK-XXX.md" |
| general    | devkit.task-implementation --lang=general --task="docs/specs/[id]/tasks/TASK-XXX.md" |

Examples

Example 1: User Authentication (Spring Boot)

# Convert specification to tasks
/specs:spec-to-tasks --lang=spring --spec="docs/specs/001-user-auth/"

Output structure:

docs/specs/001-user-auth/
├── 2026-03-07--user-auth-specs.md
├── 2026-03-07--user-auth--tasks.md
└── tasks/
    ├── TASK-001.md (User registration endpoint)
    ├── TASK-002.md (Login endpoint)
    ├── TASK-003.md (Password reset)
    ├── TASK-004.md (JWT token management)
    ├── TASK-005.md (Session management)
    ├── TASK-006.md (End-to-End Testing)
    └── TASK-007.md (Code Cleanup & Workspace Hygiene)

Sample task file (TASK-001.md):

---
id: TASK-001
title: "User registration endpoint"
spec: docs/specs/001-user-auth/2026-03-07--user-auth-specs.md
lang: spring
dependencies: []
---

# TASK-001: User registration endpoint

**Functional Description**: Implement user registration with email validation

## Acceptance Criteria
- [ ] Users can register with a valid email and password.
- [ ] Duplicate email registrations are rejected.
- [ ] Passwords are persisted only after encoding.

## Definition of Ready (DoR)
- [ ] No prerequisite tasks are pending.
- [ ] Existing registration patterns and security conventions are understood.
- [ ] Required files and Spring test tooling are available locally.
- [ ] Validation and duplicate-email behavior are clear from the specification.

## Technical Context (from Codebase Analysis)
- **Existing Patterns to Follow**: REST controllers in src/main/java/.../controller/
- **APIs to Integrate With**: Existing UserRepository
- **Conventions**: @RestController, @Valid annotations

## Implementation Details (File names only, no code)

**Files to Create**:
- `src/main/java/.../controller/AuthController.java` - Controller for registration
- `src/main/java/.../service/UserService.java` - Business logic service
- `src/test/java/.../controller/AuthControllerTest.java` - Controller tests
- `src/test/java/.../service/UserServiceTest.java` - Service tests

**Files to Modify**:
- `src/main/java/.../config/SecurityConfig.java` - Add public endpoint

## Test Instructions

This section describes **what** to test, not **how** to implement test code.

**1. Mandatory Unit Tests:**
   - `UserService`:
     - [ ] Verify that the `register(userData)` method calls `UserRepository.save()` only if the email is unique.
     - [ ] Verify that `EmailAlreadyExistsException` is thrown when the email is already registered.
     - [ ] Verify that the password is encoded before saving.
   - `AuthController`:
     - [ ] Test email validation with valid, invalid, and missing formats.
     - [ ] Verify that the controller returns status 201 for successful registration.

**2. Mandatory Integration Tests:**
   - `Registration Flow`:
     - [ ] Verify that a POST request to the `/api/v1/users/register` endpoint with valid data saves a new user in the database and returns status 201.
     - [ ] Verify that a request with duplicate email returns status 409 and does not modify the database.

**3. Edge Cases and Error Conditions to Test:**
   - [ ] Send malformed email (e.g., without @).
   - [ ] Send too short password (e.g., less than 8 characters).
   - [ ] Send malformed JSON payload.

**Test Acceptance Criteria**:
   - [ ] All tests described above are implemented and pass.
   - [ ] Test coverage for UserService is >= 80%.

## Definition of Done (DoD)
- [ ] Registration flow is implemented end-to-end.
- [ ] All acceptance criteria are satisfied with passing tests.
- [ ] Controller, service, and security configuration changes follow existing conventions.
- [ ] The task file is updated so downstream tasks can rely on the registration endpoint.

**Implementation Command**:
/specs:task-implementation --lang=spring --task="docs/specs/001-user-auth/tasks/TASK-001.md"

Example 2: E-commerce Checkout (TypeScript)

/specs:spec-to-tasks --lang=typescript docs/specs/005-checkout-flow/

Example 3: API Integration (Python)

/specs:spec-to-tasks --lang=python docs/specs/010-payment-integration/

Example 4: Full Workflow (after task generation)

# Step 1: Generate tasks from specification
/specs:spec-to-tasks --lang=nestjs docs/specs/003-notification-system/

# Step 2: Implement tasks in dependency order
/specs:task-implementation --lang=nestjs --task="docs/specs/003-notification-system/tasks/TASK-001.md"
/specs:task-implementation --lang=nestjs --task="docs/specs/003-notification-system/tasks/TASK-002.md"

Integration with devkit.task-implementation

The task list generated by this command feeds directly into /specs:task-implementation:

# After generating tasks, implement each one:

# Option 1: Implement all tasks (sequentially)
/specs:task-implementation --lang=spring --task="docs/specs/001-user-auth/tasks/TASK-001.md"
# (complete, then)
/specs:task-implementation --lang=spring --task="docs/specs/001-user-auth/tasks/TASK-002.md"
# ...

# Option 2: Implement tasks in dependency order
# Start with tasks that have no dependencies
# Progress through the dependency graph

# Option 3: Pick specific task to work on
/specs:task-implementation --lang=spring --task="docs/specs/001-user-auth/tasks/TASK-003.md"

Todo Management

Throughout the process, maintain a todo list like:

[ ] Phase 1: Specification Analysis
[ ] Phase 1.5: Architecture & Ontology Definition
[ ] Phase 2: Requirement Extraction
[ ] Phase 3: Codebase Analysis
[ ] Phase 4: Technical Task Decomposition (including e2e and cleanup tasks)
[ ] Phase 5: Task List Generation
[ ] Phase 5.5: Traceability Matrix Generation
[ ] Phase 6: Review and Confirmation
[ ] Phase 7: Summary

Update the status as you progress through each phase.

CRITICAL: Phase 4 MUST generate:

  1. Implementation tasks (based on requirements)
  2. One e2e test task (depends on all implementation tasks)
  3. One cleanup task (depends on e2e test task, uses specs-code-cleanup skill)

Spec Size Check (Phase 4, step 8): If more than 15 implementation tasks are detected:

  1. STOP task generation immediately
  2. Present warning message explaining the spec is too large
  3. Recommend returning to /specs:brainstorm to split the idea into 2+ specifications
  4. Offer user option to continue anyway (at their own risk)
  5. If user chooses brainstorm: abort and suggest /specs:brainstorm command
  6. If user chooses continue: log warning in summary and proceed

Task Frontmatter Standardization

All task files follow a standardized frontmatter schema defined in hooks/task_schema.py. This ensures consistent metadata across all tasks.

Standard Status Workflow

Tasks use a standardized status workflow with automatic date tracking:

pending → in_progress → implemented → reviewed → completed
              ↓
          blocked (can return to in_progress)

| Status | Description | Dates Set |
|--------|-------------|-----------|
| pending | Initial state, ready to start | None |
| in_progress | Work has started | started_date |
| implemented | Coding complete, awaiting review | implemented_date |
| reviewed | Review passed, awaiting cleanup | reviewed_date |
| completed | Cleanup done, fully complete | completed_date, cleanup_date |
| superseded | Replaced by other tasks | None |
| optional | Not required for feature | None |
| blocked | Cannot proceed | None |

Auto-Status Management

Task status is automatically managed by Claude Code hooks:

| User Action | Automatic Status Update |
|-------------|-------------------------|
| Edit task file, check AC boxes | pending → in_progress → implemented |
| Check all DoD boxes | implemented → reviewed |
| Add Cleanup Summary section | reviewed → completed |

How it works:

  • Hooks monitor TASK-*.md files on every save
  • Checkboxes are analyzed to determine progress
  • Frontmatter status and date fields update automatically
  • No manual status management needed
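A rough sketch of the checkbox analysis described above. The authoritative logic lives in hooks/task_schema.py; the exact rules below are assumptions for illustration only:

```python
import re

def derive_status(task_md: str) -> str:
    """Infer a task status from the checkboxes in a TASK-*.md file.

    Hypothetical heuristic: the real hook's rules may differ in detail.
    """
    def boxes(heading_pattern: str) -> list[bool]:
        # Capture one "## Heading" section up to the next heading or EOF.
        match = re.search(rf"## {heading_pattern}.*?(?=\n## |\Z)", task_md, re.S)
        if not match:
            return []
        return [mark == "x" for mark in re.findall(r"- \[([ x])\]", match.group(0))]

    acceptance = boxes("Acceptance Criteria")
    dod = boxes(r"Definition of Done \(DoD\)")
    if "## Cleanup Summary" in task_md:
        return "completed"
    if dod and all(dod):
        return "reviewed"
    if acceptance and all(acceptance):
        return "implemented"
    if any(acceptance):
        return "in_progress"
    return "pending"

example = "\n".join([
    "# TASK-001: Demo",
    "",
    "## Acceptance Criteria",
    "- [x] Criterion 1",
    "- [x] Criterion 2",
    "",
    "## Definition of Done (DoD)",
    "- [ ] DoD item",
])
status = derive_status(example)  # all AC checked, DoD open
```

With every acceptance criterion checked but the DoD still open, this sample resolves to `implemented`, matching the middle row of the table above.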

Manual override (if needed): Simply edit the YAML frontmatter directly:

---
status: blocked  # or any valid status
---

Valid statuses: pending, in_progress, implemented, reviewed, completed, superseded, optional, blocked


Note: This command follows the "divide et impera" (divide and conquer) principle — splitting complex problems into simpler, manageable tasks. Each task can be implemented independently, with clear dependencies and acceptance criteria.
