Comprehensive developer toolkit providing reusable skills for Java/Spring Boot, TypeScript/NestJS/React/Next.js, Python, PHP, AWS CloudFormation, AI/RAG, DevOps, and more.
Converts a functional specification into a list of executable, trackable tasks. This is the bridge between WHAT (specification) and HOW (implementation).
This command reads a functional specification generated by /specs:brainstorm and converts it into atomic, executable tasks.
Input: docs/specs/[id]/YYYY-MM-DD--feature-name.md
Output:
- docs/specs/[id]/YYYY-MM-DD--feature-name--tasks.md
- docs/specs/[id]/tasks/TASK-XXX.md

Each task includes:
Idea → Functional Specification → Architecture & Ontology Definition → Tasks → Implementation → Review → Code Cleanup → Done
(brainstorm) (this: Phase 1.5) (this) (task-implementation) (task-review) (code-cleanup)

CRITICAL: If task decomposition produces more than 15 implementation tasks, the specification is too large for a single implementation cycle and MUST be rejected:
Detect oversized spec: After Phase 4 (Task Decomposition), count implementation tasks (excluding e2e and cleanup tasks)
If > 15 tasks:
Specification Too Large
This specification would generate X implementation tasks, which exceeds the maximum of 15.
The scope is too large for a single implementation cycle.
Recommended action:
1. Return to /specs:brainstorm
2. Split your idea into 2 or more smaller, focused specifications
3. Run /specs:spec-to-tasks for each specification separately
Example split strategy:
- Spec Part 1: Core functionality (must-have for initial release)
- Spec Part 2: Extended features (phase 2 or nice-to-have)
- Spec Part 3: Additional capabilities (future iterations)
This ensures each specification has a clear functional scope and manageable implementation scope.

Then re-run /specs:brainstorm with the split idea.

If <= 15 tasks: Proceed normally with task generation.
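The 15-task gate can be sketched in a few lines (an illustrative Python sketch; the task structure and function name are hypothetical, not part of this command):

```python
MAX_IMPLEMENTATION_TASKS = 15

def check_spec_size(tasks):
    """Reject specs whose decomposition exceeds the 15-task limit.

    `tasks` is a list of dicts with a 'type' key; e2e and cleanup
    tasks are excluded from the count, as the gate describes.
    """
    impl = [t for t in tasks if t.get("type") not in ("e2e", "cleanup")]
    if len(impl) > MAX_IMPLEMENTATION_TASKS:
        raise ValueError(
            f"Specification too large: {len(impl)} implementation tasks "
            f"exceed the maximum of {MAX_IMPLEMENTATION_TASKS}. "
            "Split the idea via /specs:brainstorm and re-run."
        )
    return impl
```

The gate intentionally ignores the two mandatory final tasks, so a spec that decomposes into 15 implementation tasks plus e2e and cleanup still passes.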
# Basic usage - specify spec file or folder
/specs:spec-to-tasks docs/specs/001-hotel-search-aggregation/
/specs:spec-to-tasks docs/specs/001-hotel-search-aggregation/2026-03-07--hotel-search-aggregation.md
# With language specification
/specs:spec-to-tasks --lang=spring docs/specs/001-user-auth/
/specs:spec-to-tasks --lang=typescript docs/specs/001-user-auth/
/specs:spec-to-tasks --lang=nestjs docs/specs/001-user-auth/
/specs:spec-to-tasks --lang=react docs/specs/001-user-auth/
/specs:spec-to-tasks --lang=python docs/specs/001-user-auth/
/specs:spec-to-tasks --lang=general docs/specs/001-user-auth/

| Argument | Required | Description |
|---|---|---|
| --lang | Recommended | Target language/framework: java, spring, typescript, nestjs, react, python, php, general. Required for codebase analysis and technical task generation |
| spec-file | No | Path to spec file or spec folder (e.g., docs/specs/001-feature-name/, docs/specs/001-feature-name/2026-03-07--feature-name.md, or legacy *-specs.md) |
The command will automatically gather context information when needed:
You are converting a functional specification into executable tasks. Follow a systematic approach: analyze requirements, identify dependencies, generate atomic tasks, and create a trackable task list.
Goal: Read and understand the functional specification
Input: $ARGUMENTS (spec file or folder path)
Actions:
Create todo list with all phases
Parse $ARGUMENTS to extract:
- --lang parameter (language/framework for implementation)
- spec-path (path to spec file or folder)

Determine the spec folder:
- If the input is a folder, resolve the YYYY-MM-DD--feature-name.md spec file, ignoring --tasks.md, decision-log.md, traceability-matrix.md, user-request.md, and brainstorming-notes.md

Read the resolved functional specification file
CRITICAL: Look for user context files in the spec folder:
- user-request.md - Original user request (from brainstorming - PRIMARY)
- brainstorming-notes.md - Notes from brainstorming session (SECONDARY)

Extract the spec ID from folder name (e.g., 001-hotel-search-aggregation)
Verify the specification exists and is valid
If file not found:
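The folder resolution described in Phase 1 can be sketched like this (a hypothetical helper; the file-name patterns and companion-file list follow the conventions above):

```python
import re
from pathlib import Path

# Companion files that must NOT be treated as the spec itself
IGNORED = {"decision-log.md", "traceability-matrix.md",
           "user-request.md", "brainstorming-notes.md"}
SPEC_PATTERN = re.compile(r"^\d{4}-\d{2}-\d{2}--[\w-]+\.md$")

def resolve_spec(path_str):
    """Return (spec_id, spec_file) for a spec file or spec folder path."""
    path = Path(path_str)
    folder = path if path.is_dir() else path.parent
    spec_id = folder.name  # e.g. "001-hotel-search-aggregation"
    if path.is_dir():
        candidates = [f for f in folder.glob("*.md")
                      if SPEC_PATTERN.match(f.name)
                      and not f.name.endswith("--tasks.md")
                      and f.name not in IGNORED]
        if not candidates:
            raise FileNotFoundError(f"No spec file found in {folder}")
        path = candidates[0]
    return spec_id, path
```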
Quality Pre-Check (Soft Gate):
- Check for a ## Clarifications section in the spec file (added by spec-review)
- If it is missing, suggest running the review first (e.g., /devkit.spec-review docs/specs/[id]/)

Goal: Ensure the project-level architecture and ontology documents exist and are consistent before generating tasks. This phase bridges the gap between WHAT (functional specification) and HOW (technical tasks).
Context: The architecture and ontology documents live at the docs/specs/ level (shared across all specifications):
- docs/specs/architecture.md — Formalizes technological and infrastructural choices
- docs/specs/ontology.md — Establishes common domain language (Ubiquitous Language)

Architecture document (docs/specs/architecture.md): Check if docs/specs/architecture.md exists:
If the file does NOT exist:
Inform the user: "No project architecture document found. Before generating tasks, we need to define the project architecture."
Use AskUserQuestion to gather architecture information through targeted questions:
Question 1 — Software Stack:
What is the primary technology stack for this project? (pre-fill options from the --lang parameter if provided)
Question 2 — Data Architecture:
What database and data management approach does the project use?

Question 3 — Infrastructure:
What hosting and infrastructure approach is used?

Create docs/specs/architecture.md using the gathered information:
# Project Architecture
**Created**: [current date YYYY-MM-DD]
**Last Updated**: [current date YYYY-MM-DD]
## Software Stack
| Component | Technology | Notes |
|-----------|-----------|-------|
| Language | [e.g., TypeScript] | [version if known] |
| Framework | [e.g., NestJS] | [version if known] |
| Key Libraries | [e.g., Drizzle ORM, Passport] | |
## Data Architecture
| Component | Technology | Notes |
|-----------|-----------|-------|
| Primary Database | [e.g., PostgreSQL] | |
| Caching | [e.g., Redis, none] | |
| ORM / Data Access | [e.g., Drizzle, Hibernate] | |
| Migrations | [e.g., Flyway, Drizzle Kit] | |
## Infrastructure
| Component | Technology | Notes |
|-----------|-----------|-------|
| Hosting | [e.g., AWS ECS] | |
| CI/CD | [e.g., GitHub Actions] | |
| Containerization | [e.g., Docker] | |
| Orchestration | [e.g., Kubernetes, none] | |
## Architecture Decisions
> Significant modifications to this architecture document must be tracked
> via **ADR (Architecture Decision Records)** using the `adr-drafting` skill.
>
> ADR location: `docs/architecture/adr/` (or project-specific convention)

Log the creation and present to the user for final confirmation
If the file ALREADY exists:
- Read docs/specs/architecture.md

Loaded project architecture:
- Stack: [language/framework]
- Database: [database]
- Infrastructure: [hosting]

If the --lang parameter conflicts with the architecture document (e.g., --lang=spring but architecture says TypeScript), warn the user via AskUserQuestion:
"The --lang parameter ([lang]) doesn't match the architecture document ([architecture stack]). Which should I use?"

Ontology document (docs/specs/ontology.md): Check if docs/specs/ontology.md exists:
If the file does NOT exist:
I identified the following domain terms from the specification:
- [Term 1]: [proposed definition]
- [Term 2]: [proposed definition]
- ...
Should I create the project ontology with these terms? You can also add or adjust terms.

If confirmed, create docs/specs/ontology.md:
# Project Ontology — Ubiquitous Language
**Created**: [current date YYYY-MM-DD]
**Last Updated**: [current date YYYY-MM-DD]
## Domain Glossary
| Term | Definition | Bounded Context |
|------|-----------|-----------------|
| [Term 1] | [Definition] | [Context where this term applies] |
| [Term 2] | [Definition] | [Context where this term applies] |
## Bounded Contexts
| Context | Description | Key Terms |
|---------|-------------|-----------|
| [Context 1] | [Description of this bounded context] | [Key terms] |
## Conceptual Mapping
[Relationships between key domain entities]

If the file ALREADY exists:
- Read docs/specs/ontology.md
- If the spec introduces new domain terms, add them and update the Last Updated date

After both documents are processed, produce a brief summary:
Architecture & Ontology Context:
- Architecture: [loaded/created] — [stack summary]
- Ontology: [loaded/created/skipped] — [N terms in glossary]
- Both documents will inform task generation in Phase 4.

Goal: Extract and organize requirements from the specification
Actions:
Analyze the specification for:
CRITICAL: Include technical requirements from context files:
Group related requirements:
CRITICAL: Verify against original user request:
- Compare against the user-request.md content already read in Phase 1

Present the extracted requirements structure (including technical requirements) to user for confirmation
Assign unique REQ-IDs to each extracted requirement:
REQ-001: [User story / requirement text]
REQ-002: [Business rule / requirement text]
REQ-003: [Acceptance criterion / requirement text]
...

Goal: Check if cached codebase analysis exists from previous runs
Prerequisite: Requires --lang parameter and spec folder path
Actions:
Check for existing Knowledge Graph:
- knowledge-graph.json in the spec folder

If Knowledge Graph exists:
- Read the metadata.updated_at timestamp
- Compute the age as current_time - updated_at

Present summary to user:
Found cached codebase analysis from X days ago:
- Y architectural patterns (Repository, Service Layer, etc.)
- Z components (N controllers, M services, K repositories)
- Q API endpoints documented
- Technology stack: [framework] [version]
The analysis is [fresh/getting stale/old].

Choose reuse strategy automatically unless the case is borderline:
Based on the chosen strategy:
If using cached KG:
- query knowledge-graph [spec-folder] patterns
- query knowledge-graph [spec-folder] components
- query knowledge-graph [spec-folder] apis

Check and Load Global Knowledge Graph (if exists):
- docs/specs/.global-knowledge-graph.json

Goal: Understand existing codebase to generate technically accurate tasks
Prerequisite: This phase requires --lang parameter to select appropriate agents
Actions:
Based on the --lang parameter, select the appropriate codebase exploration agent:

| Language | Agent |
|---|---|
| java / spring | developer-kit-java:java-software-architect-review |
| typescript / nestjs | developer-kit-typescript:typescript-software-architect-review |
| react | developer-kit-typescript:react-software-architect-review |
| python | developer-kit-python:python-software-architect-expert |
| php | developer-kit-php:php-software-architect-expert |
| general | developer-kit:general-code-explorer |
For java / spring:
Explore the Java/Spring Boot codebase to understand:
1. **Project Structure**:
- Package organization (domain-driven, layered, etc.)
- Build configuration (Maven/Gradle, pom.xml/build.gradle)
- Main application class and entry points
2. **Spring Patterns**:
- Spring Data JPA repositories and entity mapping
- Spring Security configuration and auth patterns
- REST controller conventions (@RestController, @RequestMapping)
- Service layer patterns (@Service, transaction management)
- Configuration properties (@ConfigurationProperties)
3. **Data Layer**:
- Entity/DTO patterns
- Database migrations (Flyway, Liquibase)
- ORM patterns (Hibernate)
4. **Testing Patterns**:
- Test directory structure
- Testing conventions (JUnit 5, Mockito)
- Integration test setup
Provide a summary that will inform task generation with Spring-specific context.

For typescript / nestjs:
Explore the TypeScript/NestJS codebase to understand:
1. **Project Structure**:
- Module organization
- TypeScript configuration (tsconfig.json)
- NestJS module structure
2. **NestJS Patterns**:
- Controller conventions (@Controller, @Get, @Post, etc.)
- Service layer patterns (@Injectable, providers)
- Module organization (@Module)
- Dependency injection setup
- Guards and interceptors
3. **Data Access**:
- ORM usage (TypeORM, Drizzle, Prisma)
- Repository patterns
- Database migrations
4. **Testing Patterns**:
- Jest configuration
- Unit vs integration test structure
Provide a summary that will inform task generation with NestJS-specific context.

For react:
Explore the React codebase to understand:
1. **Project Structure**:
- App organization (Next.js, Remix, or CRA/Vite)
- Routing structure
- Component directory layout
2. **React Patterns**:
- Component patterns (functional, hooks)
- State management (Context, Redux, Zustand, etc.)
- API communication (React Query, SWR, fetch)
- Form handling patterns
3. **Styling**:
- CSS approach (CSS modules, Tailwind, styled-components)
- Component library usage
4. **Testing Patterns**:
- Testing library (Jest, Vitest, React Testing Library)
- Component testing conventions
Provide a summary that will inform task generation with React-specific context.

For python:
Explore the Python codebase to understand:
1. **Project Structure**:
- Package organization
- requirements.txt, setup.py, or pyproject.toml
- Entry points (main.py, __main__.py)
2. **Python Patterns**:
- Web framework (Django, FastAPI, Flask)
- Data models (SQLAlchemy, Pydantic, Django ORM)
- API patterns (REST, GraphQL)
- Authentication patterns
3. **Testing Patterns**:
- pytest configuration
- Test directory structure
- Mocking conventions
Provide a summary that will inform task generation with Python-specific context.

For php:
Explore the PHP codebase to understand:
1. **Project Structure**:
- Composer-based project organization
- Laravel directory structure or custom MVC
2. **PHP Patterns**:
- Framework conventions (Laravel, Symfony)
- ORM usage (Eloquent, Doctrine)
- Controller patterns
- Routing and middleware
3. **Testing Patterns**:
- PHPUnit configuration
- Feature vs unit test structure
Provide a summary that will inform task generation with PHP-specific context.

For general:
Explore the codebase to understand:
1. **Project Structure**:
- Main directories and their purpose
- Configuration files (package.json, pom.xml, requirements.txt, etc.)
- Entry points and main modules
2. **Existing Patterns**:
- Data models/schemas used
- API patterns (REST, GraphQL, etc.)
- Authentication/authorization patterns
- Database access patterns (ORM, raw queries, etc.)
- Error handling patterns
- Logging and monitoring approaches
3. **Technology Stack**:
- Frameworks and libraries used
- Database systems
- External service integrations
- Build and deployment tools
4. **Integration Points**:
- Existing APIs the new feature must integrate with
- Shared utilities or helper functions
- Common components or services
- Configuration management
5. **Code Organization**:
- Layered architecture (if any)
- Module boundaries
- Dependency injection patterns
- Testing patterns and conventions
Provide a comprehensive summary that will inform task generation.

Goal: Persist agent discoveries into the Knowledge Graph for future reuse
Prerequisite: Phase 3 (Codebase Analysis) must have completed
Actions:
Extract structured findings from agent analysis:
- patterns.architectural: Design patterns discovered (Repository, Service Layer, etc.)
- patterns.conventions: Coding conventions (naming, testing, etc.)
- components: Code components identified (controllers, services, repositories, entities)
- apis.internal: REST endpoints and API structure
- apis.external: External service integrations
- integration_points: Database, cache, message queues, etc.

Construct KG update object:
{
"metadata": {
"spec_id": "[extracted from folder]",
"feature_name": "[extracted from folder]",
"updated_at": "[current ISO timestamp]",
"analysis_sources": [
{
"agent": "[agent-type-used]",
"timestamp": "[current ISO timestamp]",
"focus": "codebase analysis for task generation"
}
]
},
"codebase_context": {
"project_structure": { /* from agent analysis */ },
"technology_stack": { /* from agent analysis */ }
},
"patterns": {
"architectural": [ /* patterns discovered */ ],
"conventions": [ /* conventions identified */ ]
},
"components": {
"controllers": [ /* controllers found */ ],
"services": [ /* services found */ ],
"repositories": [ /* repositories found */ ],
"entities": [ /* entities found */ ],
"dtos": [ /* DTOs found */ ]
},
"apis": {
"internal": [ /* endpoints discovered */ ],
"external": [ /* external integrations */ ]
},
"integration_points": [ /* databases, caches, etc. */ ]
}

Update Knowledge Graph using spec-quality command:
- Run /specs:spec-quality [spec-folder] --update-kg-only
- This updates knowledge-graph.json with discovered patterns
- It refreshes metadata.updated_at and metadata.analysis_sources

Log and report:
Knowledge Graph updated via spec-quality:
- X architectural patterns documented
- Y coding conventions identified
- Z components catalogued (N controllers, M services, K repositories)
- Q API endpoints documented
- R integration points mapped
Saved to: docs/specs/[ID]/knowledge-graph.json

Verify update:
Note: If user chose to use cached KG in Phase 2.5, skip this phase and proceed directly to Phase 4.
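The Phase 2.5 freshness check (comparing metadata.updated_at to the current time) can be sketched as follows; the day thresholds here are illustrative assumptions, not values defined by this command:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def kg_freshness(spec_folder):
    """Return (age_days, label) for a cached knowledge-graph.json, or None."""
    kg_path = Path(spec_folder) / "knowledge-graph.json"
    if not kg_path.exists():
        return None
    kg = json.loads(kg_path.read_text())
    updated = datetime.fromisoformat(kg["metadata"]["updated_at"])
    age_days = (datetime.now(timezone.utc) - updated).days
    # Illustrative buckets -- the command words this as fresh/getting stale/old
    if age_days <= 7:
        label = "fresh"
    elif age_days <= 30:
        label = "getting stale"
    else:
        label = "old"
    return age_days, label
```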
Goal: Break down requirements into atomic, executable tasks
Actions:
1.1. If Architecture context is available (from Phase 1.5):
- Reference docs/specs/architecture.md for stack, data layer, and infrastructure choices
- Track significant deviations via an ADR using the adr-drafting skill

1.2. If Ontology context is available (from Phase 1.5):
- Use terms from docs/specs/ontology.md consistently in task titles, descriptions, and acceptance criteria
- If new domain terms emerge, add them to docs/specs/ontology.md and update the Last Updated date

For each requirement group, create one or more tasks:
For each task, define:
Map dependencies explicitly:
Validate dependencies before generating files:
| Task ID | Title | Dependencies |
|---|---|---|
| TASK-001 | [Title] | None |
| TASK-002 | [Title] | TASK-001 |
| TASK-003 | [Title] | TASK-001, TASK-002 |
| ... | ... | ... |
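Dependency validation before file generation can be sketched as a topological check (a hypothetical helper; task IDs map to lists of dependency IDs):

```python
def validate_dependencies(tasks):
    """Check that every dependency exists and the graph has no cycles.

    `tasks` maps a task ID to its list of dependency IDs.
    Returns a valid execution order (topological sort).
    """
    for tid, deps in tasks.items():
        unknown = [d for d in deps if d not in tasks]
        if unknown:
            raise ValueError(f"{tid} depends on unknown task(s): {unknown}")
    order, visiting, done = [], set(), set()
    def visit(tid):
        if tid in done:
            return
        if tid in visiting:
            raise ValueError(f"Circular dependency involving {tid}")
        visiting.add(tid)
        for dep in tasks[tid]:
            visit(dep)
        visiting.discard(tid)
        done.add(tid)
        order.append(tid)
    for tid in tasks:
        visit(tid)
    return order
```

The returned order is also the sequence in which /specs:task-implementation can safely be run task by task.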
Identify Test Requirements for Each Task: For each identified task, you must now precisely and mandatorily define what needs to be tested. This analysis will guide the generation of the "Test Instructions" section in the task file.
Analyze involved classes/components: For each file that the task will create or modify, determine its complexity level and testing importance.
Define behaviors to test: For each high-priority component, list specific test scenarios. Do not generate code, but describe the behavior.
Examples:
- "The register(userData) method in UserService calls UserRepository.save() only if the email is unique and valid."
- "The calculateTotal(price, tax, discount) function returns the correct value for valid inputs, for zero taxes, and for maximum discounts."
- "A POST to the /api/register endpoint with valid data saves a new user in the database and returns status 201."

Suggest test files to create: For each source file requiring tests, indicate the corresponding test file according to language conventions.
- UserService.java → UserServiceTest.java
- user.service.ts → user.service.spec.ts
- user_service.py → test_user_service.py

Link Tests to Acceptance Criteria: Ensure that for each functional acceptance criterion, there is at least one test scenario that verifies it. This step is critical for guaranteeing traceability.
Present task structure to the user only if major restructuring, optional tasks, or scope gaps were detected. Otherwise generate the files directly and summarize the resulting plan.
CRITICAL: Add Mandatory Final Tasks — After generating all implementation tasks, ALWAYS add these two final tasks:
TASK-N-1: End-to-End (e2e) Test Task (where N is the next task number)
- [test-dir]/[feature-name].e2e.spec.ts (TypeScript/NestJS)
- [test-dir]/[feature-name].e2e.test.tsx (React)
- [test-dir]/[FeatureName]E2ETest.java (Java/Spring)
- [test-dir]/test_[feature_name]_e2e.py (Python)

Implementation command: /specs:task-implementation --lang=[language] --task="docs/specs/[id]/tasks/TASK-N-1.md"

TASK-N: Code Cleanup & Workspace Hygiene Task (FINAL task)
- Remove leftover debug statements (console.log, System.out.println, print(...))
- Remove temporary comments (// DEBUG:, /* TODO: remove */)

Implementation command: /specs:code-cleanup --lang=[language] --task="docs/specs/[id]/tasks/TASK-N.md"

This task uses the specs-code-cleanup skill. Reference the skill documentation for exact procedures.

Verify task count and spec size:
Goal: Generate the task list markdown file and individual task files with technical details
Actions:
Generate a unique task ID for each task (e.g., TASK-001, TASK-002)
Extract feature name from folder (remove ID prefix, e.g., 001-hotel-search-aggregation → hotel-search-aggregation)
Create tasks directory: docs/specs/[id]/tasks/
For each task, create an individual task file with technical details from codebase analysis:
IMPORTANT: Always include test files in "Files to Create" section for any class that contains business logic, state management, validation, or complex behavior. Test files should be listed alongside source files with clear descriptions of what to test (e.g., "test state transitions", "test validation logic").
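Step 2's ID-prefix stripping and the TASK-XXX numbering can be sketched as (hypothetical helpers):

```python
import re

def feature_name(folder_name):
    """Strip the numeric ID prefix: 001-hotel-search-aggregation -> hotel-search-aggregation."""
    return re.sub(r"^\d+-", "", folder_name)

def task_id(n):
    """Zero-padded task identifier: 1 -> TASK-001."""
    return f"TASK-{n:03d}"
```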
---
id: TASK-XXX
title: "[Task Title]"
spec: [resolved spec file path]
lang: [java|spring|typescript|nestjs|react|python|general]
status: pending
dependencies: [TASK-YYY if applicable]
---
# TASK-XXX: [Task Title]
**Functional Description**: [Functional description of what this task covers]
## Acceptance Criteria
- [ ] [Functional criterion 1]
- [ ] [Functional criterion 2]
- [ ] [Functional criterion 3 if needed]
## Definition of Ready (DoR)
Before starting this task, ensure:
- [ ] Dependencies are completed or explicitly marked as not required.
- [ ] Technical context, patterns, and integration points are understood.
- [ ] Files to create/modify are identified and accessible.
- [ ] Required tooling, commands, and local prerequisites are available.
- [ ] Open questions or blockers have been resolved.
## Technical Context (from Codebase Analysis)
- **Existing Patterns to Follow**: [patterns from codebase analysis]
- **APIs to Integrate With**: [existing APIs or services]
- **Shared Components**: [existing utilities, services, or modules to use]
- **Conventions**: [coding conventions, naming, structure, framework-specific patterns]
- **Architecture Reference**: [relevant entries from docs/specs/architecture.md — stack, data layer, infrastructure]
- **Domain Terms**: [relevant terms from docs/specs/ontology.md — use canonical names consistently]
## Implementation Details (File names only, no code)
**Files to Create**:
- `[path/source/1]` - [brief description of its purpose]
- `[path/source/2]` - [brief description of its purpose]
- `[path/test/1]` - [e.g., user.service.spec.ts]
- `[path/test/2]` - [e.g., user.controller.integration.spec.ts]
**Files to Modify** (if applicable):
- `[path/existing/1]` - [what modifications are needed]
## Test Instructions
This section describes **what** to test, not **how** to implement test code.
**1. Mandatory Unit Tests:**
- `[Source Class/File Name 1]`:
- [ ] Verify that [method/unit] correctly handles [success scenario].
- [ ] Verify that [method/unit] throws an exception/error when [error scenario].
- [ ] Verify that the [specific business rule] logic works as described in the specification.
- `[Source Class/File Name 2]`:
- [ ] Test validation of [specific field] with valid, invalid, and borderline values.
**2. Mandatory Integration Tests:**
- `[Flow/Component Name]`:
- [ ] Verify that the `[API endpoint]` endpoint with valid data correctly interacts with the database and returns the expected response (e.g., status 201, correct body).
- [ ] Verify that a call to the `[API endpoint]` endpoint with invalid data **does not** modify the database state and returns an appropriate error (e.g., status 400).
**3. Edge Cases and Error Conditions to Test:**
- [ ] Send missing or malformed data.
- [ ] Simulate timeout or failure of an external service.
- [ ] Test race conditions (if relevant, e.g., double booking).
- [ ] Test with high data loads or boundary values (e.g., maximum length strings).
**Test Acceptance Criteria**:
- [ ] All tests described above are implemented and pass.
- [ ] Test coverage for classes with business logic is >= 80%.
## Definition of Done (DoD)
This task is complete when:
- [ ] Functional description is implemented end-to-end.
- [ ] All acceptance criteria are met with evidence in code or tests.
- [ ] Tests in this task are implemented or updated and passing.
- [ ] Required files are created or modified following the documented technical context.
- [ ] Any handoff expectations for dependent tasks are documented.
**Dependencies**: [TASK-YYY if applicable, otherwise "None"]
**Implementation Command**:
/specs:task-implementation --lang=[language] --task="docs/specs/[id]/tasks/TASK-XXX.md"

Create the task list file docs/specs/[id]/YYYY-MM-DD--feature-name--tasks.md:

# Task List: [Feature Name]
**Specification**: [resolved spec file path]
**Generated**: [current date]
**Language**: [language]
## Codebase Analysis Summary
- **Project Structure**: [summary from codebase analysis]
- **Key Patterns**: [patterns identified]
- **Integration Points**: [APIs/services to integrate with]
## Task Index
| Task ID | Title | Technical Focus | Status | Dependencies |
|---------|-------|-----------------|--------|--------------|
| [TASK-001](tasks/TASK-001.md) | Task title | [files/components] | [ ] | - |
| [TASK-002](tasks/TASK-002.md) | Task title | [files/components] | [ ] | TASK-001 |
| ... | ... | ... | ... | ... |
| [TASK-N-1](tasks/TASK-N-1.md) | End-to-End Testing | [e2e test files] | [ ] | TASK-001, TASK-002, ... |
| [TASK-N](tasks/TASK-N.md) | Code Cleanup & Hygiene | [all modified files] | [ ] | TASK-N-1 |
**Legend**:
- [E2E] = End-to-end test task (validates entire feature workflow)
- [CLEANUP] = Code cleanup task (uses specs-code-cleanup skill)
## Tasks
Each task has its own detailed file with technical context:
- [TASK-001](tasks/TASK-001.md): Task title
- [TASK-002](tasks/TASK-002.md): Task title
- ...
- [TASK-N-1](tasks/TASK-N-1.md): End-to-End Testing (validates entire feature)
- [TASK-N](tasks/TASK-N.md): Code Cleanup & Workspace Hygiene (final cleanup)
## Task Type Summary
- **Implementation Tasks** (TASK-001 to TASK-N-2): Core feature implementation
- **E2E Test Task** (TASK-N-1): End-to-end testing of complete workflow
- **Cleanup Task** (TASK-N): Final code quality and hygiene cleanup

Goal: Generate traceability matrix mapping requirements to tasks
Prerequisite: Phase 2 (Requirement Extraction with REQ-IDs) and Phase 4 (Technical Task Decomposition) completed
Actions:
Map REQ-IDs to tasks:
Generate traceability matrix file: Create docs/specs/[id]/traceability-matrix.md:
# Traceability Matrix: [Feature Name]
**Spec**: [resolved spec file path]
**Generated**: YYYY-MM-DD
**Last Updated**: YYYY-MM-DD
## Coverage Summary
- **Requirements**: N total
- **Covered by Tasks**: N/N (100%)
- **With Tests**: N/N (X%)
- **Implemented**: N/N (X%)
## Matrix
| REQ ID | Requirement | Task(s) | Test Files | Code Files | Status |
|--------|-------------|---------|------------|------------|--------|
| REQ-001 | User can search by destination | TASK-001, TASK-003 | - | - | Pending |
| REQ-002 | Results paginated | TASK-005 | - | - | Pending |

Initialize matrix columns:
Calculate coverage summary:
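The coverage summary can be computed from the matrix rows; a minimal sketch (the row structure is a hypothetical in-memory representation of the table above):

```python
def coverage_summary(matrix):
    """Compute coverage percentages for the traceability matrix.

    `matrix` maps REQ IDs to dicts with 'tasks', 'tests', 'status' keys.
    """
    total = len(matrix)
    covered = sum(1 for r in matrix.values() if r["tasks"])
    with_tests = sum(1 for r in matrix.values() if r["tests"])
    implemented = sum(1 for r in matrix.values() if r["status"] == "Implemented")
    pct = lambda n: round(100 * n / total) if total else 0
    return {
        "requirements": total,
        "covered": f"{covered}/{total} ({pct(covered)}%)",
        "with_tests": f"{with_tests}/{total} ({pct(with_tests)}%)",
        "implemented": f"{implemented}/{total} ({pct(implemented)}%)",
    }
```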
Goal: Verify the task list quality
Actions:
Present the generated task structure to the user:
- docs/specs/[id]/YYYY-MM-DD--feature-name--tasks.md
- docs/specs/[id]/tasks/TASK-XXX.md

Ask for confirmation via AskUserQuestion:
If modifications needed, return to Phase 3
Goal: Document what was accomplished
Actions:
Mark all todos complete
Summarize:
- docs/specs/architecture.md — [stack summary]
- docs/specs/ontology.md — [N terms]
- docs/specs/[id]/YYYY-MM-DD--feature-name--tasks.md
- docs/specs/[id]/tasks/TASK-XXX.md (with technical context)
- docs/specs/[id]/tasks/TASK-N-1.md (depends on all implementation tasks)
- docs/specs/[id]/tasks/TASK-N.md (depends on e2e test task, uses specs-code-cleanup skill)

Provide example commands for implementing tasks:
# Example: Implement a specific task
/specs:task-implementation --lang=[language] --task="docs/specs/001-feature-name/tasks/TASK-001.md"
# Example: List available tasks in the folder
ls docs/specs/001-feature-name/tasks/

When tasks have dependencies, the workflow is:
docs/specs/001-user-auth/tasks/TASK-001.md (no dependencies)
↓
docs/specs/001-user-auth/tasks/TASK-002.md (depends on TASK-001)
↓
docs/specs/001-user-auth/tasks/TASK-003.md (depends on TASK-001)
↓
docs/specs/001-user-auth/tasks/TASK-004.md (depends on TASK-002)
↓
docs/specs/001-user-auth/tasks/TASK-005.md (depends on TASK-003, TASK-004) [Implementation]
↓
docs/specs/001-user-auth/tasks/TASK-006.md (E2E Tests - depends on TASK-001 to TASK-005)
↓
docs/specs/001-user-auth/tasks/TASK-007.md (Cleanup - depends on TASK-006)

The --lang parameter affects only the Implementation Command in each task file:
| Language | Implementation Command |
|---|---|
| java | devkit.task-implementation --lang=java --task="docs/specs/[id]/tasks/TASK-XXX.md" |
| spring | devkit.task-implementation --lang=spring --task="docs/specs/[id]/tasks/TASK-XXX.md" |
| typescript | devkit.task-implementation --lang=typescript --task="docs/specs/[id]/tasks/TASK-XXX.md" |
| nestjs | devkit.task-implementation --lang=nestjs --task="docs/specs/[id]/tasks/TASK-XXX.md" |
| react | devkit.task-implementation --lang=react --task="docs/specs/[id]/tasks/TASK-XXX.md" |
| python | devkit.task-implementation --lang=python --task="docs/specs/[id]/tasks/TASK-XXX.md" |
| php | devkit.task-implementation --lang=php --task="docs/specs/[id]/tasks/TASK-XXX.md" |
| general | devkit.task-implementation --lang=general --task="docs/specs/[id]/tasks/TASK-XXX.md" |
# Convert specification to tasks
/specs:spec-to-tasks --lang=spring docs/specs/001-user-auth/

Output structure:
docs/specs/001-user-auth/
├── 2026-03-07--user-auth-specs.md
├── 2026-03-07--user-auth--tasks.md
└── tasks/
├── TASK-001.md (User registration endpoint)
├── TASK-002.md (Login endpoint)
├── TASK-003.md (Password reset)
├── TASK-004.md (JWT token management)
├── TASK-005.md (Session management)
├── TASK-006.md (End-to-End Testing)
└── TASK-007.md (Code Cleanup & Workspace Hygiene)

Sample task file (TASK-001.md):
---
id: TASK-001
title: "User registration endpoint"
spec: docs/specs/001-user-auth/2026-03-07--user-auth-specs.md
lang: spring
dependencies: []
---
# TASK-001: User registration endpoint
**Functional Description**: Implement user registration with email validation
## Acceptance Criteria
- [ ] Users can register with a valid email and password.
- [ ] Duplicate email registrations are rejected.
- [ ] Passwords are persisted only after encoding.
## Definition of Ready (DoR)
- [ ] No prerequisite tasks are pending.
- [ ] Existing registration patterns and security conventions are understood.
- [ ] Required files and Spring test tooling are available locally.
- [ ] Validation and duplicate-email behavior are clear from the specification.
## Technical Context (from Codebase Analysis)
- **Existing Patterns to Follow**: REST controllers in src/main/java/.../controller/
- **APIs to Integrate With**: Existing UserRepository
- **Conventions**: @RestController, @Valid annotations
## Implementation Details (File names only, no code)
**Files to Create**:
- `src/main/java/.../controller/AuthController.java` - Controller for registration
- `src/main/java/.../service/UserService.java` - Business logic service
- `src/test/java/.../controller/AuthControllerTest.java` - Controller tests
- `src/test/java/.../service/UserServiceTest.java` - Service tests
**Files to Modify**:
- `src/main/java/.../config/SecurityConfig.java` - Add public endpoint
## Test Instructions
This section describes **what** to test, not **how** to implement test code.
**1. Mandatory Unit Tests:**
- `UserService`:
- [ ] Verify that the `register(userData)` method calls `UserRepository.save()` only if the email is unique.
- [ ] Verify that `EmailAlreadyExistsException` is thrown when the email is already registered.
- [ ] Verify that the password is encoded before saving.
- `AuthController`:
- [ ] Test email validation with valid, invalid, and missing formats.
- [ ] Verify that the controller returns status 201 for successful registration.
**2. Mandatory Integration Tests:**
- `Registration Flow`:
- [ ] Verify that a POST request to the `/api/v1/users/register` endpoint with valid data saves a new user in the database and returns status 201.
- [ ] Verify that a request with duplicate email returns status 409 and does not modify the database.
**3. Edge Cases and Error Conditions to Test:**
- [ ] Send a malformed email (e.g., without an @).
- [ ] Send a password that is too short (e.g., fewer than 8 characters).
- [ ] Send a malformed JSON payload.
**Test Acceptance Criteria**:
- [ ] All tests described above are implemented and pass.
- [ ] Test coverage for UserService is >= 80%.
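To make the edge cases above concrete, here is a language-agnostic sketch of the validation rules they exercise. This is illustrative only — in the actual task these rules live in Bean Validation annotations on the Spring DTO, and the regex and length threshold below are assumptions:

```python
import re

# Rough approximation of an email format check; the real constraint
# would come from @Email / @NotBlank annotations on the request DTO.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_registration(payload: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the payload is valid."""
    errors = []
    email = payload.get("email", "")
    password = payload.get("password", "")
    if not EMAIL_RE.match(email):
        errors.append("email: malformed")
    if len(password) < 8:
        errors.append("password: must be at least 8 characters")
    return errors
```

A payload failing either rule should map to a 400 response in the controller tests above.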
## Definition of Done (DoD)
- [ ] Registration flow is implemented end-to-end.
- [ ] All acceptance criteria are satisfied with passing tests.
- [ ] Controller, service, and security configuration changes follow existing conventions.
- [ ] The task file is updated so downstream tasks can rely on the registration endpoint.
**Implementation Command**:
```shell
/specs:task-implementation --lang=spring --task="docs/specs/001-user-auth/tasks/TASK-001.md"
```

More usage examples:

```shell
/specs:spec-to-tasks --lang=typescript docs/specs/005-checkout-flow/
/specs:spec-to-tasks --lang=python docs/specs/010-payment-integration/

# Step 1: Generate tasks from specification
/specs:spec-to-tasks --lang=nestjs docs/specs/003-notification-system/

# Step 2: Implement tasks in dependency order
/specs:task-implementation --lang=nestjs --task="docs/specs/003-notification-system/tasks/TASK-001.md"
/specs:task-implementation --lang=nestjs --task="docs/specs/003-notification-system/tasks/TASK-002.md"
```

The task list generated by this command feeds directly into /specs:task-implementation:

```shell
# After generating tasks, implement each one:

# Option 1: Implement all tasks sequentially
/specs:task-implementation --lang=spring --task="docs/specs/001-user-auth/tasks/TASK-001.md"
# (complete, then)
/specs:task-implementation --lang=spring --task="docs/specs/001-user-auth/tasks/TASK-002.md"
# ...

# Option 2: Implement tasks in dependency order
# Start with tasks that have no dependencies
# Progress through the dependency graph

# Option 3: Pick a specific task to work on
/specs:task-implementation --lang=spring --task="docs/specs/001-user-auth/tasks/TASK-003.md"
```

Throughout the process, maintain a todo list like:
```
[ ] Phase 1: Specification Analysis
[ ] Phase 1.5: Architecture & Ontology Definition
[ ] Phase 2: Requirement Extraction
[ ] Phase 3: Codebase Analysis
[ ] Phase 4: Technical Task Decomposition (including e2e and cleanup tasks)
[ ] Phase 5: Task List Generation
[ ] Phase 5.5: Spec Size Check (reject if >15 tasks and recommend brainstorm)
[ ] Phase 6: Review and Confirmation
[ ] Phase 7: Summary
```

Update the status as you progress through each phase.
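Option 2 above (implementing in dependency order) can be derived mechanically from each task's dependency list. A minimal sketch using Python's standard `graphlib` module — the task IDs and dependencies below are hypothetical, not from a real spec:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: task ID -> tasks it depends on.
# In practice these would be read from each TASK-XXX.md file's frontmatter.
dependencies = {
    "TASK-001": [],
    "TASK-002": ["TASK-001"],
    "TASK-003": ["TASK-001"],
    "TASK-004": ["TASK-002", "TASK-003"],
}

# static_order() yields every task only after all of its dependencies.
order = list(TopologicalSorter(dependencies).static_order())
```

With this graph, `TASK-001` always comes first and `TASK-004` last; the order of `TASK-002` and `TASK-003` is unconstrained.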
CRITICAL: Phase 4 MUST generate the e2e and cleanup tasks in addition to the implementation tasks.

Phase 5.5 (Spec Size Check): If more than 15 implementation tasks are detected, reject the specification and direct the user back to the /specs:brainstorm command.

All task files follow a standardized frontmatter schema defined in hooks/task_schema.py. This ensures consistent metadata across all tasks.
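The Phase 5.5 size check amounts to a simple count. The task dictionaries and the `type` field below are assumed names for illustration, not the actual frontmatter schema:

```python
def check_spec_size(tasks: list[dict], limit: int = 15) -> bool:
    """Return True if the spec fits a single implementation cycle.

    e2e and cleanup tasks are excluded from the count, as in Phase 5.5.
    The "type" field name is an assumption, not the real schema field.
    """
    implementation = [t for t in tasks if t.get("type") not in ("e2e", "cleanup")]
    return len(implementation) <= limit

# 16 implementation tasks exceed the limit, so the spec must be rejected,
# while e2e and cleanup tasks do not count toward it.
assert not check_spec_size([{"type": "implementation"}] * 16)
assert check_spec_size([{"type": "implementation"}] * 15 + [{"type": "e2e"}, {"type": "cleanup"}])
```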
Tasks use a standardized status workflow with automatic date tracking:
```
pending → in_progress → implemented → reviewed → completed
              ↓
           blocked (can return to in_progress)
```

| Status | Description | Dates Set |
|---|---|---|
| pending | Initial state, ready to start | None |
| in_progress | Work has started | started_date |
| implemented | Coding complete, awaiting review | implemented_date |
| reviewed | Review passed, awaiting cleanup | reviewed_date |
| completed | Cleanup done, fully complete | completed_date, cleanup_date |
| superseded | Replaced by other tasks | None |
| optional | Not required for feature | None |
| blocked | Cannot proceed | None |
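The workflow above can be expressed as an allowed-transitions map. This is a sketch only, not the actual enforcement logic in hooks/task_schema.py:

```python
# Status workflow as an allowed-transitions map (illustrative sketch).
ALLOWED_TRANSITIONS = {
    "pending": {"in_progress", "blocked"},
    "in_progress": {"implemented", "blocked"},
    "implemented": {"reviewed"},
    "reviewed": {"completed"},
    "blocked": {"in_progress"},  # blocked can return to in_progress
}

def can_transition(current: str, new: str) -> bool:
    """Return True if moving from `current` to `new` follows the workflow."""
    return new in ALLOWED_TRANSITIONS.get(current, set())
```

Terminal states like completed, superseded, and optional have no outgoing transitions in this sketch.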
Task status is automatically managed by Claude Code hooks:
| User Action | Automatic Status Update |
|---|---|
| Edit task file, check AC boxes | pending → in_progress → implemented |
| Check all DoD boxes | implemented → reviewed |
| Add Cleanup Summary section | reviewed → completed |
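The automation table can be approximated in code. The heading names and matching rules below are illustrative guesses at the hook's behavior, not its actual implementation:

```python
def infer_status(task_md: str) -> str:
    """Infer a task's status from its checkbox state (sketch, not the real hook)."""
    if "## Cleanup Summary" in task_md:
        return "completed"
    # Crude split: DoD checkboxes live after the "Definition of Done" heading.
    before_dod, _, dod = task_md.partition("## Definition of Done")
    if dod and "- [ ]" not in dod:
        return "reviewed"          # all DoD boxes checked
    if "- [x]" in before_dod:
        # Some AC boxes checked -> in_progress; all checked -> implemented.
        return "in_progress" if "- [ ]" in before_dod else "implemented"
    return "pending"
```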
How it works:

- Hooks process `TASK-*.md` files on every save
- The `status` and date fields update automatically

Manual override (if needed): Simply edit the YAML frontmatter directly:
```yaml
---
status: blocked  # or any valid status
---
```

Valid statuses: pending, in_progress, implemented, reviewed, completed, superseded, optional, blocked
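Reading the status field back out of the frontmatter can be sketched in a few lines of plain Python (the real hooks may well use a proper YAML parser instead):

```python
def read_status(task_file_text: str) -> str:
    """Extract the status value from YAML frontmatter; default to pending."""
    lines = task_file_text.splitlines()
    if lines and lines[0].strip() == "---":
        for line in lines[1:]:
            if line.strip() == "---":
                break  # end of frontmatter
            key, _, value = line.partition(":")
            if key.strip() == "status":
                # Strip trailing inline comments like "# or any valid status"
                return value.split("#")[0].strip()
    return "pending"
```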
Note: This command follows the "divide et impera" (divide and conquer) principle — splitting complex problems into simpler, manageable tasks. Each task can be implemented independently, with clear dependencies and acceptance criteria.