giuseppe-trisciuoglio/developer-kit

Comprehensive developer toolkit providing reusable skills for Java/Spring Boot, TypeScript/NestJS/React/Next.js, Python, PHP, AWS CloudFormation, AI/RAG, DevOps, and more.


plugins/developer-kit-specs/commands/specs.task-implementation.md

---
description: Provides guided task implementation capability for executing specific tasks from a task list generated by spec-to-tasks. Use when implementing a specific task from a task list (use --task= prefix).
argument-hint: '[ --lang=java|spring|typescript|nestjs|react|python|general ] --task="docs/specs/XXX-feature/tasks/TASK-XXX.md"'
allowed-tools: Task, Read, Write, Edit, Bash, Grep, Glob, TodoWrite, AskUserQuestion
model: inherit
---

Task Implementation

Overview

You are helping a developer implement a specific task from a task list generated by /developer-kit-specs:specs.spec-to-tasks. This command follows a focused workflow optimized for single-task implementation.

Usage

/developer-kit-specs:specs.task-implementation [--lang=java|spring|typescript|nestjs|react|python|general] --task=task-name

Arguments

| Argument | Description |
| --- | --- |
| `$ARGUMENTS` | Combined arguments passed to the command |

Task Mode Detection

This command ONLY operates in Task Mode. If no --task= parameter is provided, inform the user that they should use the spec-driven flow (devkit.brainstorm → devkit.spec-to-tasks) or /developer-kit:devkit.feature-development for non-spec work.

# Task Mode examples:
"--task=User login"                   → Task mode
"--task=Password reset"               → Task mode
"--task=API endpoint implementation"  → Task mode

# Invalid (no --task=):
"Add user authentication"              → Error: generate a spec/task first or use devkit.feature-development
"--lang=spring Add REST API"         → Error: generate a spec/task first or use devkit.feature-development

Core Principles

  • Default to executing the task as written: Ask clarifying questions only when the task or codebase leaves real ambiguity; use AskUserQuestion only for genuine blockers, dependency conflicts, or multiple valid interpretations
  • Understand before acting: Read and comprehend task requirements first
  • Read files identified by agents: When launching agents, ask them to return lists of the most important files to read. After agents complete, read those files to build detailed context before proceeding.
  • Follow acceptance criteria and DoD: Implement exactly what is specified in the task and complete the documented Definition of Done
  • Use TodoWrite: Track all progress throughout
  • No time estimates: DO NOT provide or request time estimates or implementation timelines at any phase

Workflow: Task Implementation (11 Steps)

This command implements a specific task following a focused workflow:

  • T-1: Task Identification
  • T-2: Git State Check
  • T-3: Dependency Check
  • T-3.5: Knowledge Graph Validation
  • T-3.6: Contract Validation
  • T-3.7: Review Feedback Check
  • T-4: Implementation
  • T-5: Verification
  • T-6: Task Completion
  • T-6.5: Update Specs Quality
  • T-6.6: Spec Deviation Check

T-1: Task Identification

Goal: Extract and validate the task from the task list

Actions:

  1. Parse $ARGUMENTS to extract parameters:

    • Extract value from --task= parameter (can be task ID or file path)
    • Extract value from --spec= parameter (spec folder path)
    • Extract value from --lang= parameter (optional)
    • Trim whitespace
  2. Support two argument formats:

    • Format 1 (direct path): --task=docs/specs/001-feature/tasks/TASK-001.md
    • Format 2 (spec+task): --spec=docs/specs/001-feature --task=TASK-001

    If Format 2 is used, construct the task file path as: {spec}/tasks/{task}.md

  3. Find the task file:

    • If the task value is a file path (contains "/" or ends with ".md"): use it directly
    • If --spec is provided with task ID: construct path {spec}/tasks/{task}.md
    • Otherwise, look for task files matching docs/specs/*/tasks/TASK-*.md
    • Match task by ID (e.g., "TASK-001") or title in the task frontmatter
    • If multiple found, ask user which one via AskUserQuestion
    • If none found, error with helpful message
  4. Read the task file and extract:

    • Task ID and title from YAML frontmatter
    • Description
    • Acceptance criteria
    • Definition of Ready (DoR) and Definition of Done (DoD) sections
    • Dependencies from YAML frontmatter
    • Reference to specification file
    • If either section is missing, stop and instruct the user to update the task document before implementation

T-2: Git State Check

Goal: Validate DoR items related to repository state and baseline quality before implementation

Actions:

  1. Check for uncommitted changes FIRST:
    • Run git status --porcelain to check for uncommitted changes
    • If there are untracked files, staged changes, or unstaged modifications:
      • Present the user with the specific changes found
      • Ask via AskUserQuestion: "You have uncommitted changes. Please commit them before starting task implementation. Do you want to proceed anyway (not recommended) or commit first?"
      • If user chooses to commit first, stop and wait for them to run git commit
      • If user proceeds anyway, document the uncommitted state and continue at user's risk
    • Only proceed with implementation if no uncommitted changes exist OR user explicitly accepts the risk
  2. Run lint and tests (after git check passes):
    • Treat these checks as explicit DoR validation for local readiness and baseline health.
    • Detect available lint/test commands by checking package.json (scripts), Makefile, pom.xml, build.gradle, pyproject.toml, composer.json, etc.
    • Run lint first (e.g., npm run lint, make lint, ./mvnw checkstyle:check, ruff check .), then tests (e.g., npm test, make test, ./mvnw test -q, pytest, php artisan test)
    • If lint or tests fail:
      • Show the failing output to the user
      • Ask via AskUserQuestion: "Lint/tests are failing. It is recommended to fix them before starting task implementation to avoid compounding issues. Do you want to fix them first (recommended) or proceed anyway?"
      • If user chooses to fix first, stop and wait
      • If user proceeds anyway, document the failures and continue at user's risk
    • If no recognizable lint/test commands are found, skip this step and note it
    • Only proceed if lint/tests pass OR user explicitly accepts the risk
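
The build-file detection in step 2 amounts to a lookup table. The sketch below is an assumption about one reasonable mapping; the actual commands should always be confirmed against the project's own scripts.

```python
# Illustrative mapping of detected build files to lint/test commands (T-2).
BUILD_FILE_COMMANDS = {
    "package.json":   {"lint": "npm run lint",            "test": "npm test"},
    "Makefile":       {"lint": "make lint",               "test": "make test"},
    "pom.xml":        {"lint": "./mvnw checkstyle:check", "test": "./mvnw test -q"},
    "build.gradle":   {"lint": "./gradlew check",         "test": "./gradlew test"},
    "pyproject.toml": {"lint": "ruff check .",            "test": "pytest"},
    "composer.json":  {"lint": "composer lint",           "test": "php artisan test"},
}

def detect_commands(files_in_repo: list[str]) -> dict:
    """Return lint/test commands for the first recognized build file, else {}."""
    for build_file, commands in BUILD_FILE_COMMANDS.items():
        if build_file in files_in_repo:
            return commands
    return {}  # no recognizable commands: skip this step and note it
```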

T-3: Dependency Check

Goal: Validate DoR items related to task dependencies

Actions:

  1. Check if task has dependencies and compare them against the task's DoR section
  2. If dependencies exist:
    • Read task list to check completion status
    • If dependencies not completed:
      • Ask user via AskUserQuestion: proceed anyway or complete dependencies first
  3. Record whether dependency-related DoR items are satisfied before implementation begins
  4. Proceed based on user decision

T-3.5: Knowledge Graph Validation

Goal: Validate DoR technical context against actual codebase state

Actions:

  1. Extract spec_id from task:

    • Read spec field from task frontmatter (e.g., docs/specs/001-feature-name/2026-03-07--feature-name.md)
    • Extract spec folder path
  2. Check for Knowledge Graph:

    • Look for knowledge-graph.json in the spec folder
    • If not found, skip validation with note: "No Knowledge Graph found, cannot validate dependencies"
  3. If Knowledge Graph exists:

    • Extract task requirements from "Technical Context" or task description:
      • Component references (services, repositories, controllers to use)
      • API endpoints to integrate with
      • Patterns to follow
    • Call knowledge-graph validation skill:
      /knowledge-graph validate [spec-folder] {
        components: [/* component IDs from task */],
        apis: [/* API IDs from task */],
        patterns: [/* pattern names from task */]
      }
  4. Process validation results:

    • If valid (no errors):
      • Proceed with implementation, all dependencies exist
      • Log: "Task validated against Knowledge Graph: All dependencies exist"
    • If errors found:
      • Present errors to user:
        Task validation failed against Knowledge Graph:
        Errors:
        - Component UserService not found in codebase
        - API /api/v1/payments not found, may need implementation
        
        Options:
        - "Proceed anyway" (implement missing components)
        - "Update task" (remove/fix invalid dependencies)
        - "Cancel" (fix task first, then implement)
      • Ask user via AskUserQuestion how to proceed
    • If warnings found:
      • Present warnings but allow continuation:
        Task validation warnings:
        - Pattern "Circuit Breaker" differs slightly from codebase convention
        - API /api/v1/hotels may need rate limiting
        
        Proceed with implementation?
  5. If validation passed or user chose to proceed:

    • Load KG context for implementation (optional):
      • Query KG for components the task will use
      • Query KG for API endpoints to integrate with
      • Use this context during implementation
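
The validation in steps 3–4 is essentially a set-membership check against the graph. A hedged sketch, assuming knowledge-graph.json holds `components` and `apis` lists of objects with an `id` field (the actual schema may differ):

```python
# Sketch of the T-3.5 check: referenced component/API IDs vs. the Knowledge Graph.
def validate_against_kg(kg: dict, components: list[str], apis: list[str]) -> list[str]:
    """Return one error string per referenced component/API missing from the graph."""
    known_components = {c["id"] for c in kg.get("components", [])}
    known_apis = {a["id"] for a in kg.get("apis", [])}
    errors = []
    for c in components:
        if c not in known_components:
            errors.append(f"Component {c} not found in codebase")
    for a in apis:
        if a not in known_apis:
            errors.append(f"API {a} not found, may need implementation")
    return errors
```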

T-3.6: Contract Validation

Goal: Verify DoR contract expectations are satisfied by completed dependencies (provides)

Prerequisite: T-3: Dependency Check completed

Actions:

  1. Extract expects from current task:

    • Read expects field from task YAML frontmatter
    • Parse the list of expected items (files, classes, functions, methods)
    • Example format:
      expects:
        - file: "src/main/java/com/hotels/search/poc/search/domain/entity/Search.java"
          symbols:
            - "Search"
            - "SearchStatus"
            - "SearchCriteria"
        - file: "src/main/java/com/hotels/search/poc/search/domain/valueobject/SearchId.java"
          symbols:
            - "SearchId"
  2. For each expected item, verify it exists:

    • Check if the file exists
    • Check if the symbols are declared in the file (using Grep or Read)
    • If file doesn't exist or symbols not found → dependency contract not satisfied
  3. Check provides from completed dependencies:

    • For each completed dependency, read its provides section
    • Match expected items against provided items
    • Track which expectations are satisfied by which dependencies
  4. If any expectations are NOT satisfied:

    • Present errors to user:
      Contract Validation Failed:
      Task expects the following but they are not provided by completed dependencies:
      
      Expected: Search entity with symbols [Search, SearchStatus]
      Provided by: None (no completed dependency provides this)
      
      Expected: SearchId value object
      Provided by: None
      
      Options:
      - "Proceed anyway" (implement missing contracts)
      - "Cancel" (complete dependencies first)
    • Ask user via AskUserQuestion how to proceed
    • If user chooses to proceed, log the unsatisfied contracts
  5. If all expectations ARE satisfied:

    • Log: "Contract validation passed: All expectations satisfied by completed dependencies"
    • Proceed to implementation with contract context

Note: This phase ensures that task dependencies provide what the current task expects at the symbol level, not just at the task completion level.
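
The symbol-level check in step 2 can be sketched as below. It assumes file contents are already loaded into memory; a real run would use the Read/Grep tools instead, and a plain substring match is only a crude stand-in for finding an actual declaration.

```python
# Sketch of the T-3.6 contract check over expects entries.
def unsatisfied_expectations(expects: list[dict], file_contents: dict) -> list[str]:
    """Return one message per expected file or symbol that is missing."""
    problems = []
    for item in expects:
        path, symbols = item["file"], item.get("symbols", [])
        content = file_contents.get(path)
        if content is None:
            problems.append(f"Expected file missing: {path}")
            continue
        for symbol in symbols:
            if symbol not in content:  # stand-in for a Grep for the declaration
                problems.append(f"Symbol {symbol} not declared in {path}")
    return problems
```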


T-3.7: Review Feedback Check (Ralph Loop Support)

Goal: Load and address issues from previous review iterations (Ralph Loop mode)

Actions:

  1. Check for existing review file:

    • Construct the review file path by replacing .md with --review.md in the task filename
    • Example: if the task is docs/specs/XXX/tasks/TASK-007.md, look for docs/specs/XXX/tasks/TASK-007--review.md
  2. If review file exists:

    • Read the review file content
    • Extract review_status from YAML frontmatter:
      • needs_fix → Issues need to be addressed
      • passed → Review passed, no fixes needed
      • partial → Some issues fixed, others remain
    • Extract issues list from the review content
    • Each issue should have:
      • file: File affected
      • line: Line number (optional)
      • severity: blocking, warning, suggestion
      • description: What needs to be fixed
      • fix_applied: Whether this was already fixed
  3. Process review feedback:

    • If review_status is passed: Log "Review passed previously, proceeding with implementation"
    • If review_status is needs_fix or partial:
      • Filter issues where fix_applied: false or not marked as resolved
      • Group issues by file
      • Log: "Found X issues from previous review that need fixing"
      • Display issues to user with severity levels
  4. Update implementation plan:

    • Add fixing review issues as part of T-4 Implementation phase
    • Prioritize blocking issues first, then warning, then suggestion
    • Use TodoWrite to track: "[ ] Fix X review issues (Y blocking, Z warnings)"
  5. If no review file exists:

    • Normal flow - no previous review to consider
    • Log: "No previous review found, proceeding with fresh implementation"

Review File Format (TASK-XXX--review.md):

---
review_date: 2026-04-07
review_status: needs_fix
task_id: TASK-007
overall_assessment: partial
---

## Critical Issues

### Issue 1: [TITLE]
- **File**: `src/main/java/com/example/Service.java`
- **Line**: 45
- **Severity**: blocking
- **Category**: logic_error
- **Description**: Null pointer exception risk
- **Fix Applied**: false

## Summary

- **Total Issues**: 5
- **Blocking**: 2
- **Warnings**: 2
- **Suggestions**: 1

Naming Convention:

  • Task file: TASK-007.md
  • Review file: TASK-007--review.md (double dash -- before review)
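
The naming rule reduces to a single string rewrite; a minimal sketch (the helper name is illustrative):

```python
# T-3.7 naming rule: TASK-007.md -> TASK-007--review.md (double dash before "review").
def review_file_path(task_path: str) -> str:
    """Replace the trailing .md of a task file path with --review.md."""
    assert task_path.endswith(".md"), "task files always end in .md"
    return task_path[: -len(".md")] + "--review.md"
```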

T-4: Implementation

Goal: Implement the task according to acceptance criteria and the documented DoD, and fix any review issues from previous iterations

Actions:

  1. Read acceptance criteria and DoD items from the task
  2. If review feedback exists (from T-3.7):
    • Address all blocking issues first
    • Address warning issues second
    • Address suggestion issues if time permits
    • For each issue:
      • Read the affected file
      • Apply the necessary fix
      • Mark the issue as resolved in your tracking
    • Update the review file with fix_applied: true for resolved issues (optional, T-6.6 will handle this)
  3. Focus implementation on meeting criteria:
    • Implement description requirements
    • Ensure all acceptance criteria are met
    • Satisfy each DoD item with concrete evidence in code, tests, or task metadata
  4. Use appropriate sub-agents based on --lang
  5. Write clean, focused code
  6. Ralph Loop Mode: If fixing review issues, focus only on the identified issues unless the user explicitly requests additional changes

T-5: Verification

Goal: Verify implementation meets acceptance criteria and DoD

Actions:

  1. Run tests (if available)
  2. Verify each acceptance criterion is met
  3. Verify each DoD item is satisfied; do not mark the task complete until every documented DoD item has evidence
  4. If criteria are not met, iterate on implementation

T-6: Task Completion

Goal: Update task list and summarize

Actions:

  1. Confirm completion before marking the task done:

    • Ensure all DoD items are satisfied and documented
    • If the task has explicit DoR/DoD checklists, update them to reflect the validated state
  2. Auto-update task status:

    • Check all boxes in the Acceptance Criteria section (`[ ]` → `[x]`)
    • Status automatically updates to implemented via hooks
    • The implemented_date field is set automatically
  3. Summarize:

    • What was implemented
    • Acceptance criteria and DoD items verified
    • Next Step: Run /developer-kit-specs:specs.task-review to verify the implementation, then /developer-kit-specs:specs-code-cleanup to finalize

T-6.5: Update Specs Quality

Goal: Automatically update Knowledge Graph and enrich tasks after implementation

Prerequisite: T-6: Task Completion completed successfully

Actions:

  1. Call spec-sync-context command:

    /developer-kit-specs:specs.spec-sync-context [spec-folder] --task=[TASK-ID]
  2. The spec-sync-context command will:

    • Extract provides from implemented files
    • Update Knowledge Graph with new provides entries
    • Enrich related tasks with updated technical context
    • Generate summary report of changes
  3. Log the update:

    Specs Quality updated:
    - TASK-001 provides: Search entity (Search, SearchStatus), SearchId value object
    - Knowledge Graph updated: docs/specs/[ID]/knowledge-graph.json
    - Dependent tasks enriched with new technical context
  4. If update fails:

    • Log warning but continue (non-blocking)
    • Note: "Failed to update specs quality, continuing without context sync"

Note: This ensures that:

  • Future tasks can validate their expectations against actual implementations
  • Knowledge Graph stays synchronized with codebase
  • Related tasks benefit from updated technical context

T-6.6: Spec Deviation Check

Goal: Detect if implementation deviated from specification

Prerequisite: T-6: Task Completion completed

Actions:

  1. Compare task acceptance criteria with actual implementation:

    • Review what was specified vs. what was actually implemented
    • Check for new features added beyond the task scope
    • Identify any acceptance criteria that were changed or dropped
  2. If deviations found:

    • Log each deviation with context and rationale
    • Append deviation entry to docs/specs/[id]/decision-log.md:
      ## DEC-NNN: Spec Deviation - [Brief Description]
      - **Date**: YYYY-MM-DD
      - **Task**: TASK-XXX
      - **Phase**: Implementation
      - **Context**: [Why the deviation occurred]
      - **Decision**: [What changed from the spec]
      - **Alternatives Considered**: [Could we have followed spec instead?]
      - **Impact**: [Files/components affected, spec sections no longer accurate]
      - **Decided By**: user / AI recommendation
  3. Ask user via AskUserQuestion:

    Options:
    - "Yes, sync now" - Run `/developer-kit-specs:specs.spec-sync-with-code` to update specification
    - "Later" - Skip for now, remember to sync later
    - "Skip (deviation intentional)" - No sync needed, deviation is documented
  4. If no deviations:

    • Log "Implementation matches spec, no sync needed"
    • Proceed to summary
  5. For spec-driven chat sessions outside this command:

    • If the assistant used docs/specs/[id]/ as working context and the session clarified or changed what should be built, update the affected spec artifacts before concluding.
    • Treat specification files as deliverables whenever they directly shaped implementation decisions, not as read-only references.
    • If no spec changes are required, state that explicitly with a short rationale.

Task File Format

Each task file in docs/specs/[ID]/tasks/TASK-*.md must have a YAML frontmatter with the following structure:

---
id: TASK-001
title: "Task Title"
spec: docs/specs/[ID-feature]/2026-03-07--feature-name.md
lang: spring
status: pending
dependencies: []
provides:
  - file: "src/main/java/com/example/Task.java"
    symbols:
      - "Task"
      - "TaskStatus"
    type: "entity"
expects:
  - file: "src/main/java/com/example/Other.java"
    symbols:
      - "OtherService"
    type: "service"
---

Frontmatter Fields:

| Field | Required | Description |
| --- | --- | --- |
| id | Yes | Unique task identifier (e.g., TASK-001) |
| title | Yes | Human-readable task title |
| spec | Yes | Reference to the specification file |
| lang | Yes | Programming language/framework (spring, typescript, nestjs, general, etc.) |
| status | No | Current status: pending, in_progress, implemented, reviewed, completed, superseded, optional, blocked |
| started_date | No | Date work started (YYYY-MM-DD) |
| implemented_date | No | Date implementation finished (YYYY-MM-DD) |
| reviewed_date | No | Date review completed (YYYY-MM-DD) |
| completed_date | No | Date cleanup completed (YYYY-MM-DD) |
| cleanup_date | No | Date code cleanup finished (YYYY-MM-DD) |
| dependencies | No | Array of task IDs this task depends on |
| provides | No | What this task makes available (see format below) |
| expects | No | What this task requires from dependencies |
| complexity | No | Complexity score (0-100) |
| optional | No | Boolean - if true, task is optional |
| parent_task | No | Parent task ID (for subtasks) |
| supersedes | No | Array of task IDs this task supersedes |

provides/expects Format:

  • file: Relative path to the source file
  • symbols: Array of symbols (classes, interfaces, functions, methods) provided/required
  • type: Type of component (entity, value-object, service, repository, controller, function, etc.)

Standardized Status Workflow

Status values MUST be one of the following (auto-managed by hooks):

pending → in_progress → implemented → reviewed → completed
              ↓
          blocked (can return to in_progress)

Status Transitions:

  • pending: Initial state, no dates required
  • in_progress: Work started → sets started_date
  • implemented: Coding complete → sets implemented_date
  • reviewed: Review passed → sets reviewed_date
  • completed: Cleanup done → sets completed_date and cleanup_date
  • superseded: Task replaced by others
  • optional: Task is not required
  • blocked: Task cannot proceed (temporary state)

Status updates happen automatically when you:

  • Edit the task file and save changes
  • Check/uncheck checkboxes in the task content
  • The hooks detect changes and update frontmatter accordingly
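
The workflow diagram above can be read as a transition table. The sketch below is inferred from the diagram and the transition list, not from the hooks' actual implementation, so the exact allowed transitions are an assumption:

```python
# Status workflow as a transition table (inferred from the diagram above).
ALLOWED_TRANSITIONS = {
    "pending":     {"in_progress", "blocked", "superseded", "optional"},
    "in_progress": {"implemented", "blocked"},
    "blocked":     {"in_progress"},  # blocked is a temporary state
    "implemented": {"reviewed"},
    "reviewed":    {"completed"},
    "completed":   set(),            # terminal
}

def can_transition(current: str, target: str) -> bool:
    """True if the status workflow permits moving from current to target."""
    return target in ALLOWED_TRANSITIONS.get(current, set())
```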

Language/Framework Selection

Parse $ARGUMENTS to detect the optional --lang parameter:

  • --lang=spring or --lang=java: Use Java/Spring Boot specialized agents
  • --lang=typescript or --lang=ts: Use TypeScript specialized agents
  • --lang=nestjs: Use NestJS specialized agents
  • --lang=react: Use React frontend specialized agents
  • --lang=python or --lang=py: Use Python specialized agents
  • --lang=general or no flag: Use general-purpose agents (default)

Decision Logging Protocol

Throughout the workflow, whenever a non-trivial choice is made between alternatives, append a DEC entry to docs/specs/[id]/decision-log.md.

When to log decisions:

Task Mode T-4 (Implementation): When implementation requires non-trivial choices:

  • Choosing between multiple implementation options
  • Deviating from task specification
  • Scope changes or requirement drops
  • Adding features not in original spec

Decision Log Format:

## DEC-NNN: [Decision Title]
- **Date**: YYYY-MM-DD
- **Task**: TASK-XXX
- **Phase**: Implementation
- **Context**: [Why this decision was necessary]
- **Decision**: [What was decided]
- **Alternatives Considered**: [What was rejected and why]
- **Impact**: [Files/components affected]
- **Decided By**: user / AI recommendation accepted

How to find spec folder:

  • Extract from task frontmatter spec: field
  • If no decision-log.md exists, create it with header table
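
Assigning the next DEC-NNN number means scanning existing entries; a small sketch, assuming the `## DEC-NNN:` heading format shown above (the helper itself is hypothetical):

```python
# Find the next DEC-NNN identifier from an existing decision-log.md body.
import re

def next_dec_id(decision_log_text: str) -> str:
    """Return DEC-001 for an empty log, else one past the highest existing number."""
    numbers = [int(n) for n in re.findall(r"^## DEC-(\d+):", decision_log_text, re.MULTILINE)]
    return f"DEC-{(max(numbers) + 1 if numbers else 1):03d}"
```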

Todo Management

Throughout the process, maintain a todo list like:

[ ] T-1: Task Identification
[ ] T-2: Git State Check
[ ] T-3: Dependency Check
[ ] T-3.5: Knowledge Graph Validation
[ ] T-3.6: Contract Validation
[ ] T-3.7: Review Feedback Check
[ ] T-4: Implementation
[ ] T-5: Verification
[ ] T-6: Task Completion
[ ] T-6.5: Update Specs Quality
[ ] T-6.6: Spec Deviation Check

Update the status as you progress through each phase.


Examples

After generating tasks with /developer-kit-specs:specs.spec-to-tasks, implement individual tasks:

# Implement a specific task from the task list
/developer-kit-specs:specs.task-implementation --lang=spring --task="User login"
/developer-kit-specs:specs.task-implementation --lang=typescript --task="Password reset"
/developer-kit-specs:specs.task-implementation --lang=nestjs --task="JWT token generation"
/developer-kit-specs:specs.task-implementation --lang=react --task="Login form UI"
/developer-kit-specs:specs.task-implementation --lang=python --task="API endpoint implementation"

# Task Mode with general agents
/developer-kit-specs:specs.task-implementation --lang=general --task="Database schema update"

# Using task ID
/developer-kit-specs:specs.task-implementation --lang=spring --task="TASK-001"
/developer-kit-specs:specs.task-implementation --lang=typescript --task="TASK-002"

Expected Output:

# Successful execution - Task identified
→ Task identified: TASK-001 "User login"
→ Checking git status...
→ Validating dependencies...
→ Knowledge Graph validation: ✓ All dependencies exist
→ Contract validation: ✓ All expectations satisfied
→ Task implemented successfully
→ Next: Run `/developer-kit-specs:specs.task-review` then `/developer-kit-specs:specs-code-cleanup`

# Or with errors - Dependency not met
→ Task identified: TASK-002 "Password reset"
→ Dependency check failed:
  - TASK-001 not completed (required by this task)
  Options:
  - [1] Proceed anyway
  - [2] Complete dependencies first
  - [3] Cancel

# Or with invalid task
/developer-kit-specs:specs.task-implementation --task="Invalid task"
→ Error: Task "Invalid task" not found
  Suggestion: Use --task=TASK-XXX or provide full path to task file
