giuseppe-trisciuoglio/developer-kit

Comprehensive developer toolkit providing reusable skills for Java/Spring Boot, TypeScript/NestJS/React/Next.js, Python, PHP, AWS CloudFormation, AI/RAG, DevOps, and more.

plugins/developer-kit-specs/commands/specs.task-review.md

---
description: Verifies that implemented tasks meet their specifications and pass code review. Use when validating a completed task from devkit.task-implementation against its specification.
argument-hint: '[ --lang=java|spring|typescript|nestjs|react|php|python|general ] [ --task="docs/specs/XXX-feature/tasks/TASK-XXX.md" ]'
allowed-tools: Task, Read, Write, Edit, Bash, Grep, Glob, TodoWrite, AskUserQuestion
model: inherit
---

Task Review

Verifies that implemented tasks meet specifications and pass code quality standards. This is the bridge between implementation and verification.

Overview

This command reviews a completed task to ensure:

  1. Task Implementation: The task was implemented according to its specifications
  2. Spec Compliance: The implementation aligns with the functional specification
  3. Code Quality: The code passes code review standards
  4. Acceptance Criteria: All acceptance criteria are met
  5. Definition of Done: The documented completion conditions are fully satisfied

Input: docs/specs/[id]/tasks/TASK-XXX.md (from devkit.spec-to-tasks)
Output: Review report with pass/fail status and findings

Workflow Position

Idea → Functional Specification → Tasks → Implementation → Review → Code Cleanup → Done
         (brainstorm)    (spec-to-tasks) (task-implementation) (task-review) (code-cleanup)

Usage

# Review a specific task
/specs:task-review docs/specs/001-user-auth/tasks/TASK-001.md

# With language specification for code review
/specs:task-review --lang=spring docs/specs/001-user-auth/tasks/TASK-001.md
/specs:task-review --lang=typescript docs/specs/001-user-auth/tasks/TASK-001.md
/specs:task-review --lang=nestjs docs/specs/001-user-auth/tasks/TASK-001.md
/specs:task-review --lang=react docs/specs/001-user-auth/tasks/TASK-001.md
/specs:task-review --lang=python docs/specs/001-user-auth/tasks/TASK-001.md
/specs:task-review --lang=general docs/specs/001-user-auth/tasks/TASK-001.md

Arguments

| Argument | Required | Description |
|----------|----------|-------------|
| `--lang` | No | Target language/framework: `java`, `spring`, `typescript`, `nestjs`, `react`, `php`, `python`, `general` |
| `--no-confirm` | No | Skip user confirmation and the automatic callback. Use when running inside automated loops (e.g., Ralph Loop). |
| `task-file-path` | Yes | Path to the task file (e.g., `docs/specs/001-user-auth/tasks/TASK-001.md`) |

Current Context

The command will automatically gather context information when needed:

  • Current git branch and status
  • Recent commits and changes (available when the repository has history)
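
A minimal sketch of this best-effort context gathering; `gather_git_context` and `_git` are hypothetical helper names, not part of the command itself:

```python
import subprocess

def _git(*args):
    """Run a git command and return its stdout, or None when git
    or repository history is unavailable."""
    try:
        result = subprocess.run(
            ["git", *args], capture_output=True, text=True, check=True
        )
        return result.stdout.strip()
    except (OSError, subprocess.CalledProcessError):
        return None

def gather_git_context():
    """Collect branch, working-tree status, and recent commits (best effort)."""
    return {
        "branch": _git("rev-parse", "--abbrev-ref", "HEAD"),
        "status": _git("status", "--short"),
        "recent_commits": _git("log", "--oneline", "-5"),
    }
```

Each field is simply None when the repository has no history, matching the "when needed / when available" behavior described above.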

You are reviewing an implemented task to verify it meets specifications and passes code review. Follow a systematic approach: analyze the task, verify implementation, check spec compliance, and perform code review.

Core Principles

  • Thorough verification: Check every acceptance criterion and every DoD item
  • Spec alignment: Ensure implementation matches functional requirements
  • Code quality: Verify code passes review standards
  • Evidence-based: Base findings on actual code, not assumptions
  • Use TodoWrite: Track all progress throughout
  • No time estimates: DO NOT provide or request time estimates

Phase 1: Task Analysis

Goal: Read and understand the task and its specifications

Input: $ARGUMENTS (task file path or --spec and --task parameters)

Actions:

  1. Create todo list with all phases

  2. Parse $ARGUMENTS to extract:

    • --lang parameter (language/framework for code review)
    • task-file-path (path to task file) OR --spec and --task parameters
  3. Support two argument formats:

    • Format 1 (direct path): /specs:task-review docs/specs/001-feature/tasks/TASK-001.md
    • Format 2 (spec+task): /specs:task-review --spec=docs/specs/001-feature --task=TASK-001

    If --spec and --task are provided, construct the task file path as: {spec}/tasks/{task}.md

  4. Read the task file (docs/specs/[id]/tasks/TASK-XXX.md)

  5. Extract:

    • Task ID and title
    • Description
    • Acceptance criteria
    • Definition of Ready (DoR) and Definition of Done (DoD) sections
    • Dependencies
    • Reference to specification file
    • If either the DoR or DoD section is missing, stop the review and require the task document to be updated before continuing
  6. Read the functional specification file (from task's spec reference)

  7. Verify both files exist and are valid

  8. If files not found, ask user for correct path via AskUserQuestion
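
The argument handling in steps 2–3 can be sketched as follows; `resolve_task_file` is a hypothetical name, and the exact tokenization and the `general` default are assumptions:

```python
import re

def resolve_task_file(arguments: str) -> dict:
    """Extract --lang and the task file path from $ARGUMENTS,
    supporting both the direct-path and --spec/--task formats."""
    stripped = arguments.replace('"', "")
    lang = re.search(r"--lang=(\S+)", stripped)
    spec = re.search(r"--spec=(\S+)", stripped)
    task = re.search(r"--task=(\S+)", stripped)

    if spec and task:
        # Format 2: construct {spec}/tasks/{task}.md
        task_file = f"{spec.group(1).rstrip('/')}/tasks/{task.group(1)}.md"
    else:
        # Format 1: the first non-flag token is the task file path
        tokens = [t for t in stripped.split() if not t.startswith("--")]
        task_file = tokens[0] if tokens else None

    return {"lang": lang.group(1) if lang else "general", "task_file": task_file}
```

For example, `--spec=docs/specs/001-feature --task=TASK-001` resolves to `docs/specs/001-feature/tasks/TASK-001.md`.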


Phase 2: Implementation Verification

Goal: Verify the task was implemented according to specifications

Actions:

  1. Identify what files/components were created for this task:

    • Check git diff to see what changed since task was started
    • Look for new files matching the task scope
    • Review implementation details
  2. Verify implementation matches task description:

    • Compare implemented functionality with task description
    • Check if all described features are present
    • Identify any deviations or missing parts
  3. Document findings:

    • What was implemented vs. what was specified
    • Any deviations from the original plan
    • Additional changes that were made
  4. Read decision-log.md if exists:

    • Check for decision-log.md in the spec folder (extract from task frontmatter spec: field)
    • If file exists, read any DEC entries related to this task (TASK-XXX)
    • Use decision context to understand WHY deviations were made
    • Reference specific decision IDs when explaining deviations in findings

Phase 3: Acceptance Criteria and DoD Validation

Goal: Verify all acceptance criteria and DoD items are met

Actions:

  1. List all acceptance criteria from the task file

  2. List all DoD items from the task file

  3. For each acceptance criterion and DoD item:

    • Identify code/tests/review evidence that validate it
    • Check if tests exist and pass when relevant
    • Verify the requirement is actually met
  4. Mark each item as:

    • ✅ Met (with evidence)
    • ❌ Not met (with explanation)
    • ⚠️ Partially met (with details) — treated as FAILED for review_status
  5. Update traceability-matrix.md:

    • Read docs/specs/[id]/traceability-matrix.md (extract from task frontmatter spec: field)
    • For this task (TASK-XXX), update the matrix:
      • Fill in "Test Files" column with test file names created for this task
      • Fill in "Code Files" column with source files created for this task
      • Update "Status" to "Implemented" for REQ-IDs covered by this task
    • Save updated matrix back to docs/specs/[id]/traceability-matrix.md
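
The matrix update in step 5 could look like this sketch, which assumes a hypothetical column layout of `| REQ-ID | Task | Test Files | Code Files | Status |`:

```python
def update_matrix_rows(lines, task_id, test_files, code_files):
    """Fill in Test Files / Code Files and mark Status as Implemented
    for every matrix row referencing task_id.
    Assumed columns: | REQ-ID | Task | Test Files | Code Files | Status |"""
    updated = []
    for line in lines:
        if task_id in line and line.lstrip().startswith("|"):
            cells = [c.strip() for c in line.strip().strip("|").split("|")]
            cells[2] = ", ".join(test_files)   # Test Files column
            cells[3] = ", ".join(code_files)   # Code Files column
            cells[4] = "Implemented"           # Status column
            line = "| " + " | ".join(cells) + " |"
        updated.append(line)
    return updated
```

The real matrix layout comes from devkit.spec-to-tasks, so the column indices here are illustrative only.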

Phase 4: Specification Compliance Check

Goal: Ensure implementation aligns with functional specification

Actions:

  1. Review the functional specification (already loaded in Phase 1) to verify compliance
  2. Compare implementation against:
    • User stories and use cases
    • Business rules
    • Integration requirements
    • Data requirements
  3. Identify any gaps or misalignments
  4. Check if implementation introduces any out-of-scope changes

Phase 5: Code Review

Goal: Verify code passes quality standards

Actions:

  1. Based on --lang parameter, select appropriate code review agent:
| Language | Code Review Agent |
|----------|-------------------|
| java | developer-kit-java:spring-boot-code-review-expert |
| spring | developer-kit-java:spring-boot-code-review-expert |
| typescript | developer-kit:general-code-reviewer |
| nestjs | developer-kit-typescript:nestjs-code-review-expert |
| react | developer-kit:general-code-reviewer |
| python | developer-kit-python:python-code-review-expert |
| php | developer-kit-php:php-code-review-expert |
| general | developer-kit:general-code-reviewer |
  2. Launch the code review agent to analyze the implemented code
  3. Collect review findings
  4. Categorize issues by severity:
    • Critical (must fix)
    • Major (should fix)
    • Minor (recommended fix)
    • Info (suggestions)
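
The agent selection above amounts to a lookup table; falling back to the general reviewer for unrecognized languages is an assumption in this sketch, not documented behavior:

```python
# Maps the --lang value to the code review agent from the table above.
REVIEW_AGENTS = {
    "java": "developer-kit-java:spring-boot-code-review-expert",
    "spring": "developer-kit-java:spring-boot-code-review-expert",
    "typescript": "developer-kit:general-code-reviewer",
    "nestjs": "developer-kit-typescript:nestjs-code-review-expert",
    "react": "developer-kit:general-code-reviewer",
    "python": "developer-kit-python:python-code-review-expert",
    "php": "developer-kit-php:php-code-review-expert",
    "general": "developer-kit:general-code-reviewer",
}

def select_agent(lang: str) -> str:
    # Assumption: unknown or missing languages use the general reviewer.
    return REVIEW_AGENTS.get(lang, REVIEW_AGENTS["general"])
```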

Phase 6: Review Report Generation

Goal: Create comprehensive review report

Actions:

  1. Compile all findings into a review report
  2. Determine review_status using this rule:
    • PASSED: ALL acceptance criteria ✅ AND ALL DoD items ✅ AND no critical/major code issues
    • FAILED: ANY criterion is ❌ or ⚠️, OR ANY DoD item is ❌ or ⚠️, OR critical/major code issues found
  3. Generate the report in markdown format with YAML frontmatter:
---
review_status: PASSED   # or FAILED
task_id: TASK-XXX
task_title: [Task Title]
spec_file: [spec-file.md]
review_date: [ISO date]
language: [language]
summary:
  implementation: COMPLETE|INCOMPLETE
  acceptance_criteria: ALL_MET|FAILED
  definition_of_done: ALL_MET|FAILED
  spec_compliance: COMPLIANT|DEVIATIONS|NON_COMPLIANT
  code_review: PASSED|ISSUES|FAILED
critical_issues: N   # required if FAILED
major_issues: N      # required if FAILED
minor_issues: N
---

# Task Review Report: TASK-XXX

**Task**: [Task Title]
**Specification**: [spec-file.md]
**Reviewed**: [date]
**Language**: [language]

## Summary

| Category | Status |
|----------|--------|
| Implementation | ✅ Complete / ⚠️ Partial / ❌ Incomplete |
| Acceptance Criteria | ✅ All Met / ⚠️ Partial / ❌ Failed |
| Definition of Done | ✅ All Met / ⚠️ Partial / ❌ Failed |
| Spec Compliance | ✅ Compliant / ⚠️ Deviations / ❌ Non-compliant |
| Code Review | ✅ Passed / ⚠️ Issues Found / ❌ Failed |

**Overall Result**: ✅ PASSED / ❌ FAILED

## Implementation Verification

- **Implemented**: [description]
- **Deviations**: [list if any]

## Acceptance Criteria

| Criterion | Status | Evidence |
|-----------|--------|----------|
| Criterion 1 | ✅/⚠️/❌ | [evidence] |
| Criterion 2 | ✅/⚠️/❌ | [evidence] |

## Definition of Done

| DoD Item | Status | Evidence |
|----------|--------|----------|
| DoD item 1 | ✅/⚠️/❌ | [evidence] |
| DoD item 2 | ✅/⚠️/❌ | [evidence] |

## Specification Compliance

- **Compliant**: Yes/No
- **Deviations**: [list]


## Traceability

- **Traceability Coverage**: N/N requirements covered (X%)
- **REQ-IDs**: [List of REQ-IDs covered by this task]

## Code Review Findings

### Critical
- [list]

### Major
- [list]

### Minor
- [list]

### Info
- [list]

## Recommendations

- [actionable recommendations]

## Decisions Referenced

- [List any DEC-ID entries from decision-log.md that explain deviations]


  4. If decision-log.md exists, include decisions referenced:

    • Search for DEC entries mentioning this task (TASK-XXX)
    • Summarize key decisions that affected implementation
    • Reference specific decision IDs (e.g., See DEC-003)
  5. Save the report to: docs/specs/[id]/tasks/TASK-XXX--review.md
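
The PASSED/FAILED rule from step 2 is all-or-nothing and can be sketched as follows (the status strings are illustrative):

```python
def review_status(criteria, dod_items, critical_issues, major_issues):
    """PASSED only when every acceptance criterion and DoD item is fully
    met and no critical or major code review issues remain; a 'partial'
    (⚠️) result counts as failed, per the rule above."""
    all_met = all(status == "met" for status in criteria + dod_items)
    no_blockers = critical_issues == 0 and major_issues == 0
    return "PASSED" if all_met and no_blockers else "FAILED"
```

Note that minor and info findings do not block a PASSED result; only critical and major issues do.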

Phase 7: Review Confirmation

Goal: Present review results to user

Actions:

  1. If --no-confirm was passed: SKIP this phase entirely. Do NOT use AskUserQuestion. Do NOT invoke task-implementation automatically. Simply save the review report and terminate so the caller (e.g., Ralph Loop) can read the report and decide the next step.

  2. Present the review report to the user

  3. Ask for confirmation via AskUserQuestion:

    • Option A: Review complete, task approved
    • Option B: Issues found, needs revision
    • Option C: Need additional verification
  4. If issues found:

    • List specific issues that need fixing
    • Save findings to the review report at docs/specs/[id]/tasks/TASK-XXX--review.md
    • Invoke /specs:task-implementation --lang=[language] --task="docs/specs/[id]/tasks/TASK-XXX.md"
    • Reference the review report path so implementation can read the detailed findings
    • Track unresolved items
    • Note: Before re-implementing, consider running /devkit.spec-review [spec-folder] to verify the spec is still accurate if issues suggest spec-level problems
  5. If review approved (no issues or all issues fixed):

    • Auto-update task status: Check the boxes in the DoD/Review section of the task
    • Status automatically updates to reviewed when all checkboxes are checked
    • Proceed to code cleanup:
    /specs:code-cleanup --lang=[language] --task="docs/specs/[id]/tasks/TASK-XXX.md"

Phase 8: Summary

Goal: Document what was accomplished

Actions:

  1. Mark all todos complete
  2. Summarize:
    • Task Reviewed: Path to task file
    • Specification: Reference to functional spec
    • Implementation Status: Complete/Partial/Incomplete
    • Acceptance Criteria: All met / Partial / Failed
    • Code Review Status: Passed / Issues / Failed
    • Review Report: docs/specs/[id]/tasks/TASK-XXX--review.md
    • Next Step:
      • If approved: Run /specs:code-cleanup to finalize the task
      • If issues found: Return to /specs:task-implementation to fix issues

Integration with Workflow

This command completes the verification loop:

/specs:brainstorm
    ↓
[Creates: docs/specs/[id]/YYYY-MM-DD--feature-name.md]
    ↓
/specs:spec-to-tasks --lang=[language] docs/specs/[id]/
    ↓
[Creates: docs/specs/[id]/tasks/TASK-XXX.md]
    ↓
/specs:task-implementation --lang=[language] --task="docs/specs/[id]/tasks/TASK-XXX.md"
    ↓
[Implements task]
    ↓
/specs:task-review --lang=[language] "docs/specs/[id]/tasks/TASK-XXX.md"
    ↓
[Verifies implementation, generates review report]
    ↓
[If issues: back to implementation]
[If approved: proceed to next task]

Examples

Example 1: Review User Authentication Task

# Review a completed task
/specs:task-review --lang=spring docs/specs/001-user-auth/tasks/TASK-001.md

Example 2: Review Checkout Task

/specs:task-review --lang=typescript docs/specs/005-checkout/tasks/TASK-003.md

Example 3: Review API Integration Task

/specs:task-review --lang=python docs/specs/010-payment/tasks/TASK-002.md

Todo Management

Throughout the process, maintain a todo list like:

[ ] Phase 1: Task Analysis
[ ] Phase 2: Implementation Verification
[ ] Phase 3: Acceptance Criteria Validation
[ ] Phase 4: Specification Compliance Check
[ ] Phase 5: Code Review
[ ] Phase 6: Review Report Generation
[ ] Phase 7: Review Confirmation
[ ] Phase 8: Summary

Update the status as you progress through each phase.


Note: This command ensures quality control in the development workflow by verifying that implemented tasks meet specifications and pass code review standards before proceeding to the next task.
