Comprehensive developer toolkit providing reusable skills for Java/Spring Boot, TypeScript/NestJS/React/Next.js, Python, PHP, AWS CloudFormation, AI/RAG, DevOps, and more.
Validates implementation completion by checking tasks, logic, tests, and code quality against specifications. Use when you need to verify that all tasks are properly completed.
/speckit.verify $ARGUMENTS

| Argument | Description |
|---|---|
| $ARGUMENTS | Combined arguments passed to the command |
Agent Selection: To execute this task, use the following approach:
general-purpose agent with appropriate domain expertise

$ARGUMENTS

You MUST consider the user input before proceeding (if not empty).
Goal: Perform comprehensive verification of the implemented feature against specifications, ensuring all
requirements are met, tests pass, and code quality standards are satisfied. This command runs AFTER /speckit.implement
completes.
Critical Principle: This is a READ-ONLY analysis command. The ONLY file that may be written is
verification-report.md in the feature directory. No other modifications are permitted.
Run .specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks from repo root and parse JSON
for:
For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'''m Groot' (or double-quote if possible: "I'm Groot").
Abort if implementation appears incomplete with guidance to run /speckit.implement first.
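The prerequisite gate above can be sketched as follows. This is a minimal illustration only: the exact key names emitted by `check-prerequisites.sh --json` (here `FEATURE_DIR` and `AVAILABLE_DOCS`) are assumptions, and the JSON literal stands in for the script's real output.

```python
import json

# Hypothetical JSON shape from check-prerequisites.sh --json;
# the real script's keys may differ, so treat these names as placeholders.
raw = '{"FEATURE_DIR": "specs/001-example", "AVAILABLE_DOCS": ["spec.md", "plan.md", "tasks.md"]}'
info = json.loads(raw)

feature_dir = info["FEATURE_DIR"]
docs = set(info["AVAILABLE_DOCS"])

# tasks.md is required because --require-tasks was passed.
missing = {"spec.md", "plan.md", "tasks.md"} - docs
if missing:
    raise SystemExit(f"Abort: run /speckit.implement first (missing {sorted(missing)})")
print(f"Verifying feature in {feature_dir}")
```

If any required document is absent, the sketch aborts with guidance to run `/speckit.implement` first, matching the abort rule above.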
Load artifacts in order (progressive, on-demand):
Required:
Optional (load if present):
Codebase:
Check tasks.md:
For each task in tasks.md:
Output:
TASK COMPLETION STATUS
Total tasks: X
Completed: X [X%]
Incomplete: X [X%]
Status: [PASS/FAIL]

If incomplete tasks found:
INCOMPLETE TASKS:
- [Task ID]: [Description] → [Reason not completed]

Verification FAILS if any task incomplete unless justified (explicitly marked as deferred/optional in tasks.md).
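The completion scan above can be sketched as a checkbox count, assuming tasks.md uses the conventional `- [X]` / `- [ ]` markers (the task lines below are illustrative examples, not from a real feature):

```python
import re

# Sample tasks.md content; real input would be read from the feature directory.
tasks_md = """\
- [X] T001 Create project skeleton
- [X] T002 Add user entity
- [ ] T003 Write integration tests
"""

# Count completed vs. incomplete checkbox markers at line starts.
done = len(re.findall(r"^- \[[xX]\]", tasks_md, flags=re.MULTILINE))
todo = len(re.findall(r"^- \[ \]", tasks_md, flags=re.MULTILINE))
total = done + todo

print(f"Total tasks: {total}")
print(f"Completed: {done} [{done * 100 // total}%]")
print(f"Status: {'PASS' if todo == 0 else 'FAIL'}")
```

A real scan would also need to honor tasks explicitly marked deferred/optional before declaring FAIL.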
Map implementation to specification:
For each requirement in spec.md:
Output format:
REQUIREMENT: [ID or description]
Status: [COVERED/PARTIAL/MISSING]
Implementation: [File paths]
Evidence: [Specific code references]
Acceptance Criteria Met: X/Y
Issues: [If any]

Verify against plan.md:
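The tech-stack portion of this check reduces to diffing the versions pinned in plan.md against the project's manifest. A minimal sketch, where both dictionaries and all package names/versions are hypothetical examples:

```python
# Versions declared in plan.md (expected) vs. those found in the
# project manifest (actual); all values here are illustrative.
expected = {"react": "18.3.1", "zod": "3.23.8"}
actual = {"react": "18.3.1", "zod": "3.22.0"}

violations = [
    (pkg, want, actual.get(pkg, "MISSING"))
    for pkg, want in expected.items()
    if actual.get(pkg) != want
]
status = "COMPLIANT" if not violations else "VIOLATIONS"
print(f"Tech Stack: {status}")
for pkg, want, got in violations:
    print(f"- {pkg}: {want} → {got} [MISMATCH]")
```

In practice the expected set would be parsed out of plan.md and the actual set from package.json, pom.xml, or the equivalent manifest for the project's stack.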
Output:
ARCHITECTURE COMPLIANCE
Tech Stack: [COMPLIANT/VIOLATIONS]
- [Package]: [Expected version] → [Actual version] [Status]
Structure: [COMPLIANT/VIOLATIONS]
- [Expected pattern] → [Actual implementation] [Status]
Design Patterns: [COMPLIANT/VIOLATIONS]
- [Pattern]: [Assessment]

If data-model.md exists:
For each entity:
Check:
Output:
DATA MODEL VERIFICATION
Entity: [Name]
- Fields: [X/Y present, types correct]
- Relationships: [X/Y implemented]
- Validations: [List]
- Status: [PASS/FAIL]

If contracts/ exists:
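The route-matching part of contract verification can be sketched as a set diff between endpoints declared in a contract (e.g. parsed from an OpenAPI file under contracts/) and routes found in the implementation. Both sets below are hypothetical examples:

```python
# (method, path) pairs from the contract file vs. those implemented;
# a real check would parse these from contracts/ and the route layer.
contract_routes = {("GET", "/users"), ("POST", "/users"), ("GET", "/users/{id}")}
implemented_routes = {("GET", "/users"), ("POST", "/users")}

missing = contract_routes - implemented_routes
extra = implemented_routes - contract_routes
for method, path in sorted(missing):
    print(f"Endpoint: {method} {path}\n- Route: MISMATCH (not implemented)")
print(f"Status: {'PASS' if not missing and not extra else 'FAIL'}")
```

Schema and error-handling checks would layer on top of this, comparing request/response shapes per matched endpoint.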
For each contract file:
Output:
CONTRACT: [Filename]
Endpoint: [Method] [Path]
- Route: [MATCH/MISMATCH]
- Request Schema: [VALID/INVALID]
- Response Schema: [VALID/INVALID]
- Error Handling: [COMPLETE/INCOMPLETE]
- Logic Compliance: [PASS/FAIL]
Status: [PASS/FAIL]

Run test suites:

```bash
# Run all test commands from plan.md or infer from project type
npm test                # or pytest, cargo test, etc.
npm run test:coverage   # or equivalent
```

Output:
TEST EXECUTION RESULTS
Total Tests: X
Passed: X [100%/less]
Failed: X
Skipped: X
Duration: Xs
COVERAGE METRICS
Lines: X%
Branches: X%
Functions: X%
Statements: X%
Threshold: Y% [MET/NOT MET]
TEST QUALITY
Positive Scenarios: [COVERED/GAPS]
Negative Scenarios: [COVERED/GAPS]
Edge Cases: [COVERED/GAPS]
Integration: [COVERED/GAPS]

If tests fail:
FAILED TESTS:
[Test suite] → [Test name]
Error: [Message]
File: [Path:Line]
Expected: [Value]
Actual: [Value]

Static Analysis:
Run project linter (ESLint, Pylint, Clippy, etc.):
If TypeScript/typed language:
Output:
CODE QUALITY SCAN
Linter: [Tool name]
- Errors: X
- Warnings: X
- Status: [PASS/FAIL]
Type Checker:
- Errors: X
- Status: [PASS/FAIL]
Formatter:
- Violations: X
- Status: [PASS/FAIL]

Security Checks:
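The hardcoded-secrets line of the audit can be sketched as a pattern scan. This is a naive illustration only; a real audit should use a dedicated scanner (e.g. gitleaks or trufflehog) plus a dependency audit, and the sample source line below is fabricated for the example:

```python
import re

# Naive secret patterns: assignment of key-like names to long string
# literals, and the AWS access-key-ID shape. Far from exhaustive.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|password)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),
]

source = 'const apiKey = "sk-test-1234567890abcdef";'
findings = [p.pattern for p in SECRET_PATTERNS if p.search(source)]
print(f"- Hardcoded Secrets: {'FOUND' if findings else 'NONE'}")
```

In a real run, `source` would iterate over the repository's files, and any finding would flip the audit status to FAIL.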
Performance Checks:
Output:
SECURITY AUDIT
- Hardcoded Secrets: [NONE/FOUND]
- Input Validation: [COMPLETE/GAPS]
- Injection Prevention: [PROTECTED/VULNERABLE]
- Auth/Authz: [CORRECT/ISSUES]
- Dependencies: [SECURE/VULNERABILITIES]
Status: [PASS/FAIL]
PERFORMANCE AUDIT
- Algorithm Efficiency: [OPTIMAL/CONCERNS]
- Database Queries: [OPTIMIZED/ISSUES]
- Resource Management: [PROPER/LEAKS]
Status: [PASS/FAIL]

Check documentation quality:
Output:
DOCUMENTATION REVIEW
README: [UPDATED/OUTDATED/N/A]
API Docs: [ACCURATE/OUTDATED/MISSING]
Code Comments: [ADEQUATE/SPARSE]
Public APIs: [DOCUMENTED/UNDOCUMENTED]
Status: [PASS/FAIL]

If .specify/memory/constitution.md exists:
Check implementation against project principles:
Output:
CONSTITUTION COMPLIANCE
Principle: [Name]
- Status: [COMPLIANT/VIOLATION]
- Evidence: [Description]
Overall: [COMPLIANT/VIOLATIONS]

If FEATURE_DIR/checklists/ exists:
For each checklist file:
Output:
CHECKLIST STATUS
| Checklist | Total | Completed | Incomplete | % | Status |
|-----------|-------|-----------|------------|------|--------|
| ux.md | 12 | 12 | 0 | 100% | ✓ PASS |
| api.md | 15 | 14 | 1 | 93% | ✗ FAIL |
Overall: [PASS if all 100% / FAIL if any incomplete]

Create structured report at FEATURE_DIR/verification-report.md:
# Feature Verification Report
**Feature**: [Name from spec.md]
**Verification Date**: [ISO date]
**Verification Status**: [PASS/FAIL]
---
## Executive Summary
[Overall pass/fail with key metrics]
**Quick Stats**:
- Tasks Completed: X/Y (Z%)
- Requirements Covered: X/Y (Z%)
- Tests Passed: X/Y (Z%)
- Code Quality: [PASS/FAIL]
- Overall Status: [PASS/FAIL]
---
## Task Completion
[Results from step 3]
---
## Requirements Coverage
[Results from step 4]
**Coverage Summary**:
- Functional Requirements: X/Y covered
- Non-Functional Requirements: X/Y covered
- User Stories: X/Y implemented
- Acceptance Criteria: X/Y met
---
## Architecture Compliance
[Results from step 5]
---
## Data Model Validation
[Results from step 6 if applicable]
---
## Contract Compliance
[Results from step 7 if applicable]
---
## Test Results
[Results from step 8]
---
## Code Quality
[Results from step 9]
---
## Security & Performance
[Results from step 10]
---
## Documentation
[Results from step 11]
---
## Constitution Compliance
[Results from step 12 if applicable]
---
## Checklist Status
[Results from step 13 if applicable]
---
## Issues Found
[If FAIL, list all issues with severity]
### Critical Issues
[Must fix before merge]
### High Priority
[Should fix before merge]
### Medium Priority
[Can fix post-merge if needed]
### Low Priority
[Optional improvements]
---
## Remediation Steps
[If FAIL, provide specific actions]
1. **[Issue category]**:
- File: [Path]
- Problem: [Description]
- Fix: [Specific steps]
- Verification: [How to confirm fixed]
---
## Sign-Off
- [ ] All critical issues resolved
- [ ] All high priority issues resolved
- [ ] Tests pass at 100%
- [ ] Code quality meets standards
- [ ] Documentation updated
- [ ] Ready for review/merge
---
## Next Steps
[If PASS]: Feature ready for [code review/staging/production]
[If FAIL]: Fix issues listed above and re-run `/speckit.verify`
---
## Metrics
| Metric | Target | Actual | Status |
|--------|--------|--------|--------|
| Task Completion | 100% | X% | [✓/✗] |
| Requirements Coverage | 100% | X% | [✓/✗] |
| Test Pass Rate | 100% | X% | [✓/✗] |
| Test Coverage | ≥90% | X% | [✓/✗] |
| Code Quality | 0 errors | X errors | [✓/✗] |
| Security Issues | 0 | X | [✓/✗] |

If PASS:
✅ VERIFICATION PASSED
Feature implementation successfully verified!
Summary:
- All tasks completed
- All requirements covered
- All tests passing (X% coverage)
- Code quality standards met
- No security issues found
Report: [path to verification-report.md]
Suggested commit message:
feat: complete [feature name] implementation
Next steps:
- Create pull request for review
- Deploy to staging environment
- Update project documentation

If FAIL:
❌ VERIFICATION FAILED
Implementation has issues requiring attention.
Critical Issues: X
High Priority: X
Medium Priority: X
Top Issues:
1. [Issue summary]
2. [Issue summary]
3. [Issue summary]
Full details: [path to verification-report.md]
Next steps:
1. Review verification report
2. Fix critical and high priority issues
3. Re-run `/speckit.verify`
Do NOT proceed to merge until verification passes.

CRITICAL (Must Pass):
HIGH (Should Pass):
MEDIUM (Nice to Pass):
LOW (Optional):
PASS requires:
FAIL if any:
Every finding must include:
No speculation allowed:
Progressive analysis:
Compact reporting:
$ARGUMENTS
verification-report.md

/speckit.verify example-input

docs
plugins
developer-kit-ai
developer-kit-aws
agents
docs
skills
aws
aws-cli-beast
aws-cost-optimization
aws-drawio-architecture-diagrams
aws-sam-bootstrap
aws-cloudformation
aws-cloudformation-auto-scaling
aws-cloudformation-bedrock
aws-cloudformation-cloudfront
aws-cloudformation-cloudwatch
aws-cloudformation-dynamodb
aws-cloudformation-ec2
aws-cloudformation-ecs
aws-cloudformation-elasticache
references
aws-cloudformation-iam
references
aws-cloudformation-lambda
aws-cloudformation-rds
aws-cloudformation-s3
aws-cloudformation-security
aws-cloudformation-task-ecs-deploy-gh
aws-cloudformation-vpc
references
developer-kit-core
agents
commands
skills
developer-kit-devops
developer-kit-java
agents
commands
docs
skills
aws-lambda-java-integration
aws-rds-spring-boot-integration
aws-sdk-java-v2-bedrock
aws-sdk-java-v2-core
aws-sdk-java-v2-dynamodb
aws-sdk-java-v2-kms
aws-sdk-java-v2-lambda
aws-sdk-java-v2-messaging
aws-sdk-java-v2-rds
aws-sdk-java-v2-s3
aws-sdk-java-v2-secrets-manager
clean-architecture
graalvm-native-image
langchain4j-ai-services-patterns
references
langchain4j-mcp-server-patterns
references
langchain4j-rag-implementation-patterns
references
langchain4j-spring-boot-integration
langchain4j-testing-strategies
langchain4j-tool-function-calling-patterns
langchain4j-vector-stores-configuration
references
qdrant
references
spring-ai-mcp-server-patterns
spring-boot-actuator
spring-boot-cache
spring-boot-crud-patterns
spring-boot-dependency-injection
spring-boot-event-driven-patterns
spring-boot-openapi-documentation
spring-boot-project-creator
spring-boot-resilience4j
spring-boot-rest-api-standards
spring-boot-saga-pattern
spring-boot-security-jwt
assets
references
scripts
spring-boot-test-patterns
spring-data-jpa
references
spring-data-neo4j
references
unit-test-application-events
unit-test-bean-validation
unit-test-boundary-conditions
unit-test-caching
unit-test-config-properties
references
unit-test-controller-layer
unit-test-exception-handler
references
unit-test-json-serialization
unit-test-mapper-converter
references
unit-test-parameterized
unit-test-scheduled-async
references
unit-test-service-layer
references
unit-test-utility-methods
unit-test-wiremock-rest-api
references
developer-kit-php
developer-kit-project-management
developer-kit-python
developer-kit-specs
commands
docs
hooks
test-templates
tests
skills
developer-kit-tools
developer-kit-typescript
agents
docs
hooks
rules
skills
aws-cdk
aws-lambda-typescript-integration
better-auth
clean-architecture
drizzle-orm-patterns
dynamodb-toolbox-patterns
references
nestjs
nestjs-best-practices
nestjs-code-review
nestjs-drizzle-crud-generator
nextjs-app-router
nextjs-authentication
nextjs-code-review
nextjs-data-fetching
nextjs-deployment
nextjs-performance
nx-monorepo
react-code-review
react-patterns
shadcn-ui
tailwind-css-patterns
tailwind-design-system
references
turborepo-monorepo
typescript-docs
typescript-security-review
zod-validation-utilities
references
github-spec-kit