sdd-verify

Validate that implementation matches specs, design, and tasks. Trigger: When the orchestrator launches you to verify a completed (or partially completed) change.

- Quality: 51% (Does it follow best practices?)
- Impact: Pending (No eval scenarios have been run)
- Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl:

```
npx tessl skill review --optimize ./skills/sdd-verify/SKILL.md
```
Quality

Discovery

25%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is too vague to effectively guide skill selection. It lacks concrete actions (what specific validation steps are performed), uses internal jargon ('orchestrator launches you') instead of natural user language, and doesn't clearly differentiate itself from other verification or review skills. The second-person 'you' in the trigger clause also violates the third-person voice requirement.

Suggestions

List specific validation actions such as 'Compares code changes against specification requirements, checks design pattern adherence, verifies task acceptance criteria are met'.

Replace the orchestrator-focused trigger with natural user terms: 'Use when verifying code changes match requirements, checking implementation correctness, or reviewing completed features against design documents'.

Add distinct trigger terms users might say, such as 'verify implementation', 'check against spec', 'review completed work', 'acceptance criteria', 'design compliance'.
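Taken together, the suggestions point toward a description along these lines. This is an illustrative sketch, not the maintainer's wording; the frontmatter field names assume the conventional SKILL.md format:

```yaml
---
name: sdd-verify
description: >
  Verifies that a completed change matches its specification, design
  documents, and task list. Compares code changes against spec
  requirements, checks design pattern adherence, and confirms task
  acceptance criteria are met. Use when verifying an implementation
  against a spec, reviewing completed work, or checking acceptance
  criteria and design compliance.
---
```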

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description uses vague language like 'validate that implementation matches specs, design, and tasks' without listing concrete actions. It doesn't specify what kind of validation (e.g., running tests, comparing code against requirements, checking design patterns). | 1 / 3 |
| Completeness | It has a vague 'what' (validate implementation matches specs) and a 'when' trigger clause, but the trigger is about an orchestrator launching it rather than explicit user-facing triggers. The 'when' is present but not useful for distinguishing from other skills. | 2 / 3 |
| Trigger Term Quality | The trigger terms are internal/technical jargon ('orchestrator launches you') rather than natural keywords a user would say. Terms like 'verify', 'validate', 'specs', 'design' are generic and don't reflect how users naturally phrase requests. | 1 / 3 |
| Distinctiveness / Conflict Risk | The concept of validating implementation against specs is somewhat specific, but the vague language ('specs, design, and tasks') could overlap with testing skills, code review skills, or QA skills. The orchestrator-based trigger helps narrow it slightly. | 2 / 3 |
| **Total** | | **6 / 12** |

Passed

Implementation

77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured verification workflow skill with excellent actionability and workflow clarity. The step-by-step process with explicit validation checkpoints, severity classifications, and a detailed compliance matrix template makes it highly executable. The main weaknesses are the inline report template, which adds significant length, and some minor redundancy in the explanations, though overall the content density is justified by the complexity of the task.

Suggestions

Consider moving the full report template (Step 8) to a separate reference file (e.g., `verify-report-template.md`) and linking to it, keeping only a brief summary of required sections inline.

Remove the redundant emphasis on 'static analysis is not enough' which appears in both the Purpose section and Step 6 — state it once definitively.
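The first suggestion amounts to one more level of progressive disclosure. A sketch of what the slimmed-down Step 8 might look like (the section list is illustrative, not taken from the actual skill; the file name follows the suggestion above):

```markdown
## Step 8: Write the verification report

Produce a report with these sections: Summary, Compliance Matrix,
Test Results, Flags (CRITICAL/WARNING/SUGGESTION), and Final Verdict.

Use the full template in [verify-report-template.md](./verify-report-template.md).
```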

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is fairly long but most content is structural (workflow steps, templates, decision trees). Some redundancy exists, e.g., the purpose section explains 'static analysis alone is NOT enough', which is repeated in Step 6. The pseudocode tree diagrams add visual clarity but also bulk. Overall mostly efficient with some tightening possible. | 2 / 3 |
| Actionability | Highly actionable with concrete detection logic (package.json, pyproject.toml, Makefile fallbacks), specific commands to run, exact output formats to capture (exit codes, pass/fail counts), and a complete report template with table schemas. The compliance matrix logic is explicit and copy-paste ready. | 3 / 3 |
| Workflow Clarity | Excellent multi-step sequencing with clear numbered steps, explicit validation checkpoints (build must pass, tests must pass, compliance matrix cross-references test results), severity flagging (CRITICAL/WARNING/SUGGESTION), and a final verdict gate. Feedback loops are present, e.g., Step 6 cross-references Step 5b results, and the flag system clearly defines blocking vs non-blocking issues. | 3 / 3 |
| Progressive Disclosure | References to shared skills (`sdd-phase-common.md` Sections A-D, `openspec-convention.md`) are one level deep and clearly signaled. However, the report template is very long and inline; it could be split into a separate reference file. The skill itself is quite long (~200+ lines) with the full report template embedded, making it harder to scan. | 2 / 3 |
| **Total** | | **10 / 12** |

Passed
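The fallback detection logic praised under Actionability follows a common pattern: check for each project file in priority order and pick the matching test command. A minimal sketch, assuming this ordering and these commands (the skill's exact logic may differ):

```shell
# Pick a test command based on which project file exists, in priority order.
detect_test_cmd() {
  dir="$1"
  if [ -f "$dir/package.json" ]; then
    echo "npm test"
  elif [ -f "$dir/pyproject.toml" ]; then
    echo "python -m pytest"
  elif [ -f "$dir/Makefile" ]; then
    echo "make test"
  else
    echo ""   # no recognized build file; caller decides how to flag this
  fi
}

# Example: a project with only a pyproject.toml resolves to pytest.
tmp=$(mktemp -d)
touch "$tmp/pyproject.toml"
cmd=$(detect_test_cmd "$tmp")
echo "$cmd"   # prints "python -m pytest"
rm -rf "$tmp"
```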

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: Gentleman-Programming/agent-teams-lite (Reviewed)
