
recipe-review

Design Doc compliance and security validation with optional auto-fixes

56

Quality: 47% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security, by Snyk: Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/recipe-review/SKILL.md

Quality

Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a domain (Design Doc compliance/security) but is too terse and lacks explicit trigger guidance. It does not enumerate specific actions performed and omits a 'Use when...' clause, making it difficult for Claude to reliably select this skill from a large pool. Adding concrete actions and explicit trigger conditions would significantly improve it.

Suggestions

Add a 'Use when...' clause specifying trigger scenarios, e.g., 'Use when the user asks to review a design document for compliance, security issues, or policy violations.'

List specific concrete actions such as 'Validates design documents against security policies, checks for compliance gaps, flags vulnerabilities, and optionally applies auto-fixes to resolve common issues.'

Include natural trigger terms users might say, such as 'design document', 'security review', 'compliance check', 'design review', or 'policy validation'.
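The three suggestions above could be combined into a single revised description. A hypothetical sketch follows, assuming the skill's frontmatter uses the common `name` and `description` keys; the exact wording is illustrative, not the skill author's:

```markdown
---
name: recipe-review
description: >
  Reviews design documents for compliance and security. Validates design
  documents against security policies, checks for compliance gaps, flags
  vulnerabilities, and optionally applies auto-fixes for common issues.
  Use when the user asks to review a design document for compliance,
  security issues, or policy violations, or mentions a 'design review',
  'security review', 'compliance check', or 'policy validation'.
---
```

Note how the rewrite packs the trigger terms users might actually say into the description itself, since that text is what the agent matches against at selection time.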

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain ('Design Doc compliance and security validation') and mentions one action ('auto-fixes'), but does not list multiple specific concrete actions, such as which compliance checks are performed, which security validations occur, or what the auto-fixes entail. | 2 / 3 |
| Completeness | Partially addresses 'what' (compliance and security validation with auto-fixes) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, a missing 'Use when...' clause caps completeness at 2, and the 'what' is also weak, so this scores a 1. | 1 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'Design Doc', 'compliance', 'security validation', and 'auto-fixes', but misses common user variations such as 'design document', 'security review', 'lint', 'policy check', or file-type references that users might naturally say. | 2 / 3 |
| Distinctiveness / Conflict Risk | 'Design Doc' provides some specificity, but 'compliance and security validation' is broad enough to overlap with general security review skills, linting tools, or other compliance-checking skills. The scope is not clearly delineated. | 2 / 3 |
| Total | | 7 / 12 |

Passed

Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides a well-structured orchestration workflow with clear sequencing, decision points, and validation loops for design doc compliance and security review. Its main weaknesses are moderate verbosity in describing sub-agent invocations and a reliance on abstract orchestration patterns rather than fully executable examples. The workflow clarity is strong with explicit checkpoints and error handling paths.

Suggestions

Reduce verbosity by templating the repeated sub-agent invocation pattern once and referencing it, rather than spelling out the full Agent tool parameters for each of the 6+ sub-agent calls.

Remove the 'Orchestrator Definition' identity statement — Claude doesn't need to be told it's an orchestrator; just provide the execution instructions directly.
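One way to apply the templating suggestion is to define the invocation pattern once and have each step reference it. The sketch below is hypothetical: the parameter names (`subagent_type`, `$STEP_<n>_OUTPUT`) follow the conventions the review quotes, but the skill's actual Agent tool parameters are not shown here:

```markdown
## Sub-agent invocation template

For each review step, invoke the Agent tool with:

- `subagent_type`: the reviewer named in that step's table row
- `prompt`: the step's instruction block, followed by the stored output
  of the preceding step (e.g. $STEP_2_OUTPUT)
- On completion, store the result as $STEP_<n>_OUTPUT for later steps

Steps 2-8: use the template above with the step-specific reviewer and
instructions, instead of repeating the full parameter list each time.
```

This keeps each step to a one-line reference and confines any future change to the invocation contract to a single place.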

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is moderately efficient but includes some unnecessary verbosity, such as the 'Orchestrator Definition' section explaining its identity and the repeated structural patterns for sub-agent invocations that could be templated. Some sections like 'Auto-fixable Items' and 'Non-fixable Items' are useful but could be tighter. | 2 / 3 |
| Actionability | The skill provides concrete sub-agent invocation patterns with specific parameters and structured output formats, but relies heavily on abstract orchestration concepts (e.g., 'Store output as $STEP_2_OUTPUT') without showing how this actually works in practice. The bash commands in Step 1 are executable, but most other steps are descriptions of agent invocations rather than directly executable code. | 2 / 3 |
| Workflow Clarity | The 11-step workflow is clearly sequenced with explicit decision points (Step 4 verdict logic with blocked/pass/fail criteria), validation checkpoints (Steps 9-10 re-validation), and a feedback loop (fix, then quality check, then re-validate). The blocked-security-finding early exit is a good safety checkpoint. | 3 / 3 |
| Progressive Disclosure | The content is well-structured with clear sections and headers, but it is a fairly long monolithic document that could benefit from splitting detailed sub-agent invocation patterns or the report templates into separate reference files. The reference to the 'documentation-criteria' skill in Step 5 is a good cross-reference, but other opportunities for splitting are missed. | 2 / 3 |
| Total | | 9 / 12 |

Passed

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
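The `frontmatter_unknown_keys` warning can typically be cleared by moving non-standard top-level keys under a `metadata` block. A hypothetical before/after sketch, assuming the frontmatter follows the usual SKILL.md conventions (the specific keys shown are illustrative, since the report does not name which keys triggered the warning):

```markdown
---
name: recipe-review
description: Design Doc compliance and security validation with optional auto-fixes
metadata:
  author: shinpr          # previously an unknown top-level key
  category: code-review   # previously an unknown top-level key
---
```

Keys the spec does not recognize at the top level are preserved but nested where validators expect custom data, which is why the check suggests "removing or moving to metadata".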

Repository: shinpr/claude-code-workflows (Reviewed)

