Design Doc compliance and security validation with optional auto-fixes
Quality score: 47%

Does it follow best practices?

Impact: Pending (no eval scenarios have been run)
Passed (no known issues)
Optimize this skill with Tessl:

npx tessl skill review --optimize ./skills/recipe-front-review/SKILL.md

Quality
Discovery
32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a domain (Design Doc compliance/security) but is too terse and lacks explicit trigger guidance. It does not enumerate specific actions performed and omits a 'Use when...' clause, making it difficult for Claude to reliably select this skill from a large pool. Adding concrete actions and explicit trigger conditions would significantly improve it.
Suggestions
- Add a 'Use when...' clause specifying trigger scenarios, e.g., 'Use when the user asks to review a design document for compliance, security issues, or policy violations.'
- List specific concrete actions such as 'Validates design documents against security policies, checks for compliance gaps, flags vulnerabilities, and optionally applies auto-fixes to resolve common issues.'
- Include natural trigger terms users might say, such as 'design document', 'security review', 'compliance check', 'design review', or 'policy validation'.
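Applied to this skill's frontmatter, the suggestions above might look like the following sketch (the exact wording is illustrative, and the `name` is assumed from the skill path shown in the optimize command):

```yaml
---
name: recipe-front-review  # assumed from ./skills/recipe-front-review/
description: >
  Validates design documents against security policies, checks for
  compliance gaps, flags vulnerabilities, and optionally applies
  auto-fixes to resolve common issues. Use when the user asks to
  review a design document for compliance, security issues, or policy
  violations, or mentions a design review, security review,
  compliance check, or policy validation.
---
```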
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain ('Design Doc compliance and security validation') and mentions one action ('auto-fixes'), but does not list multiple specific concrete actions like what compliance checks are performed, what security validations occur, or what the auto-fixes entail. | 2 / 3 |
| Completeness | Partially addresses 'what' (compliance and security validation with auto-fixes) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, a missing 'Use when...' clause caps completeness at 2, and the 'what' is also weak, so this scores a 1. | 1 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'Design Doc', 'compliance', 'security validation', and 'auto-fixes', but misses common user variations such as 'design document', 'security review', 'lint', 'policy check', or file type references that users might naturally say. | 2 / 3 |
| Distinctiveness / Conflict Risk | 'Design Doc' provides some specificity, but 'compliance and security validation' is broad enough to overlap with general security review skills, linting tools, or other compliance-checking skills. The scope is not clearly delineated. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation
62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a well-structured orchestration workflow for design doc compliance and security validation with clear sequencing, validation checkpoints, and error handling paths. Its main weaknesses are moderate verbosity in sub-agent invocation descriptions that follow a repetitive pattern, and reliance on external sub-agents and references without clear links. The workflow clarity is strong with explicit feedback loops and blocking conditions.
Suggestions
- Condense the repetitive sub-agent invocation blocks into a table or template format to reduce token usage (e.g., a single example followed by a parameter table for each step)
- Add explicit links to referenced external resources like the 'documentation-criteria skill' and 'task template' rather than mentioning them by name only
- Remove filler phrases like 'Think deeply' and 'Understand the essence of compliance validation' which add no actionable value
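As a sketch of the first suggestion, assuming the skill's sub-agent steps all share the same invocation structure, the repetitive blocks could collapse into one template plus a parameter table (the agent names below are hypothetical; only the `$STEP_2_OUTPUT` placeholder appears in the skill itself):

```markdown
Invoke each sub-agent with this template, substituting from the table:

> Run <agent> against <input> and write structured results to <output>.

| Step | Agent (hypothetical)  | Input            | Output         |
|------|-----------------------|------------------|----------------|
| 2    | compliance-checker    | design doc       | $STEP_2_OUTPUT |
| 3    | security-scanner      | design doc       | $STEP_3_OUTPUT |
| 4    | fix-applier (optional)| $STEP_3_OUTPUT   | $STEP_4_OUTPUT |
```

A single worked example plus this table typically costs far fewer tokens than repeating a full invocation block per step.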
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is moderately efficient but includes some unnecessary verbosity like 'Think deeply' and 'Understand the essence of compliance validation'. The sub-agent invocation blocks are repetitive in structure and could be condensed. However, most content is functional and relevant. | 2 / 3 |
| Actionability | The skill provides concrete sub-agent invocation patterns with specific parameters and structured output formats, but relies heavily on placeholder variables ($STEP_2_OUTPUT) and references to external sub-agents without showing their actual behavior. The bash commands in step 1 are executable, but the core workflow depends on agent tool invocations that are described rather than fully specified. | 2 / 3 |
| Workflow Clarity | The multi-step workflow is clearly sequenced with numbered steps, explicit validation checkpoints (re-validate after fixes in steps 9-10), a blocking condition for security failures (step 4), user confirmation gates, and a clear feedback loop (fix → quality check → re-validate). The branching logic for pass/fail and fix/no-fix paths is well-defined. | 3 / 3 |
| Progressive Disclosure | The content is structured with clear sections and headers, but it's a fairly long monolithic document that could benefit from splitting detailed sub-agent specifications or the fix workflow into separate files. It references external resources (documentation-criteria skill, task template) but doesn't provide clear navigation links. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
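The `frontmatter_unknown_keys` warning can typically be cleared by nesting unrecognized top-level keys under `metadata`. A minimal sketch, assuming a hypothetical unrecognized `reviewers` key (the actual offending key is not named in the report):

```yaml
# Before: `reviewers` is not a recognized top-level frontmatter key
---
name: recipe-front-review
description: Design Doc compliance and security validation with optional auto-fixes
reviewers: [alice, bob]   # hypothetical unknown key
---

# After: unrecognized keys moved under metadata
---
name: recipe-front-review
description: Design Doc compliance and security validation with optional auto-fixes
metadata:
  reviewers: [alice, bob]
---
```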
Revision: 2e719be