Skill under review: `requesting-code-review`

> Use when completing tasks, implementing major features, or before merging to verify work meets requirements
- Quality: 31% (Does it follow best practices?)
- Impact: — (No eval scenarios have been run)
- Validation: Passed (No known issues)
Optimize this skill with Tessl:

```sh
npx tessl skill review --optimize ./skills/requesting-code-review/SKILL.md
```

## Quality
### Discovery (0%)

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is extremely weak across all dimensions. It only vaguely describes when to use the skill (before merging, when completing tasks) but never explains what the skill actually does. The language is so generic that it could apply to almost any development-related skill, making it nearly useless for skill selection.
#### Suggestions

- Add a clear 'what' clause describing the concrete actions this skill performs (e.g., 'Runs test suites, checks code coverage, validates linting rules, and verifies build success').
- Replace generic phrases like 'completing tasks' and 'implementing major features' with specific trigger terms users would naturally say (e.g., 'run tests', 'check CI', 'verify build', 'pre-merge checks').
- Restructure to follow the pattern: '[Specific actions]. Use when [specific triggers].' to clearly separate capabilities from activation conditions.
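A sketch of what a restructured description could look like, following the suggested pattern. The wording and frontmatter fields below are illustrative assumptions, not the skill's actual content; the capabilities named are drawn from the Implementation review (subagent dispatch, SHA range, priority tiers):

```yaml
# Hypothetical SKILL.md frontmatter rewrite
name: requesting-code-review
description: >
  Dispatches a code-review subagent over the current branch's changes
  (base SHA to head SHA), then triages its feedback into
  Critical/Important/Minor fixes. Use when the user asks for a code
  review, pre-merge checks, or verification that completed work meets
  requirements.
```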
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague language like 'completing tasks' and 'implementing major features' without specifying any concrete actions. It doesn't describe what the skill actually does—only when to use it. | 1 / 3 |
| Completeness | The description addresses 'when' (before merging, when completing tasks) but completely fails to answer 'what does this do'. There is no indication of the skill's actual capabilities or actions. | 1 / 3 |
| Trigger Term Quality | The terms 'completing tasks', 'implementing major features', and 'merging' are extremely generic. 'Verify work meets requirements' is slightly more specific but still lacks natural keywords a user would say. There are no distinctive trigger terms. | 1 / 3 |
| Distinctiveness / Conflict Risk | Phrases like 'completing tasks' and 'implementing major features' are so generic they could apply to virtually any skill. There is nothing distinctive that would help Claude differentiate this from other skills. | 1 / 3 |
| **Total** | | **4 / 12** (Passed) |
### Implementation (62%)

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a solid workflow for requesting code reviews via subagents with clear sequencing and priority-based feedback handling. Its main weaknesses are the vague dispatch mechanism (the most critical step lacks executable detail) and some verbosity in sections that state obvious best practices. The template reference is good progressive disclosure but cannot be verified without bundle files.
#### Suggestions

- Make the dispatch step fully actionable by showing the exact Task tool invocation syntax with the filled template, rather than just naming the tool type and template path.
- Remove or significantly trim the 'Red Flags' section: advice like 'never ignore Critical issues' and 'never skip review because it's simple' doesn't add value for Claude.
- Include the code-reviewer.md template as a bundle file so the reference is verifiable and the skill is self-contained.
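The dispatch step the first suggestion refers to could be spelled out along these lines. This is a pseudocode sketch only; the field names and the `{{BASE_SHA}}`/`{{HEAD_SHA}}` placeholders are assumptions about the template, not confirmed syntax:

```
Task(
  subagent_type: "general-purpose",
  description:   "Code review of current branch",
  prompt:        <contents of requesting-code-review/code-reviewer.md,
                  with {{BASE_SHA}} and {{HEAD_SHA}} replaced by real SHAs>
)
```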
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Generally efficient but includes some unnecessary sections like 'Red Flags' with obvious advice ('Never skip review because it's simple', 'Ignore Critical issues') that Claude already knows. The example section is somewhat verbose and could be tightened. | 2 / 3 |
| Actionability | Provides concrete bash commands for getting SHAs and a clear placeholder template system, but the actual dispatch mechanism is vague ('Use Task tool with general-purpose type, fill template at code-reviewer.md'); the critical step of how to invoke the subagent lacks executable specificity. The example uses informal pseudo-conversation rather than concrete commands. | 2 / 3 |
| Workflow Clarity | The workflow is clearly sequenced: get SHAs → dispatch reviewer with template → act on feedback with explicit priority tiers (Critical/Important/Minor). The feedback loop is well-defined with clear criteria for when to fix vs. defer vs. push back, and integration with different workflow contexts is well-organized. | 3 / 3 |
| Progressive Disclosure | References the template at 'requesting-code-review/code-reviewer.md', which is appropriate one-level-deep disclosure, but no bundle files were provided to verify the reference exists. The skill itself contains sections (Integration with Workflows, Red Flags) that could arguably be trimmed or split out, and the reference to 'docs/superpowers/plans/deployment-plan.md' in the example adds confusion about the file structure. | 2 / 3 |
| **Total** | | **9 / 12** (Passed) |
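The 'concrete bash commands for getting SHAs' credited under Actionability are not reproduced in this report; a minimal sketch of what such commands typically look like follows. The variable names and the `origin/main` base branch are assumptions, and the first block only builds a throwaway repo so the snippet is self-contained — in real use you would run just the last three lines inside the branch under review:

```shell
# Demo setup: a throwaway repo with a simulated origin/main and one extra commit.
repo=$(mktemp -d) && cd "$repo"
git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "base"
git branch -q -M main
git update-ref refs/remotes/origin/main HEAD   # stand-in for a fetched origin/main
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "feature work"

# The actual SHA-gathering step a reviewer template would be filled with:
BASE_SHA=$(git merge-base origin/main HEAD)    # where the branch diverged
HEAD_SHA=$(git rev-parse HEAD)                 # tip being reviewed
echo "review range: ${BASE_SHA}..${HEAD_SHA}"
```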
### Validation (100%)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
All 11 / 11 validation checks for skill structure passed, with no warnings or errors.