Skill: roborev-fix

Description: "Use when the user asks to fix open reviews, invokes /roborev-fix, or provides job IDs; do not use when the user only pastes review findings with no request to discover or close reviews"
Overall score: 80

| Category | Result | Notes |
|---|---|---|
| Quality | 74% | Does it follow best practices? |
| Impact | Pending | No eval scenarios have been run |
| Validation | Passed | No known issues |
Optimize this skill with Tessl: `npx tessl skill review --optimize ./internal/skills/claude/roborev-fix/SKILL.md`

## Quality
### Discovery: 72%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description excels at defining when to use (and when not to use) the skill, with strong trigger terms including a specific slash command. However, it is weak on explaining what the skill actually does—the concrete actions and capabilities are not enumerated. The negative trigger clause is a notable strength for disambiguation.
Suggestions:

- Add explicit capability statements describing what the skill does (e.g., "Discovers open code reviews from CI jobs, applies fixes, and closes review findings" or similar concrete actions).
- Expand the "what" portion to list specific actions, such as fetching review data, generating fixes, and submitting changes, so Claude can confidently select this skill.
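Combining both suggestions, a revised frontmatter description might read as follows. This is a hypothetical sketch: the wording is illustrative, and the capability list is drawn from this review's reading of the skill, not from the skill's actual text.

```yaml
# Hypothetical SKILL.md frontmatter; wording is illustrative only.
description: >
  Discovers open code reviews from CI jobs, applies fixes for their
  findings, and closes the reviews: fetches review data, generates and
  submits changes, runs tests, and comments on and closes each finding.
  Use when the user asks to fix open reviews, invokes /roborev-fix, or
  provides job IDs; do not use when the user only pastes review findings
  with no request to discover or close reviews.
```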
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description mentions "fix open reviews" and "discover or close reviews", which hint at concrete actions, but it doesn't list specific capabilities comprehensively. What does "fix" entail? What steps are performed? The actions remain somewhat vague. | 2 / 3 |
| Completeness | The "when" clause is explicit and well defined (including both positive and negative triggers), but the "what" is weak: it never clearly states what the skill actually does beyond vague references to fixing, discovering, and closing reviews. | 2 / 3 |
| Trigger Term Quality | Includes strong natural trigger terms: "fix open reviews", "/roborev-fix", "job IDs", and the negative trigger distinguishing it from simply pasting review findings. These are terms users would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | The description is highly distinctive, with the specific command "/roborev-fix", "job IDs", and the explicit negative trigger clause that clearly delineates when NOT to use this skill, making conflicts with other skills unlikely. | 3 / 3 |
| **Total** | | **10 / 12 (Passed)** |
### Implementation: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured, highly actionable skill with clear workflow sequencing and proper validation checkpoints. Its main weakness is verbosity — the extensive inline examples and detailed edge-case handling (pasted findings matching, closed review warnings) make it longer than necessary, though the content is generally useful. The skill excels at providing concrete, executable commands and clear decision logic at every step.
Suggestions:

- Move the three detailed examples to a separate EXAMPLES.md file and reference it from the main skill, keeping only a brief usage summary inline.
- Tighten step 1 by consolidating the three conditional paths (pasted findings, job IDs provided, neither) into a more compact decision table or flowchart-style format.
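As a sketch of that consolidation, step 1's three paths could be tabulated like this. The actions column paraphrases this review's description of the skill; the skill's exact behavior may differ.

```markdown
<!-- Hypothetical decision table for step 1; actions paraphrase the review. -->
| User input           | Step 1 action                                         |
|----------------------|-------------------------------------------------------|
| Pasted findings only | Match the findings against open reviews before acting |
| Job IDs provided     | Fetch those reviews directly (e.g., `roborev show`)   |
| Neither              | List open reviews with `roborev fix --open --list`    |
```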
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is mostly efficient and avoids explaining concepts Claude already knows, but it's quite lengthy for what it does. Some sections are repetitive (e.g., the "When NOT to invoke" section partially overlaps with the description/frontmatter intent, and the step 1 logic for pasted findings is verbose). The examples section, while helpful, adds significant length. | 2 / 3 |
| Actionability | The skill provides fully concrete, executable bash commands at every step (`roborev show`, `roborev fix --open --list`, `roborev comment`, `roborev close`, `git show`, `go test`). JSON field names are specified, decision logic is explicit, and the workflow leaves no ambiguity about what to run and when. | 3 / 3 |
| Workflow Clarity | The 6-step workflow is clearly sequenced with explicit validation checkpoints: step 4 runs tests and requires fixing regressions before proceeding, step 5 only closes after confirming the comment succeeded, and step 1 has clear error handling and fallback paths. The feedback loop (test, fix regressions, proceed) is well defined for this destructive/batch operation. | 3 / 3 |
| Progressive Disclosure | The skill is self-contained with a "See also" reference to a related skill, which is good. However, at ~150 lines with three detailed examples inline, some content (like the examples section) could be split into a separate file. The structure is clear with good headers, but the inline examples make it longer than ideal for a SKILL.md overview. | 2 / 3 |
| **Total** | | **10 / 12 (Passed)** |
### Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 11 / 11 checks passed. No warnings or errors.
Commit: 2c9749e