Skill description under review: "Run the local review gate before pushing."
Overall score: 55%
- Quality (does it follow best practices?): Passed. No known issues.
- Impact: Pending. No eval scenarios have been run.
Discovery: 0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is extremely terse and vague, failing to communicate what specific actions the skill performs, what tools or checks are involved, or when Claude should select it. It reads more like a reminder note than a skill description and would be nearly impossible for Claude to correctly match against user requests in a multi-skill environment.
Suggestions
- Specify the concrete actions the review gate performs (e.g., 'Runs linting, unit tests, and type-checking on staged changes before pushing to a remote branch').
- Add an explicit 'Use when...' clause with natural trigger terms like 'pre-push checks', 'run tests before push', 'review gate', 'CI checks locally', 'validate before pushing'.
- Clarify the scope and tooling involved (e.g., which languages, test frameworks, or linters) to make the skill clearly distinguishable from general code review or testing skills.
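Taken together, a revised description might read as follows. This is an illustrative sketch only: the field names follow the common name/description skill-frontmatter format, and the wording is a suggestion, not the skill's actual metadata.

```yaml
# Hypothetical frontmatter for the skill; names and wording are assumptions.
name: local-review-gate
description: >
  Runs linting, unit tests, and type-checking on staged changes before
  pushing to a remote branch. Use when the user asks for pre-push checks,
  a review gate, running tests before push, or running CI checks locally.
```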
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description is vague — 'run the local review gate' does not explain what concrete actions are performed (e.g., linting, testing, type-checking). No specific capabilities are listed. | 1 / 3 |
| Completeness | The description barely addresses 'what' (run some unspecified review gate) and has no explicit 'when' clause or trigger guidance. Both dimensions are very weak. | 1 / 3 |
| Trigger Term Quality | 'Local review gate' and 'pushing' are somewhat technical but not natural terms a user would say. Users might say 'pre-push checks', 'run tests before push', 'CI checks', or 'lint my code' — none of which are covered. | 1 / 3 |
| Distinctiveness / Conflict Risk | The description is so vague it could overlap with any skill related to code review, CI/CD, testing, linting, or git workflows. There are no distinct triggers to differentiate it. | 1 / 3 |
| Total | | 4 / 12 (Passed) |
Implementation: 87%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
A strong, actionable skill with concrete, version-pinned commands and a clear sequential workflow. The main weakness is the lack of an explicit feedback loop—after fixing failures in step 7, the skill should instruct to re-run the relevant checks to confirm the fixes pass before proceeding.
Suggestions
- Add an explicit re-validation step after step 7: 'Re-run steps 3–6 to confirm all failures are resolved before pushing.'
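The missing feedback loop can be made concrete. A minimal sketch, assuming the gate's checks are exposed as commands; the function name and the example check commands are hypothetical, not taken from the skill itself:

```shell
# Hypothetical re-validation loop: after fixing failures in step 7,
# re-run every check and only proceed when all pass in the same cycle.
rerun_gate() {
  for check in "$@"; do            # each argument is one check command
    if ! $check; then
      echo "still failing: $check" >&2
      return 1                     # do not push until this check passes
    fi
  done
  echo "all checks green"
}
```

For example, `rerun_gate "ruff check ." "pytest -q"` would re-run both checks and report the first one that still fails.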
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Every line serves a purpose—specific tool versions, exact commands, no unnecessary explanation of what linting or syntax checking is. Lean and efficient. | 3 / 3 |
| Actionability | All commands are fully executable and copy-paste ready with specific version pins, exact flags, and concrete directory paths. No pseudocode or vague instructions. | 3 / 3 |
| Workflow Clarity | Steps are clearly sequenced and numbered, but the validation/feedback loop is weak—step 7 says 'fix any failures before continuing' but doesn't specify a re-run cycle or how to verify fixes pass. For a multi-step process with potentially destructive operations (pushing code), explicit re-validation after fixing is expected. | 2 / 3 |
| Progressive Disclosure | This is a simple, single-purpose skill under 50 lines. The content is well-organized with numbered steps and clear bash blocks. No need for external references or deeper structure. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
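Since the skill's endpoint is a push, the gate could also be enforced mechanically through git itself. A sketch of a pre-push guard, under the assumption that the gate is a local script (the path and function name are hypothetical; a real setup would call this from `.git/hooks/pre-push`):

```shell
# Hypothetical pre-push guard: run a local review-gate script and block
# the push when the script is missing or any of its checks fail.
run_gate() {
  gate="$1"                        # path to the gate script (assumed name)
  if [ ! -x "$gate" ]; then
    echo "gate script not found: push blocked" >&2
    return 1
  fi
  if "$gate"; then
    echo "gate passed: push allowed"
  else
    echo "gate failed: push blocked" >&2
    return 1
  fi
}
```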
Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure: 11 / 11 checks passed. No warnings or errors.