Automatic quality control, linting, and static analysis procedures. Use after every code modification to ensure syntax correctness and project standards. Triggers on keywords: lint, format, check, validate, types, static analysis.
Overall score: 89%

Impact: Pending (no eval scenarios have been run)
Quality: Passed (no known issues). Does it follow best practices?
Quality
Discovery
82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a reasonably well-constructed description that clearly communicates both purpose and trigger conditions. Its main strengths are the explicit trigger keyword list and clear 'when' guidance. Its weaknesses are the somewhat generic capability description (could list specific tools or actions) and potential overlap with other skills due to broad trigger terms like 'check' and 'validate'.
Suggestions
Add specific, concrete actions such as 'run linters, check TypeScript types, enforce formatting rules, detect unused imports' to improve specificity.
Narrow broad trigger terms like 'check' and 'validate' by qualifying them (e.g., 'code check', 'syntax validate') to reduce conflict risk with unrelated skills.
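Applied together, the two suggestions might yield a description along these lines (a hypothetical rewrite for illustration, not the skill's actual text):

```
Automatic quality control for source code. Runs linters, checks TypeScript
types, enforces formatting rules, and detects unused imports after every code
modification. Triggers on keywords: lint, format code, code check, syntax
validate, types, static analysis.
```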
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (quality control, linting, static analysis) and some actions (ensure syntax correctness, project standards), but doesn't list specific concrete actions like 'run ESLint', 'check TypeScript types', 'format with Prettier', etc. | 2 / 3 |
| Completeness | Clearly answers both 'what' (quality control, linting, static analysis for syntax correctness and project standards) and 'when' (after every code modification, plus explicit trigger keywords). The 'Use after every code modification' and 'Triggers on keywords' clauses serve as explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Explicitly lists natural trigger keywords users would say: 'lint', 'format', 'check', 'validate', 'types', 'static analysis'. These are terms users naturally use when requesting code quality checks. | 3 / 3 |
| Distinctiveness / Conflict Risk | While linting and static analysis are fairly specific, terms like 'check', 'validate', and 'format' are quite broad and could overlap with other skills (e.g., data validation, document formatting). The description could conflict with testing or CI/CD skills. | 2 / 3 |
| Total | | 10 / 12 (Passed) |
Implementation
92%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a strong, concise skill that provides actionable commands across multiple ecosystems with a clear validation loop and error-recovery steps. The main weakness is a slight inconsistency between the multi-ecosystem procedures section and the Node.js-specific Quality Loop; in addition, the scripts table references files without clarifying how they relate to the per-ecosystem commands above.
Suggestions
Make the Quality Loop ecosystem-agnostic or provide parallel examples for Python, since the current version only shows Node.js commands (npm run lint && npx tsc --noEmit).
Briefly clarify how the scripts in the table (lint_runner.py, type_coverage.py) relate to the per-ecosystem commands—e.g., whether they wrap those commands or provide additional functionality.
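The first suggestion can be sketched in Python. The command table below is illustrative only (marker files and tool invocations are assumptions, not the skill's actual configuration), and the loop is a minimal stand-in for the skill's fix-and-repeat cycle:

```python
import subprocess
from pathlib import Path

# Hypothetical marker-to-commands table; a real skill may support more
# ecosystems or different tools.
ECOSYSTEM_CHECKS = {
    "package.json": [["npm", "run", "lint"], ["npx", "tsc", "--noEmit"]],
    "pyproject.toml": [["ruff", "check", "."], ["mypy", "."]],
}

def quality_commands(project_dir="."):
    """Return the check commands for whichever ecosystem marker is present."""
    for marker, commands in ECOSYSTEM_CHECKS.items():
        if (Path(project_dir) / marker).exists():
            return commands
    return []  # no recognized ecosystem

def run_quality_loop(project_dir=".", max_rounds=3):
    """Run all checks; in the real skill, a failure would trigger fixes
    followed by a re-run (the 'Fix & Repeat' step)."""
    for _ in range(max_rounds):
        results = [subprocess.run(cmd, cwd=project_dir)
                   for cmd in quality_commands(project_dir)]
        if all(r.returncode == 0 for r in results):
            return True
    return False
```

Dispatching on a marker file keeps the loop itself ecosystem-agnostic, which directly addresses the Node.js-only wording flagged above.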
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is lean and efficient. Every section serves a purpose—no explanations of what linting is or why it matters. Commands are listed directly without preamble. | 3 / 3 |
| Actionability | Provides specific, copy-paste-ready commands for each ecosystem (npm run lint, ruff check, npx tsc --noEmit, etc.). The scripts table gives concrete commands with paths. No pseudocode or vague instructions. | 3 / 3 |
| Workflow Clarity | The Quality Loop section provides a clear sequence with an explicit validation checkpoint (step 3: Analyze Report) and a feedback loop (step 4: Fix & Repeat). Error handling covers failure scenarios with specific recovery actions. The strict rule that code cannot be submitted with failures acts as a gate. | 3 / 3 |
| Progressive Disclosure | The content is well-organized with clear sections (Procedures by Ecosystem, Quality Loop, Error Handling, Scripts), but the scripts table references external files (lint_runner.py, type_coverage.py) without explaining their relationship to the inline commands. The Quality Loop hardcodes Node.js commands rather than being ecosystem-agnostic, creating slight confusion about when to use which approach. | 2 / 3 |
| Total | | 11 / 12 (Passed) |
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| Total | | 10 / 11 (Passed) |
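The warning typically points at the skill's frontmatter. A minimal sketch of the field being flagged, assuming standard frontmatter layout (the tool names here are hypothetical, not taken from the reviewed skill):

```yaml
---
name: quality-control
description: Automatic quality control, linting, and static analysis procedures.
# Validators warn when a listed tool name is not one they recognize;
# "LintRunner" below is a made-up name that would trigger this warning.
allowed-tools: Bash, Read, Grep, LintRunner
---
```

Renaming or removing the unrecognized entry would clear the warning without affecting the other checks.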