# review-checklist

**Description:** Code review checklist - coordinates specialist reviewers for thorough analysis
**Score:** 54

- **Quality:** 42% (Does it follow best practices?)
- **Impact:** Pending (no eval scenarios have been run)
- **Risk:** Risky. Do not use without reviewing.
Optimize this skill with Tessl: `npx tessl skill review --optimize ./dot_config/opencode/skill/review-checklist/SKILL.md`

## Quality
### Discovery: 22%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is too vague to effectively guide skill selection. While 'code review' is a useful trigger term, the description fails to specify what concrete actions the skill performs (e.g., checking for security issues, style violations, performance problems) and lacks any explicit 'Use when...' guidance. The phrase 'coordinates specialist reviewers' is unclear about the actual mechanism or output.
#### Suggestions

- Add specific concrete actions the skill performs, e.g., 'Reviews code for security vulnerabilities, performance issues, style violations, and maintainability concerns using a structured checklist approach.'
- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks for a code review, PR review, pull request feedback, or wants their code checked for issues.'
- Clarify what 'coordinates specialist reviewers' means in practice — does it run multiple analysis passes? List the specialist areas (security, performance, readability, etc.) to improve distinctiveness.
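The first two suggestions can be combined into a single candidate description. A minimal sketch, assuming the skill uses standard YAML frontmatter in its SKILL.md; the wording below is illustrative only, drawn from the suggestions themselves, and the scratch file path is an assumption:

```shell
# Hypothetical draft: writes a candidate description to a scratch file.
# Nothing here is the skill's actual frontmatter.
cat > /tmp/skill-description-draft.md <<'EOF'
---
name: review-checklist
description: >-
  Reviews code for security vulnerabilities, performance issues, style
  violations, and maintainability concerns using a structured checklist.
  Use when the user asks for a code review, PR review, pull request
  feedback, or wants their code checked for issues.
---
EOF
# Quick check that the draft carries an explicit trigger clause.
grep -c 'Use when' /tmp/skill-description-draft.md   # → 1
```

A description like this names concrete actions and an explicit trigger, which addresses both the Specificity and Completeness findings below.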
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description says 'coordinates specialist reviewers for thorough analysis', which is vague. It doesn't list any concrete actions like 'checks for security vulnerabilities, identifies performance issues, validates coding standards.' The phrase 'thorough analysis' is abstract fluff. | 1 / 3 |
| Completeness | The 'what' is weak — 'coordinates specialist reviewers' is vague about what it actually does. There is no 'when' clause or explicit trigger guidance, which per the rubric should cap completeness at 2, but since the 'what' is also weak, this scores a 1. | 1 / 3 |
| Trigger Term Quality | 'Code review' is a natural keyword users would say, and 'checklist' is somewhat relevant. However, it's missing common variations like 'PR review', 'pull request', 'review my code', 'code quality', or 'lint'. | 2 / 3 |
| Distinctiveness / Conflict Risk | 'Code review checklist' is somewhat specific to a code review workflow, but 'coordinates specialist reviewers for thorough analysis' could overlap with general code analysis, linting, or other review-related skills. The lack of specificity about what kind of review increases conflict risk. | 2 / 3 |
| **Total** | | 6 / 12 (Passed) |
### Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a highly actionable and well-structured coordination skill with excellent workflow clarity and concrete executable guidance. Its primary weakness is extreme verbosity — at 300+ lines it consumes significant context window for what could be expressed more concisely. The content would benefit from splitting detailed sections (verification tables, output templates, dispatch rules) into referenced sub-files.
#### Suggestions

- Reduce verbosity by 40-50%: compress the verification pass table, dispatch signal table, and escalation handling into terser formats — Claude can infer coordination patterns from shorter instructions.
- Split the output format templates, dispatch strategy details, and verification pass into separate referenced files (e.g., REVIEW-OUTPUT.md, DISPATCH.md) to improve progressive disclosure and reduce the main file's token footprint.
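The file-splitting suggestion can be sketched as a small restructuring. The sub-file names come from the suggestion itself; the scratch-copy setup is an assumption so the commands are safe to run anywhere, and the stand-in SKILL.md is not the real 300+ line file:

```shell
# Sketch only: operates on a scratch copy, not the real skill directory.
skill_dir=$(mktemp -d)
printf '# review-checklist\n' > "$skill_dir/SKILL.md"   # stand-in for the real file
mkdir -p "$skill_dir/references"
# The heavyweight sections would move into referenced sub-files so SKILL.md stays lean:
touch "$skill_dir/references/REVIEW-OUTPUT.md"   # output format templates
touch "$skill_dir/references/DISPATCH.md"        # dispatch strategy and signal tables
ls "$skill_dir/references"
```

In the slimmed SKILL.md, each moved section would be replaced by a one-line pointer to its sub-file, the same progressive-disclosure pattern the review already credits the skill for using with its external skill and file references.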
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | This skill is extremely verbose at ~300+ lines. It includes extensive explanations of coordination logic, detailed tables, and lengthy process descriptions that could be significantly condensed. Many sections explain orchestration concepts that Claude can infer from shorter instructions. | 1 / 3 |
| Actionability | The skill provides highly concrete, executable guidance throughout: specific bash commands (gh pr checkout, git blame, rg), exact API calls (gh api repos/{owner}/{repo}/pulls/<PR_NUMBER>/comments), specific tool detection patterns, dispatch templates with exact payloads, and a complete output format with markdown examples. | 3 / 3 |
| Workflow Clarity | The multi-step workflow is clearly sequenced: PR checkout → context gathering → static analysis → dispatch classification → specialist spawning → escalation handling → merge/verify → output formatting. Validation checkpoints are explicit (verification pass with disprove-or-keep logic, QA execution, branch restoration). Feedback loops exist for escalation routing and error recovery. | 3 / 3 |
| Progressive Disclosure | The skill references external skills (review-correctness, review-completeness, etc.) and other files (context-log.md, AGENTS.md), which is good progressive disclosure. However, the main file itself is monolithic — the dispatch strategy, verification pass, output format, and coordinator checks could be split into referenced sub-documents. The inline content is very long for a single SKILL.md. | 2 / 3 |
| **Total** | | 9 / 12 (Passed) |
## Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

**11 / 11 checks passed.**
Validation for skill structure: no warnings or errors.
Skill revision: `03cec9d`