Conduct high-quality, persona-driven code reviews. Use when reviewing PRs, critiquing code quality, or analyzing changes for team feedback.
Overall score: 80

- Quality: 77% — Does it follow best practices?
- Impact: Pending — No eval scenarios have been run
- Validation: Passed — No known issues
Optimize this skill with Tessl: `npx tessl skill review --optimize ./.github/skills/common/common-code-review/SKILL.md`

## Quality
### Discovery — 82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a solid description that clearly communicates both what the skill does and when to use it, with good trigger terms. Its main weaknesses are the somewhat vague 'persona-driven' qualifier that isn't explained, and the lack of more specific concrete actions beyond the general 'reviewing/critiquing/analyzing' verbs. It could also be more distinctive to avoid overlap with general code analysis skills.
**Suggestions**

- Elaborate on what 'persona-driven' means concretely (e.g., 'simulates reviewers with different expertise levels' or 'adopts senior engineer, security specialist, or performance expert personas').
- Add more specific actions to increase specificity, such as 'identifies bugs, suggests refactors, checks naming conventions, flags security concerns'.
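Applying both suggestions, a revised frontmatter description might look like the following sketch. The persona names and action list are illustrative, not taken from the skill itself:

```yaml
# Hypothetical revision of the skill's frontmatter description
description: >
  Conduct code reviews by adopting reviewer personas (senior engineer,
  security specialist, performance expert). Use when reviewing PRs or
  preparing team feedback: identifies bugs, suggests refactors, checks
  naming conventions, and flags security concerns.
```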
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (code reviews) and mentions some actions like 'reviewing PRs', 'critiquing code quality', and 'analyzing changes', but doesn't list specific concrete actions like checking for security issues, suggesting refactors, or enforcing style guides. 'Persona-driven' is somewhat vague without elaboration. | 2 / 3 |
| Completeness | Clearly answers both 'what' (conduct persona-driven code reviews) and 'when' (reviewing PRs, critiquing code quality, or analyzing changes for team feedback) with an explicit 'Use when...' clause. | 3 / 3 |
| Trigger Term Quality | Includes strong natural trigger terms users would say: 'code reviews', 'PRs', 'code quality', 'changes', 'team feedback'. These cover common variations of how users would request code review assistance. | 3 / 3 |
| Distinctiveness / Conflict Risk | While 'code reviews' and 'PRs' are fairly specific, the phrase 'critiquing code quality' could overlap with general code analysis or linting skills. The 'persona-driven' aspect adds some distinctiveness but isn't well-defined enough to fully differentiate it. | 2 / 3 |
| Total | | 10 / 12 — Passed |
### Implementation — 72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured, concise skill that effectively defines a code review persona with clear severity classifications and a mandatory checklist. Its main weakness is the lack of a concrete, worked example showing a complete review comment (input diff → output review), which would significantly boost actionability. The workflow could also benefit from an explicit step-by-step process for conducting the review.
**Suggestions**

- Add a concrete worked example showing an actual code diff snippet and the corresponding review output using the [SEVERITY] format, to make the output format fully actionable.
- Add an explicit review workflow sequence (e.g., 1. Read full diff for context → 2. Run through checklist per file → 3. Categorize and format findings → 4. Verify all checklist items addressed) to improve workflow clarity.
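As a sketch of what such a worked example could look like: the exact template lives in output-format.md, which isn't shown here, so the [SEVERITY] layout, the file reference, and the Fix line below are assumptions:

```markdown
**[MAJOR]** `auth/login.py:42` — user-supplied `next` URL is redirected to
without validation, enabling open-redirect attacks.
Fix: validate `next` against an allowlist of relative paths before redirecting.
```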
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Every line earns its place. No unnecessary explanations of what code review is or how it works. Assumes Claude's competence as a principal engineer and focuses only on the specific behavioral constraints and output format. | 3 / 3 |
| Actionability | The output format template and severity labels are concrete and actionable, and the checklist provides specific items to check. However, the 'Fix' line in the output format is described abstractly rather than shown with a real example (e.g., an actual code review comment with before/after code), which would make it more copy-paste ready. | 2 / 3 |
| Workflow Clarity | The checklist provides a clear sequence of what to review, and the output format defines how to present findings. However, there's no explicit workflow for the review process itself (e.g., read diff → run checklist → categorize findings → format output), and no validation step to verify completeness of the review. | 2 / 3 |
| Progressive Disclosure | Clean overview with well-signaled one-level-deep references to checklist.md and output-format.md. The main skill file stays concise while pointing to detailed materials for the full checklist and output templates. | 3 / 3 |
| Total | | 10 / 12 — Passed |
### Validation — 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

**Validation for skill structure** — 9 / 11 Passed
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata.version' is missing | Warning |
| metadata_field | 'metadata' should map string keys to string values | Warning |
| Total | | 9 / 11 Passed |
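Both warnings point at the frontmatter's metadata block. A minimal fix, assuming the field names implied by the warnings (the surrounding keys are illustrative):

```yaml
# SKILL.md frontmatter sketch — adds the missing metadata.version
# and keeps every metadata value a plain string
metadata:
  version: "1.0.0"
  author: "platform-team"   # illustrative; any string-to-string pairs are valid
```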