Skill: agent-reviewer
Description under review: "Agent skill for reviewer - invoke with $agent-reviewer"
Quality: does it follow best practices?

## Impact

81% (1.15x average score across 3 eval scenarios). Passed; no known issues.
Optimize this skill with Tessl:

```shell
npx tessl skill review --optimize ./.agents/skills/agent-reviewer/SKILL.md
```

## Quality
### Discovery (0%)

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an extremely weak description that provides virtually no useful information for skill selection. It fails on every dimension: it names no concrete actions, includes no natural trigger terms, answers neither 'what' nor 'when', and is too generic to be distinguishable from other skills. It reads more like a label than a description.
Suggestions:

- Describe specific concrete actions the skill performs, e.g., 'Reviews pull requests for code quality, checks for bugs, suggests improvements, and validates adherence to coding standards.'
- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks for a code review, PR feedback, review of changes, or wants someone to check their code.'
- Remove the invocation syntax ('$agent-reviewer') from the description—it doesn't help Claude decide when to select this skill—and replace it with domain-specific keywords that distinguish this from other review-related skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description contains no concrete actions whatsoever. 'Agent skill for reviewer' is extremely vague—it doesn't describe what the skill actually does (code review? document review? PR review?). | 1 / 3 |
| Completeness | Neither 'what does this do' nor 'when should Claude use it' is answered. There is no 'Use when...' clause and no description of capabilities. | 1 / 3 |
| Trigger Term Quality | The only potentially relevant keyword is 'reviewer', which is generic. There are no natural terms a user would say when needing this skill. The invocation syntax '$agent-reviewer' is technical jargon, not a user trigger term. | 1 / 3 |
| Distinctiveness / Conflict Risk | 'Reviewer' is extremely generic and could conflict with any review-related skill (code review, PR review, document review, design review, etc.). There is nothing to distinguish this skill from others. | 1 / 3 |
| **Total** |  | **4 / 12** (Passed) |
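Taken together, the suggestions above could yield a description along these lines. This is a sketch, assuming the skill uses standard SKILL.md YAML frontmatter; the exact wording is illustrative, not the author's:

```yaml
---
name: agent-reviewer
description: >
  Reviews pull requests and code changes for bugs, security issues,
  performance problems, and adherence to coding standards, and suggests
  concrete fixes. Use when the user asks for a code review, PR feedback,
  a review of their changes, or wants someone to check their code.
---
```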
### Implementation (14%)

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads like a generic code review tutorial rather than a focused agent instruction set. It spends most of its token budget explaining well-known programming concepts (SQL injection, N+1 queries, SOLID principles, DRY) that Claude already understands, while lacking a clear operational workflow for how the reviewer agent should actually conduct a review. The MCP tool integration section at the end contains the most skill-specific content but is buried under hundreds of lines of generic material.
Suggestions:

- Remove all generic code review knowledge (SOLID, DRY, SQL injection examples, etc.) that Claude already knows, and focus only on the specific review process, output format, and MCP coordination unique to this agent role.
- Define a clear sequential workflow: receive task → read relevant files → analyze → store findings via MCP → produce structured review output → validate findings are addressed.
- Split the review feedback format template and MCP integration examples into separate referenced files to reduce the main skill's token footprint.
- Add explicit validation checkpoints, e.g., 'Before submitting review, verify all critical issues have suggested fixes and are stored in memory coordination.'
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~250+ lines. Explains basic concepts Claude already knows (SOLID principles, DRY, SQL injection, N+1 queries, dependency injection). The security checklist, performance patterns, and code quality examples are all standard knowledge that don't need to be taught. Most of the content is a generic code review tutorial rather than skill-specific instructions. | 1 / 3 |
| Actionability | Provides concrete code examples for common issues and fixes, and includes a review feedback format template. However, much of it is illustrative rather than executable — the MCP tool integration examples use pseudo-JavaScript that isn't directly executable, and the skill lacks specific instructions on how to actually perform a review (e.g., which files to read, what tools to invoke, what output to produce). | 2 / 3 |
| Workflow Clarity | The numbered sections (1-5) describe review categories but not a clear sequential workflow. There's no explicit process for: receiving code to review, iterating on findings, validating fixes, or completing the review. No validation checkpoints or feedback loops for the review process itself. The 'Review Process' is really a list of review dimensions, not an actionable workflow. | 1 / 3 |
| Progressive Disclosure | Monolithic wall of text with no references to external files. All content — from basic examples to MCP integration — is inlined in a single massive document. Content like the detailed code examples, security checklist, and review feedback template could easily be split into separate reference files. | 1 / 3 |
| **Total** |  | **5 / 12** (Passed) |
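The sequential workflow and progressive disclosure suggested above could be sketched as a slimmed-down skill body. This is a hypothetical outline; the referenced file names are assumptions, not files that currently exist in the skill:

```markdown
## Review Process

1. Receive the review task and list the changed files.
2. Read each relevant file and analyze it against the checklists in
   [references/checklists.md](references/checklists.md).
3. Store each finding via the MCP memory coordination tools.
4. Write the review using the template in
   [references/feedback-format.md](references/feedback-format.md).
5. Before submitting, verify that every critical issue has a suggested
   fix and has been stored in memory coordination.
```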
### Validation (100%)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 11 / 11 checks passed. No warnings or errors.
Revision: 9d4a9ea