# Agent skill for reviewer (invoke with `$agent-reviewer`)
- Impact: 81%
- Evals: 1.15x average score across 3 eval scenarios
- Status: Passed, no known issues
Optimize this skill with Tessl:

```
npx tessl skill review --optimize ./.agents/skills/agent-reviewer/SKILL.md
```

## Quality
### Discovery: 0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an extremely weak description that fails on all dimensions. It provides no information about what the skill does, when it should be used, or what domain it operates in. It reads more like a label than a description and would be nearly useless for skill selection among multiple options.
#### Suggestions

- Describe the specific actions this skill performs (e.g., "Reviews pull requests for code quality, checks for bugs, suggests improvements, and validates adherence to coding standards").
- Add an explicit "Use when..." clause with natural trigger terms (e.g., "Use when the user asks for a code review, PR feedback, or wants to check code quality").
- Specify the domain clearly to distinguish this from other review-related skills (is this for code review, document review, or design review?).
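Applied together, these suggestions might produce frontmatter along the following lines. This is a sketch only: the wording is illustrative, and the field names assume the common `name`/`description` frontmatter convention for skill files.

```markdown
---
name: agent-reviewer
description: >
  Reviews pull requests and code changes for bugs, security issues,
  performance problems, and adherence to project coding standards.
  Use when the user asks for a code review, PR feedback, or wants
  code quality checked before merging.
---
```

A description in this shape answers both "what does this do" and "when should it be used", and carries natural trigger terms ("code review", "PR feedback") rather than invocation syntax.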
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description contains no concrete actions whatsoever. "Agent skill for reviewer" is extremely vague: it doesn't describe what the skill actually does (code review? document review? PR review?). | 1 / 3 |
| Completeness | Neither "what does this do" nor "when should Claude use it" is answered. There is no "Use when..." clause and no description of capabilities. | 1 / 3 |
| Trigger Term Quality | The only potentially relevant keyword is "reviewer", which is generic. There are no natural terms a user would say when needing this skill. The invocation syntax `$agent-reviewer` is technical jargon, not a user trigger term. | 1 / 3 |
| Distinctiveness / Conflict Risk | "Reviewer" is extremely generic and could conflict with any skill related to code review, document review, PR review, or any other review-related task. There is nothing to distinguish this skill from others. | 1 / 3 |
| **Total** | | **4 / 12 (Passed)** |
### Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads like a comprehensive code review tutorial rather than a concise skill file for Claude. It spends most of its token budget explaining concepts Claude already knows (SQL injection, N+1 queries, SOLID principles, DRY) with illustrative examples rather than providing novel, actionable instructions. The MCP tool integration section at the end is the most valuable part, but it's buried under extensive generic content.
#### Suggestions

- Remove all explanations of well-known concepts (SQL injection, N+1 queries, SOLID, DRY, dependency injection); Claude already knows these. Focus only on project-specific standards and the review output format.
- Extract the detailed code examples into a separate REVIEW_EXAMPLES.md file and reference it from the main skill, keeping SKILL.md as a concise overview with the review process and output format.
- Move the MCP tool integration section higher and make it the core of the skill; this is the novel, project-specific content that Claude actually needs.
- Add validation/feedback-loop steps: what happens after issues are found, how to verify fixes were applied, and criteria for approving vs. requesting changes.
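One way to act on these suggestions is a lean SKILL.md skeleton that front-loads the MCP integration and links out to reference material. The file name REVIEW_EXAMPLES.md and the section order below are illustrative, not prescriptive:

```markdown
# Code Review Skill

## MCP tool integration
<!-- Project-specific tool calls go first: this is the novel content. -->

## Review process
1. Run the review checks (worked examples live in REVIEW_EXAMPLES.md).
2. Report findings in the output format below.
3. After fixes land, re-check the flagged issues; approve only when all
   blocking findings are resolved, otherwise request changes.

## Output format
<!-- Concise feedback template kept inline. -->
```

Splitting the long examples into REVIEW_EXAMPLES.md is the progressive-disclosure fix, and step 3 supplies the missing validation/feedback loop.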
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~200+ lines. Explains basic concepts Claude already knows well (SOLID principles, DRY, SQL injection, N+1 queries, dependency injection). The security checklist, performance checks, and code quality examples are all standard knowledge that don't need to be taught. Most of this content is a generic code review tutorial rather than skill-specific instructions. | 1 / 3 |
| Actionability | Contains concrete code examples and a specific review feedback format template, which is useful. However, much of the content is illustrative rather than executable: the TypeScript examples show common patterns but aren't actionable instructions for performing a review. The MCP tool integration section provides specific tool calls, which adds some actionability. | 2 / 3 |
| Workflow Clarity | The review process is broken into 5 numbered steps (functionality, security, performance, code quality, maintainability), providing a clear sequence. However, there are no validation checkpoints or feedback loops: no guidance on what to do when issues are found during review, how to verify fixes, or when to escalate. The process is more of a checklist than a workflow with decision points. | 2 / 3 |
| Progressive Disclosure | Monolithic wall of text with everything inline. All examples, checklists, guidelines, and MCP integration are crammed into a single file with no references to external files. The extensive code examples for security, performance, and quality could easily be split into separate reference documents, keeping the main skill lean. | 1 / 3 |
| **Total** | | **6 / 12 (Passed)** |
### Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 11 / 11 checks passed. No warnings or errors.