Agent skill for reviewer - invoke with $agent-reviewer
Score: 13%. Does it follow best practices?
Impact: 81% (1.15× average score across 3 eval scenarios)
Passed: no known issues
Optimize this skill with Tessl:
`npx tessl skill review --optimize ./.agents/skills/agent-reviewer/SKILL.md`

Quality
Discovery
0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an extremely weak description that provides virtually no useful information for skill selection. It fails on every dimension—it doesn't explain what the skill does, when to use it, or what domain it operates in. The description reads more like a label than a functional description.
Suggestions
Describe the specific actions this skill performs (e.g., 'Reviews pull requests for code quality, checks for bugs, suggests improvements, and validates adherence to coding standards').
Add an explicit 'Use when...' clause with natural trigger terms (e.g., 'Use when the user asks for a code review, PR feedback, or wants someone to check their changes').
Specify the domain clearly to reduce conflict risk (e.g., is this for code review, document review, design review?) and include file types or contexts that distinguish it from other skills.
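Taken together, the suggestions above might yield frontmatter along these lines. This is a hypothetical sketch, not the reviewed skill's actual metadata, and the field names assume the common SKILL.md frontmatter format:

```yaml
# Hypothetical SKILL.md frontmatter (illustrative only).
name: agent-reviewer
description: >
  Reviews pull requests and code changes for bugs, security issues,
  and adherence to project coding standards, and produces structured
  review feedback. Use when the user asks for a code review, PR
  feedback, or wants their changes checked before merging.
```

A description in this shape answers both "what does this do" and "when should it be used", and the concrete trigger phrases ("code review", "PR feedback") give an agent natural terms to match against user requests.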
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description contains no concrete actions whatsoever. 'Agent skill for reviewer' is extremely vague—it doesn't describe what the skill actually does (code review? document review? PR review?). | 1 / 3 |
| Completeness | Neither 'what does this do' nor 'when should Claude use it' is answered. There is no 'Use when...' clause and no description of capabilities. | 1 / 3 |
| Trigger Term Quality | The only potentially relevant keyword is 'reviewer', which is generic. There are no natural terms a user would say when needing this skill. The invocation command '$agent-reviewer' is technical jargon, not a user trigger term. | 1 / 3 |
| Distinctiveness / Conflict Risk | 'Reviewer' is extremely generic and could conflict with any skill related to code review, document review, PR review, or any other review-related task. There is nothing to distinguish this skill from others. | 1 / 3 |
| Total | | 4 / 12 Passed |
Implementation
27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads like a generic code review textbook rather than a targeted agent instruction. It spends most of its token budget explaining concepts Claude already knows (SOLID, DRY, SQL injection, N+1 queries) instead of providing project-specific review criteria, decision frameworks, or integration workflows. The MCP tool integration section adds some value but is buried under hundreds of lines of redundant content.
Suggestions
Remove all generic programming knowledge (SOLID, DRY, SQL injection examples, N+1 query patterns) that Claude already knows, and focus only on project-specific review standards, thresholds, and decision criteria.
Split detailed examples into separate reference files (e.g., SECURITY_PATTERNS.md, REVIEW_TEMPLATE.md) and keep SKILL.md as a concise overview with links.
Add a clear decision workflow: when to approve, when to request changes, when to escalate critical issues, with explicit validation checkpoints between steps.
Make the MCP tool integration examples use correct syntax and integrate them into the review workflow steps rather than listing them separately at the end.
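A decision workflow of the kind suggested above could be sketched as a section in SKILL.md. The steps, labels, and escalation rule below are illustrative assumptions, not content from the reviewed skill:

```markdown
## Review decision workflow (illustrative sketch)

1. Run the automated checks first; if any fail, report them and stop.
2. Review the diff against the project-specific checklist.
3. Decide the outcome:
   - **Approve** when no blocking issues remain.
   - **Request changes** when fixable issues are found; list each one
     with file, line, and a suggested fix.
   - **Escalate** when a critical security issue is found; do not
     approve, and flag the issue to a maintainer immediately.
4. Validate: confirm every reported issue includes a concrete next step.
```

Keeping the workflow this short, with explicit decision points and a final validation step, addresses the missing checkpoints noted under Workflow Clarity without re-teaching general review knowledge.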
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~200+ lines. Explains basic concepts Claude already knows well (SOLID principles, DRY, SQL injection, N+1 queries, dependency injection). The security checklist, performance checks, and code quality examples are all standard knowledge that don't need to be taught. Most of the content is a textbook on code review rather than project-specific instructions. | 1 / 3 |
| Actionability | Contains concrete code examples (SQL injection fix, N+1 query optimization, naming improvements) and a review feedback template, which are somewhat useful. However, much of it is generic best-practice advice rather than executable, project-specific guidance. The MCP tool integration section provides concrete tool calls but uses pseudo-JavaScript syntax that isn't directly executable. | 2 / 3 |
| Workflow Clarity | The review process is broken into 5 numbered steps (functionality, security, performance, code quality, maintainability) which provides some sequence. However, there are no validation checkpoints, no feedback loops for when issues are found, and no clear decision points about when to approve vs. request changes. The automated checks section is disconnected from the main workflow. | 2 / 3 |
| Progressive Disclosure | Monolithic wall of text with everything inline. All examples, checklists, guidelines, and MCP integration are crammed into a single file with no references to external documents. The security checklist, performance patterns, and code quality examples could each be separate reference files, keeping the main skill lean. | 1 / 3 |
| Total | | 6 / 12 Passed |
Validation
100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.