# Skill Review: code-reviewer

Skill description under review:

> Elite code review expert specializing in modern AI-powered code

## Summary

- Quality: 3% — does it follow best practices?
- Impact: 96% — 1.12x average score across 3 eval scenarios
- Validation: Passed — no known issues

Optimize this skill with Tessl: `npx tessl skill review --optimize ./skills/code-reviewer/SKILL.md`

## Quality

### Discovery
Score: 0% — Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is extremely weak across all dimensions. It reads more like a marketing tagline than a functional skill description, relying on buzzwords ('elite', 'modern AI-powered') instead of concrete actions and explicit trigger conditions. It provides almost no useful information for Claude to determine when to select this skill over others.
Suggestions:

- Replace vague language with specific concrete actions, e.g., 'Reviews code for bugs, security vulnerabilities, performance issues, and style violations. Suggests refactors and improvements.'
- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks for a code review, PR review, pull request feedback, or wants code quality analysis.'
- Remove marketing fluff like 'Elite' and 'modern AI-powered' and instead specify what types of code or languages are covered to create a distinct niche.
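Putting those suggestions together, a stronger description might look like the following sketch. The wording and field layout here are illustrative, not taken from the skill itself:

```yaml
# Hypothetical SKILL.md frontmatter with a sharpened description.
# The text below is an illustrative sketch, not the skill's actual content.
name: code-reviewer
description: >
  Reviews code for bugs, security vulnerabilities, performance issues,
  and style violations, and suggests refactors and concrete fixes.
  Use when the user asks for a code review, PR review, pull request
  feedback, or code quality analysis.
```

A description like this names concrete actions (the 'what') and lists natural trigger phrases (the 'when'), which addresses the Specificity, Completeness, and Trigger Term Quality dimensions below.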
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague, buzzword-heavy language ('Elite code review expert', 'modern AI-powered code') without listing any concrete actions like 'reviews pull requests', 'identifies bugs', or 'suggests refactors'. | 1 / 3 |
| Completeness | The 'what' is extremely vague (no specific actions listed) and there is no 'when' clause or explicit trigger guidance whatsoever. | 1 / 3 |
| Trigger Term Quality | 'Code review' is a relevant keyword, but 'elite' and 'modern AI-powered code' are not natural terms users would say. Missing common variations like 'PR review', 'pull request', 'review my code', 'code feedback'. | 1 / 3 |
| Distinctiveness / Conflict Risk | 'Code review' is broad and could overlap with many coding-related skills. The phrase 'AI-powered code' adds confusion rather than clarity about the skill's niche. | 1 / 3 |
| Total | | 4 / 12 — Passed |
### Implementation

Score: 7% — Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads like a persona description or marketing document rather than an actionable skill file. It extensively lists capabilities, tools, and behavioral traits that Claude already knows, consuming enormous token budget without providing any concrete, executable guidance. The content needs a complete rewrite focused on specific, actionable review workflows with concrete code examples and validation steps.
Suggestions:

- Replace the extensive capability lists with a concise checklist of specific review steps Claude should follow, including concrete commands for running static analysis tools (e.g., `npx eslint --fix .`, `semgrep --config auto`).
- Add concrete code review examples showing input code and expected review output format, including severity levels, specific feedback comments, and suggested fixes.
- Define a clear, validated workflow with explicit checkpoints, e.g., '1. Run linter → 2. Check output for errors → 3. Run security scan → 4. If vulnerabilities found, categorize by severity → 5. Generate structured review comments.'
- Move the detailed tool lists, language-specific expertise, and knowledge base sections into the referenced `resources/implementation-playbook.md` file, keeping only the essential review workflow in SKILL.md.
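As a sketch of what such a checkpointed workflow section could look like inside SKILL.md — the tool choices (eslint, semgrep) and severity scheme are assumptions carried over from the suggestions above, not requirements of the skill:

```markdown
## Review workflow

1. Run the linter: `npx eslint .` — if it exits non-zero, capture the
   reported errors before continuing.
2. Run a security scan: `semgrep --config auto .`
3. If vulnerabilities are found, categorize each finding by severity
   (critical / high / medium / low).
4. For every finding, write a review comment that includes: file and
   line, severity, what is wrong, and a concrete suggested fix.
5. Re-run both tools after fixes to confirm the findings are resolved.
```

Each numbered step has an observable checkpoint (exit code, scan output, re-run result), which is what the Workflow Clarity dimension below found missing.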
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose and padded with extensive lists of capabilities, tools, and concepts that Claude already knows. The 'Capabilities' section alone spans hundreds of lines listing things like 'OWASP Top 10 vulnerability detection,' 'Clean Code principles and SOLID pattern adherence,' and 'JavaScript/TypeScript modern patterns' — all knowledge Claude already possesses. Almost no token earns its place. | 1 / 3 |
| Actionability | Contains zero executable code, no concrete commands, no specific examples of how to actually perform a code review. The content is entirely descriptive — listing capabilities and behavioral traits rather than providing actionable instructions. The 'Response Approach' is a vague 10-step list with no concrete guidance on what to actually do at each step. | 1 / 3 |
| Workflow Clarity | The 'Response Approach' lists 10 steps but they are vague and lack any validation checkpoints, feedback loops, or concrete sequencing. Steps like 'Apply automated tools' and 'Conduct manual review' provide no specifics on what tools, what commands, or how to validate results. No error recovery or verification steps are present. | 1 / 3 |
| Progressive Disclosure | There is a single reference to 'resources/implementation-playbook.md' for detailed examples, which is a reasonable attempt at progressive disclosure. However, the main file itself is a monolithic wall of text with extensive inline content that should be split into separate reference files. The massive capabilities lists could be externalized. | 2 / 3 |
| Total | | 5 / 12 — Passed |
### Validation

Score: 90% — Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
10 / 11 checks passed.

Validation for skill structure:

| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 — Passed |
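To clear the `frontmatter_unknown_keys` warning, unrecognized top-level keys can be moved under `metadata`, as the check itself suggests. A minimal sketch, assuming a hypothetical unknown key named `model` — the report does not identify the actual offending key:

```yaml
# Before: `model` is not a recognized top-level frontmatter key
# (hypothetical example; the real key name is not shown in the report)
name: code-reviewer
model: claude-sonnet

# After: unknown keys moved under `metadata`
name: code-reviewer
metadata:
  model: claude-sonnet
```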