Comprehensive code review skill for TypeScript, JavaScript, Python, Swift, Kotlin, Go. Includes automated code analysis, best practice checking, security scanning, and review checklist generation. Use when reviewing pull requests, providing code feedback, identifying issues, or ensuring code quality standards.
Quality: 49%. Does it follow best practices?
Impact: Pending. No eval scenarios have been run.
Validation: Passed. No known issues.
Optimize this skill with Tessl
npx tessl skill review --optimize ./.claude/skills/code-reviewer/SKILL.md

Quality
Discovery
92%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly communicates its purpose, lists concrete capabilities, and includes explicit trigger guidance. The main weakness is that its broad scope across multiple languages and overlapping concerns (security, best practices, quality) could create conflicts with more specialized skills in a large skill library. Overall, it follows best practices well with third-person voice and natural trigger terms.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'automated code analysis', 'best practice checking', 'security scanning', and 'review checklist generation'. Also specifies the supported languages explicitly. | 3 / 3 |
| Completeness | Clearly answers both 'what' (code analysis, best practice checking, security scanning, checklist generation for multiple languages) and 'when', with an explicit 'Use when...' clause covering pull requests, code feedback, identifying issues, and ensuring quality standards. | 3 / 3 |
| Trigger Term Quality | Includes strong natural trigger terms users would say: 'pull requests', 'code feedback', 'code review', 'code quality', 'security scanning'. Also lists specific languages (TypeScript, JavaScript, Python, Swift, Kotlin, Go) which users would naturally mention. | 3 / 3 |
| Distinctiveness / Conflict Risk | While it specifies code review as its niche, terms like 'code quality', 'best practice checking', and 'identifying issues' could overlap with linting skills, security-focused skills, or general coding assistance skills. The scope is broad across six languages, which increases potential conflict with language-specific skills. | 2 / 3 |
| Total | | 11 / 12 Passed |
Implementation
7%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is a generic template with placeholder content rather than a genuine code review skill. It lacks any specific, actionable code review guidance—no concrete review criteria, no example findings, no real script behavior descriptions, and no meaningful workflow. The content is padded with boilerplate phrases and truisms that waste tokens without teaching Claude anything it doesn't already know.
Suggestions
Replace generic feature bullet points with concrete examples: show a sample code snippet, the specific issue found, and the recommended fix for each language supported.
Document what each script actually does with real input/output examples—show sample command invocations with actual arguments and example output formats.
Add a concrete code review workflow with validation checkpoints: e.g., 1) Run static analysis, 2) Check output for severity levels, 3) Verify security findings, 4) Generate report with specific format.
Remove the generic best practices section ('Write clear code', 'Keep it simple') and the tech stack listing—these add no value. Replace with specific review heuristics or checklists that are unique to this skill.
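The first suggestion above can be sketched concretely. The snippet below is a hypothetical example of the kind of entry the skill file could include for its TypeScript section (the function names and the chosen issue are illustrative, not taken from the skill itself):

```typescript
// Hypothetical review-finding entry: code, issue, and fix in one place.
// Issue: loose equality (==) coerces types, so the string "0" compares
// equal to the number 0 and invalid input is silently accepted.

// Before: what the reviewer would flag.
function isFreeBefore(price: any): boolean {
  return price == 0; // "0", false, and [] all coerce to 0 here
}

// After: the recommended fix checks the type, then compares strictly.
function isFreeAfter(price: any): boolean {
  return typeof price === "number" && price === 0;
}

console.log(isFreeBefore("0")); // true  — the coercion bug
console.log(isFreeAfter("0"));  // false — string input rejected
console.log(isFreeAfter(0));    // true  — genuine zero accepted
```

Pairing each review rule with a before/after snippet like this gives the agent a copy-paste-ready pattern instead of the generic feature bullets criticized above.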
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose, with generic boilerplate content that adds no value. Feature lists like 'Automated scaffolding', 'Best practices built-in', 'Deep analysis', and 'Expert-level automation' are meaningless filler. The best practices section restates things Claude already knows ('Write clear code', 'Keep it simple'), and the tech stack listing is unnecessary padding. | 1 / 3 |
| Actionability | Despite referencing scripts, there are no concrete examples of actual code review guidance, no real command outputs, no specific review criteria, and no executable examples. The script invocations use vague placeholders like '[options]' and '[arguments]'. Nothing is copy-paste ready or demonstrates what the scripts actually do or produce. | 1 / 3 |
| Workflow Clarity | The 'Development Workflow' section lists generic steps (install, run, follow docs) with no validation checkpoints, no feedback loops, and no clear sequence for performing an actual code review. There is no guidance on what to do when issues are found, how to prioritize findings, or how to handle review iterations. | 1 / 3 |
| Progressive Disclosure | The skill does reference external files (references/code_review_checklist.md, references/coding_standards.md, references/common_antipatterns.md), which is an appropriate structure. However, the descriptions of what those files contain are vague and generic ('Detailed patterns and practices', 'Step-by-step processes'), making navigation unhelpful. The main file itself contains too much filler that should either be cut or moved. | 2 / 3 |
| Total | | 5 / 12 Passed |
Validation
100%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.