Elite code review expert specializing in modern AI-powered code
Install with Tessl CLI
npx tessl i github:sickn33/antigravity-awesome-skills --skill code-reviewer36
Quality: 3% (Does it follow best practices?)
Impact: 96%
1.12x average score across 3 eval scenarios
Optimize this skill with Tessl
npx tessl skill review --optimize ./skills/code-reviewer/SKILL.md

Discovery
0%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is severely underdeveloped, reading more like a marketing tagline than a functional skill description. It lacks concrete actions, natural trigger terms, explicit usage guidance, and any distinguishing characteristics that would help Claude select it appropriately from a pool of skills.
Suggestions
- Replace vague 'Elite code review expert' with specific actions like 'Reviews code for bugs, security vulnerabilities, performance issues, and style violations'
- Add a 'Use when...' clause with natural trigger terms such as 'review my code', 'check this PR', 'pull request feedback', 'code quality', or 'find bugs in my code'
- Specify what makes this distinct from general coding skills, e.g., a focus on specific languages, frameworks, or review methodologies
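Applied together, these suggestions might produce frontmatter along the following lines. This is an illustrative sketch only; the wording, trigger phrases, and `name` value are not taken from the skill itself:

```yaml
name: code-reviewer
description: >
  Reviews code for bugs, security vulnerabilities, performance issues,
  and style violations, producing line-level comments with suggested fixes.
  Use when the user asks to "review my code", "check this PR", give
  "pull request feedback", or "find bugs in my code".
```

A description shaped like this answers both 'what does this do' and 'when should Claude use it', which directly targets the Specificity and Completeness dimensions scored below.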
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague language like 'Elite code review expert' and 'specializing in' without listing any concrete actions. No specific capabilities like 'identifies bugs', 'suggests refactors', or 'checks style' are mentioned. | 1 / 3 |
| Completeness | The description fails to answer both 'what does this do' (no concrete actions) and 'when should Claude use it' (no 'Use when...' clause or trigger guidance). Both components are very weak or missing. | 1 / 3 |
| Trigger Term Quality | Contains only generic terms 'code review' and 'AI-powered code', which are overly broad. Missing natural user phrases like 'review my PR', 'check this code', 'code feedback', or 'pull request'. | 1 / 3 |
| Distinctiveness / Conflict Risk | Extremely generic: 'code review' and 'AI-powered code' could conflict with numerous other coding-related skills. No distinct niche or specific triggers to differentiate it. | 1 / 3 |
| Total | | 4 / 12 Passed |
Implementation
7%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill content is essentially a persona description or capability manifest rather than actionable instructions. It extensively lists what a code reviewer should know and do but provides zero concrete guidance on how to actually perform code reviews. The content would benefit from being replaced with specific workflows, tool commands, and code examples.
Suggestions
- Replace capability lists with concrete, executable examples showing actual code review patterns (e.g., specific SonarQube commands, example review comments, actual security vulnerability detection code)
- Add a clear workflow with validation steps, e.g., '1. Run `semgrep --config=auto .` 2. Check output for HIGH severity 3. For each finding, verify with manual review'
- Remove sections that describe what Claude already knows (OWASP Top 10, SOLID principles, etc.) and focus only on project-specific patterns or tool configurations
- Move the extensive capability lists to a reference file and keep SKILL.md focused on the essential quick-start workflow for performing a code review
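The triage step in the suggested workflow above could be scripted rather than left abstract. A minimal sketch, assuming Semgrep's documented JSON report shape (`results[].extra.severity`); the rule IDs and file in the sample report are made up for illustration:

```python
import json

def high_severity_findings(semgrep_json: str):
    """Filter a Semgrep JSON report down to ERROR-severity findings."""
    report = json.loads(semgrep_json)
    return [
        {"rule": r["check_id"], "path": r["path"], "line": r["start"]["line"]}
        for r in report.get("results", [])
        if r.get("extra", {}).get("severity") == "ERROR"
    ]

# Hypothetical report shaped like `semgrep --config=auto --json .` output
sample = json.dumps({"results": [
    {"check_id": "python.lang.security.audit.eval", "path": "app.py",
     "start": {"line": 12}, "extra": {"severity": "ERROR"}},
    {"check_id": "python.lang.style.print", "path": "app.py",
     "start": {"line": 3}, "extra": {"severity": "INFO"}},
]})
print(high_severity_findings(sample))
```

Each finding that survives the filter would then go to the manual-review step, which is the kind of concrete, verifiable loop the Workflow Clarity dimension below is asking for.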
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose, with extensive lists of capabilities, tools, and concepts that Claude already knows. The content reads like a marketing document rather than actionable instructions, with massive enumeration of technologies and practices that adds no instructional value. | 1 / 3 |
| Actionability | No concrete code examples, commands, or executable guidance. The entire skill consists of abstract capability lists and vague behavioral descriptions like 'Analyze code context' without any specific steps, tool invocations, or code snippets to follow. | 1 / 3 |
| Workflow Clarity | The 'Response Approach' section lists 10 high-level steps but provides no concrete validation checkpoints, no specific commands, and no feedback loops. Steps like 'Apply automated tools' give no indication of which tools to run or how to use them. | 1 / 3 |
| Progressive Disclosure | References `resources/implementation-playbook.md` for detailed examples, which is appropriate progressive disclosure. However, the main content is a monolithic wall of capability lists that should be restructured or moved to reference files. | 2 / 3 |
| Total | | 5 / 12 Passed |
Validation
90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 10 / 11 Passed | |