The skill can be installed with:

```shell
tessl i github:sickn33/antigravity-awesome-skills --skill code-reviewer
```

Its description reads:

> Elite code review expert specializing in modern AI-powered code analysis, security vulnerabilities, performance optimization, and production reliability. Masters static analysis tools, security scanning, and configuration review with 2024/2025 best practices. Use PROACTIVELY for code quality assurance.
Activation
Score: 50%

The description attempts to cover code review capabilities but relies heavily on marketing language ('elite', 'masters', '2024/2025 best practices') rather than concrete actions. It lacks explicit trigger guidance for when Claude should select this skill, and the broad scope creates potential conflicts with other development skills.
Suggestions
- Replace vague qualifiers with specific actions: instead of 'masters static analysis tools', list concrete capabilities such as 'runs ESLint, detects SQL injection, identifies race conditions'.
- Add explicit trigger guidance: 'Use when user asks to review code, check for security issues, optimize performance, review a PR, or mentions code quality'.
- Include natural user terms and file types: 'code review, PR review, security audit, .js, .py, .ts files, pull request'.
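Taken together, the suggestions above point toward a description like the following (hypothetical wording, sketched as SKILL.md frontmatter, not text from the skill under review):

```yaml
---
name: code-reviewer
description: >
  Reviews code for security vulnerabilities (SQL injection, XSS, hardcoded
  secrets), performance issues (N+1 queries, race conditions), and reliability
  risks. Runs static analysis such as ESLint or Semgrep and returns a
  structured review report. Use when the user asks to review code, review a
  PR, audit security, or mentions code quality; applies to .js, .ts, .py, and
  similar source files.
---
```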
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names domain (code review, security, performance) and some actions (static analysis, security scanning, configuration review), but uses vague qualifiers like 'elite' and 'masters' rather than listing concrete specific actions like 'detect SQL injection vulnerabilities' or 'identify memory leaks'. | 2 / 3 |
| Completeness | Has a 'what' (code review, security scanning, etc.), but the 'when' clause ('Use PROACTIVELY for code quality assurance') is vague and doesn't provide explicit triggers. It doesn't specify when users would invoke this skill or what phrases or scenarios should activate it. | 2 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'code review', 'security vulnerabilities', 'performance optimization', but is missing common user phrases like 'review my code', 'check for bugs', 'PR review', 'code quality', and file extensions. 'AI-powered code analysis' is more marketing than natural user language. | 2 / 3 |
| Distinctiveness / Conflict Risk | Could overlap with general coding skills, security-specific skills, or performance optimization skills. Terms like 'code analysis' and 'performance optimization' are broad enough to conflict with other development-related skills. | 2 / 3 |
| Total | | 8 / 12 (Passed) |
Implementation
Score: 20%

This skill is essentially a persona description masquerading as actionable guidance. It exhaustively lists what a code reviewer should know and do, but provides zero concrete examples, commands, or executable code. The content would be more appropriate as a job description than a skill file that teaches Claude how to perform specific tasks.
Suggestions
- Replace capability lists with 2-3 concrete code review examples showing actual code input and structured review output format.
- Add executable commands for the static analysis tools mentioned (e.g., run `semgrep --config=auto .` and show the expected output).
- Create a specific review checklist with actionable items rather than abstract categories like 'Assess security implications'.
- Move the extensive capability lists to a separate reference file and keep SKILL.md focused on the core review workflow with concrete examples.
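To illustrate the first suggestion, here is the kind of input/fix pair a skill file could embed (hypothetical code written for this review, not taken from the skill itself): the reviewer flags SQL built by string interpolation and shows the parameterized fix.

```python
import sqlite3

# Pattern the review should flag: user input interpolated directly into SQL.
def find_user_unsafe(conn, name):
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

# Suggested fix: parameterized query; the driver handles escaping.
def find_user_safe(conn, name):
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# A classic injection payload matches every row in the unsafe version,
# but is treated as an inert literal by the parameterized one.
print(find_user_unsafe(conn, "x' OR '1'='1"))  # [(1,)] -- injection succeeds
print(find_user_safe(conn, "x' OR '1'='1"))    # [] -- payload is inert
```

A skill written this way gives Claude a concrete finding to pattern-match against, rather than the abstract instruction 'assess security implications'.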
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose, with extensive lists of capabilities, tools, and concepts Claude already knows. The content reads like a marketing document rather than actionable instructions, with massive padding explaining what code review is and listing every possible tool and technique. | 1 / 3 |
| Actionability | No concrete code examples, commands, or executable guidance. The entire skill is abstract descriptions and capability lists without any specific instructions on HOW to perform code reviews. 'Example Interactions' are just prompts, not actual examples with outputs. | 1 / 3 |
| Workflow Clarity | The 'Response Approach' section provides a 10-step sequence, but steps are vague ('Apply automated tools', 'Conduct manual review') without validation checkpoints or specific actions. No feedback loops for error recovery in what should be a multi-step review process. | 2 / 3 |
| Progressive Disclosure | References 'resources/implementation-playbook.md' for detailed examples, which is good progressive disclosure. However, the main content is a monolithic wall of capability lists that should be split into separate reference files, and the reference is buried in the instructions. | 2 / 3 |
| Total | | 6 / 12 (Passed) |
Validation
Score: 75%

| Criteria | Description | Result |
|---|---|---|
| `description_trigger_hint` | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
| `metadata_version` | `metadata.version` is missing | Warning |
| `license_field` | `license` field is missing | Warning |
| `body_output_format` | No obvious output/return/format terms detected; consider specifying expected outputs | Warning |
| Total | 12 / 16 | Passed |
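The three metadata warnings could be cleared with a few frontmatter fields along these lines (hypothetical values; use the repository's actual license and version), while the `body_output_format` warning calls for the skill body itself to state the expected review output format:

```yaml
---
name: code-reviewer
license: MIT        # hypothetical; substitute the repository's actual license
metadata:
  version: 1.0.0    # hypothetical version number
---
```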