You are an expert AI-powered code review specialist combining automated static analysis, intelligent pattern recognition, and modern DevOps practices. Leverage AI tools (GitHub Copilot, Qodo, GPT-5, C
Install with Tessl CLI
npx tessl i github:sickn33/antigravity-awesome-skills --skill code-review-ai-ai-review50
Quality: 24% — Does it follow best practices?
Impact: 94% (1.14x average score across 3 eval scenarios)

Optimize this skill with Tessl:

npx tessl skill review --optimize ./skills/code-review-ai-ai-review/SKILL.md

Discovery
22% — Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is severely incomplete (it appears truncated mid-sentence) and relies heavily on buzzwords and marketing language rather than concrete capabilities. It fails to specify what actions the skill performs or when Claude should select it, making it ineffective for skill selection among multiple options.
Suggestions
Complete the truncated description and add a 'Use when...' clause with specific triggers like 'reviewing pull requests', 'checking code quality', 'finding bugs'
Replace vague phrases like 'intelligent pattern recognition' with concrete actions such as 'identifies security vulnerabilities, suggests refactoring opportunities, checks coding standards'
Remove marketing language ('expert AI-powered specialist') and use third-person action verbs describing actual capabilities
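Applying these suggestions, a revised frontmatter description might look like the following sketch (the wording is illustrative, not the skill's actual frontmatter):

```yaml
description: >
  Reviews code changes for security vulnerabilities, performance issues,
  and coding-standard violations, combining static analysis output with
  AI-assisted suggestions. Use when reviewing pull requests, checking
  code quality before merge, or finding bugs in a diff.
```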
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague, abstract language like 'expert AI-powered code review specialist' and 'intelligent pattern recognition' without listing any concrete actions. It mentions tool names but not what specific tasks the skill performs. | 1 / 3 |
| Completeness | The description appears truncated and incomplete. It only vaguely addresses 'what' through buzzwords and completely lacks any 'when to use' guidance or explicit triggers. | 1 / 3 |
| Trigger Term Quality | Contains some relevant keywords like 'code review', 'static analysis', 'GitHub Copilot', 'DevOps' that users might mention, but lacks common variations and natural phrases users would actually say when needing code review help. | 2 / 3 |
| Distinctiveness / Conflict Risk | While 'code review' provides some specificity, terms like 'AI-powered', 'pattern recognition', and 'DevOps practices' are generic enough to potentially overlap with many development-related skills. | 2 / 3 |
| Total | | 6 / 12 Passed |
Implementation
27% — Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is extremely comprehensive but violates token efficiency principles by including extensive explanations of concepts Claude already knows (OWASP, SOLID, basic security vulnerabilities). The content would benefit from aggressive trimming to essential project-specific configurations and being split into multiple focused files with clear navigation.
Suggestions
Remove explanatory content Claude already knows (OWASP descriptions, SOLID definitions, basic vulnerability explanations) and keep only project-specific rules and configurations
Split into separate files: SECURITY.md, PERFORMANCE.md, CI-CD.md, with SKILL.md as a concise overview pointing to each
Add explicit validation checkpoints in workflows (e.g., 'Verify static analysis completed before AI review', 'Check API rate limits before batch processing')
Make code examples complete and executable by including all imports and defining helper functions, or explicitly mark as pseudocode requiring adaptation
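As a sketch of the last suggestion, here is what a self-contained version of the skill's undefined `find_loops` helper could look like. The implementation below is hypothetical — the skill never defines the function, so this is one plausible completion that includes its imports and is runnable as-is:

```python
import ast


def find_loops(source: str) -> list[int]:
    """Return the line numbers of for/while loops in Python source.

    Hypothetical completion of the skill's undefined `find_loops`
    helper, written to be fully self-contained as the suggestion asks.
    """
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, (ast.For, ast.While))
    ]
```

For example, `find_loops("for i in range(3):\n    pass\n")` returns `[1]`. The point is not this particular analysis but that every example in the skill should either run verbatim like this or be explicitly marked as pseudocode.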
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at 400+ lines with extensive explanations Claude already knows (OWASP Top 10 descriptions, SOLID principles definitions, basic concepts like what SQL injection is). Contains redundant code examples and explanatory text that doesn't add actionable value. | 1 / 3 |
| Actionability | Contains concrete code examples that are mostly executable, but many are incomplete (missing imports, undefined functions like `find_loops`, `detectsSharedDatabase`). The examples demonstrate concepts rather than providing copy-paste ready solutions for actual use. | 2 / 3 |
| Workflow Clarity | Has a structured workflow (Initial Triage → Static Analysis → AI Review → Routing) but lacks explicit validation checkpoints and error recovery steps. The CI/CD example has a quality gate but no guidance on what to do when issues are found beyond failing the build. | 2 / 3 |
| Progressive Disclosure | Monolithic wall of text with everything inline. References `resources/implementation-playbook.md` but dumps all content in the main file anyway. No clear separation between quick-start and advanced content; architecture analysis, security detection, and performance review could all be separate files. | 1 / 3 |
| Total | | 6 / 12 Passed |
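The workflow-clarity and quality-gate criticisms can be made concrete with a short sketch. The function names and finding schema below are hypothetical, not taken from the skill; the sketch shows an explicit checkpoint (static analysis must have run before the AI review stage) and a gate that returns remediation pointers instead of only failing the build:

```python
def run_quality_gate(static_results, max_critical=0):
    """Gate a review stage on static analysis output.

    Returns (passed, report): `report` lists file:line locations of
    critical findings so the build log is actionable, not just red.
    """
    # Checkpoint: verify static analysis actually ran before allowing
    # the AI review stage to proceed (None means it never produced output).
    if static_results is None:
        raise RuntimeError("static analysis did not run; aborting review")

    critical = [f for f in static_results if f["severity"] == "critical"]
    if len(critical) > max_critical:
        # Actionable guidance rather than a bare failure.
        return False, [f"{f['file']}:{f['line']} {f['rule']}" for f in critical]
    return True, []
```

A CI step would call this between the static-analysis and AI-review stages and print the report on failure, which is the error-recovery guidance the reviewed skill currently omits.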
Validation
90% — Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |