tessl i github:sickn33/antigravity-awesome-skills --skill code-review-ai-ai-review

Skill description (as published): "You are an expert AI-powered code review specialist combining automated static analysis, intelligent pattern recognition, and modern DevOps practices. Leverage AI tools (GitHub Copilot, Qodo, GPT-5, C"
Activation
Score: 0%

This description is severely incomplete (appears truncated mid-sentence) and fails on all dimensions. It relies on buzzwords and tool names rather than concrete capabilities, lacks any 'Use when...' clause, and provides no natural trigger terms that would help Claude select this skill appropriately.
Suggestions
- Complete the truncated description and add a clear 'Use when...' clause with trigger terms like 'review my code', 'PR review', 'check for bugs', 'code quality'
- Replace vague phrases like 'intelligent pattern recognition' with specific actions such as 'identifies security vulnerabilities, suggests refactoring opportunities, checks coding standards compliance'
- Remove or minimize tool name-dropping (GitHub Copilot, GPT-5) and focus on what the skill actually accomplishes for the user
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague, abstract language like 'expert AI-powered code review specialist' and 'intelligent pattern recognition' without listing any concrete actions. It mentions tool names but not what the skill actually does. | 1 / 3 |
| Completeness | The description appears truncated and incomplete. It fails to answer 'what does this do' with concrete actions and completely lacks any 'when to use' guidance or trigger conditions. | 1 / 3 |
| Trigger Term Quality | Contains technical jargon ('static analysis', 'DevOps practices') and tool names (GitHub Copilot, Qodo, GPT-5) but lacks natural keywords users would say like 'review my code', 'check for bugs', or 'PR review'. | 1 / 3 |
| Distinctiveness / Conflict Risk | Very generic 'code review' framing could conflict with many other coding-related skills. No clear niche or distinct triggers are established to differentiate it. | 1 / 3 |
| Total | | 4 / 12 Passed |
Implementation
Score: 50%

This skill provides highly actionable, executable code examples across multiple languages and tools, demonstrating strong technical depth. However, it severely violates token efficiency by explaining concepts Claude already knows (security fundamentals, what tools do, basic DevOps concepts) and includes excessive redundant examples. The content would benefit from aggressive trimming and splitting into focused reference files.
Suggestions
- Remove explanatory content about well-known concepts (OWASP descriptions, what SonarQube/CodeQL do, basic security principles); Claude already knows these
- Split the content into focused files: move CI/CD examples to CI_CD.md, the complete orchestrator to ORCHESTRATOR.md, and security patterns to SECURITY.md
- Add explicit validation checkpoints: what to do when static analysis fails, how to verify AI review accuracy, and error recovery for API failures (a sketch of this checkpoint pattern follows this list)
- Reduce code examples to one canonical implementation rather than showing similar patterns in Python, TypeScript, Go, and JavaScript
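As a concrete reading of the checkpoint suggestion above, here is a minimal sketch of what an error-handled static-analysis step could look like. It is illustrative only: the `run_static_analysis` signature, the `AnalysisResult` container, the `review_gate` checkpoint, and the semgrep invocation are all assumptions rather than the skill's actual code.

```python
import json
import subprocess
from dataclasses import dataclass, field


@dataclass
class AnalysisResult:
    """Outcome of one static-analysis run (hypothetical container, not from the skill)."""
    tool: str
    ok: bool
    findings: list = field(default_factory=list)
    error: str | None = None


def run_static_analysis(path: str, timeout: int = 300) -> AnalysisResult:
    """Run a single static-analysis tool and surface failures instead of ignoring them."""
    cmd = ["semgrep", "--config", "auto", "--json", path]  # example tool; swap for whatever the skill uses
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    except FileNotFoundError:
        return AnalysisResult("semgrep", ok=False, error="tool not installed")
    except subprocess.TimeoutExpired:
        return AnalysisResult("semgrep", ok=False, error=f"timed out after {timeout}s")

    if proc.returncode > 1:  # many analyzers reserve exit codes above 1 for real errors
        return AnalysisResult("semgrep", ok=False, error=proc.stderr.strip() or f"exit code {proc.returncode}")

    try:
        findings = json.loads(proc.stdout).get("results", [])
    except json.JSONDecodeError:
        return AnalysisResult("semgrep", ok=False, error="unparseable tool output")

    return AnalysisResult("semgrep", ok=True, findings=findings)


def review_gate(result: AnalysisResult) -> AnalysisResult:
    """Validation checkpoint: refuse to feed a failed analysis into the AI review step."""
    if not result.ok:
        raise RuntimeError(f"{result.tool} failed ({result.error}); fix the tooling before requesting an AI review")
    return result
```

The same gating idea extends to the AI review call itself: retry with backoff on API failures, and spot-check a sample of AI findings against the static-analysis output before trusting them.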
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at 400+ lines, with extensive explanations of concepts Claude already knows (what the OWASP Top 10 is, basic security concepts, how CI/CD works). Contains redundant examples across multiple languages showing the same patterns, and includes marketing-style summaries that add no instructional value. | 1 / 3 |
| Actionability | Provides fully executable code examples in Python, TypeScript, Go, JavaScript, YAML, and bash. Code snippets are copy-paste ready, with concrete implementations for review orchestration, CI/CD integration, and analysis workflows. | 3 / 3 |
| Workflow Clarity | Contains clear workflow sections (Initial Triage, Multi-Tool Static Analysis, AI-Assisted Review) but lacks explicit validation checkpoints and error-recovery steps. The 'run_static_analysis' function has no error-handling guidance, and there is no feedback loop for when AI review produces incorrect results. | 2 / 3 |
| Progressive Disclosure | References 'resources/implementation-playbook.md' for detailed examples, but the main file is a monolithic wall of content that should be split. The OWASP section, CI/CD examples, and complete orchestrator code could each be separate reference files. Structure exists, but the content is not appropriately distributed. | 2 / 3 |
| Total | | 8 / 12 Passed |
Validation
Score: 81%

| Criteria | Description | Result |
|---|---|---|
| description_trigger_hint | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| license_field | 'license' field is missing | Warning |
| Total | 13 / 16 Passed | |
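All three warnings above point at the skill's frontmatter. A small pre-publish check along the following lines could catch them locally; this is a hypothetical sketch that assumes a '---'-delimited YAML frontmatter block and field names matching the validator output ('description', 'metadata', 'license'), which may not match the registry's actual schema.

```python
import re
import sys

import yaml  # PyYAML, assumed to be installed


def check_frontmatter(skill_md: str) -> list[str]:
    """Return a list of warnings mirroring the validation checks above."""
    warnings: list[str] = []
    match = re.match(r"^---\n(.*?)\n---", skill_md, re.DOTALL)
    if not match:
        return ["no YAML frontmatter block found"]
    meta = yaml.safe_load(match.group(1)) or {}

    if "use when" not in str(meta.get("description", "")).lower():
        warnings.append("description has no explicit 'Use when...' trigger hint")
    if not isinstance(meta.get("metadata"), dict):
        warnings.append("'metadata' field is missing or not a dictionary")
    if not meta.get("license"):
        warnings.append("'license' field is missing")
    return warnings


if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8") as fh:
        issues = check_frontmatter(fh.read())
    print("\n".join(issues) if issues else "frontmatter checks passed")
```

Wiring a check like this into CI would keep the published listing from regressing on the same warnings.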