
code-review-ai-ai-review

You are an expert AI-powered code review specialist combining automated static analysis, intelligent pattern recognition, and modern DevOps practices. Leverage AI tools (GitHub Copilot, Qodo, GPT-5, C

36

Quality: 21% (does it follow best practices?)

Impact: Pending (no eval scenarios have been run)

Security (by Snyk): Advisory (suggest reviewing before use)

Optimize this skill with Tessl:

npx tessl skill review --optimize ./skills/code-review-ai-ai-review/SKILL.md

Quality

Discovery: 14%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is truncated mid-sentence, rendering it incomplete and unusable for skill selection. Even the visible portion relies on vague buzzwords ('intelligent pattern recognition', 'modern DevOps practices') and tool name-dropping rather than describing concrete actions. It lacks any 'Use when...' clause and would likely conflict with many other development-related skills.

Suggestions

Complete the truncated description and add a clear 'Use when...' clause with specific triggers like 'review my code', 'pull request review', 'find bugs', 'code quality check'.

Replace abstract buzzwords with concrete actions such as 'Reviews pull requests for bugs, security vulnerabilities, and style issues; suggests code improvements; checks for common anti-patterns'.

Narrow the scope to distinguish it from other development skills: specify whether this is for PR reviews, static analysis, security audits, or general code quality, rather than claiming all of them.
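A description rewritten along these lines might look like the following SKILL.md frontmatter. The wording below is an illustrative sketch, not text from the skill itself:

```yaml
---
name: code-review-ai-ai-review
description: >
  Reviews pull requests for bugs, security vulnerabilities, and style
  issues, and suggests concrete fixes. Use when the user asks to "review
  my code", "review this pull request", "find bugs", or run a "code
  quality check".
---
```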

Dimension / Reasoning / Score

Specificity (1 / 3)

The description mentions 'automated static analysis, intelligent pattern recognition, and modern DevOps practices', but these are abstract buzzwords rather than concrete actions. No specific actions like 'review pull requests', 'detect bugs', or 'suggest fixes' are listed.

Completeness (1 / 3)

The description is truncated and incomplete. It partially addresses 'what' with vague capability claims but completely lacks any 'when should Claude use it' guidance. There is no 'Use when...' clause or equivalent trigger guidance.

Trigger Term Quality (2 / 3)

Contains some relevant keywords like 'code review', 'static analysis', 'GitHub Copilot', and 'DevOps', but the description appears truncated and relies heavily on tool names and jargon rather than natural user phrases like 'review my code' or 'check for bugs'.

Distinctiveness / Conflict Risk (1 / 3)

The description is extremely broad, covering 'code review', 'static analysis', 'pattern recognition', and 'DevOps practices', which could overlap with many other coding, testing, CI/CD, or development skills. The truncation makes it even harder to distinguish.

Total: 5 / 12 (Passed)

Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is a massive, verbose document that tries to be a comprehensive reference manual rather than a focused, actionable skill. It explains many concepts Claude already knows (OWASP Top 10, SOLID principles, common anti-patterns), includes semi-functional code examples with undefined methods, and fails to leverage progressive disclosure by inlining everything. The content would benefit enormously from being split into focused sub-files with the main SKILL.md serving as a concise overview.

Suggestions

Reduce the main SKILL.md to ~50-80 lines covering the core workflow and quick-start, moving detailed sections (security detection, performance review, architecture analysis, CI/CD integration) into separate referenced files like `resources/security-review.md`, `resources/performance-review.md`, etc.

Remove explanations of concepts Claude already knows: OWASP Top 10 descriptions, SOLID principle definitions, what N+1 queries are, and common anti-pattern descriptions. Instead, just reference them as checklist items.

Make code examples truly executable by implementing missing methods (e.g., `get_pr_diff()`, `to_github_comment()`, `detectsSharedDatabase()`) or remove incomplete implementations in favor of focused, working snippets.

Add explicit validation checkpoints and error recovery steps to the workflow, such as what to do when static analysis tools fail, how to handle AI false positives, and how to escalate when automated review is insufficient.
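One way to make the `get_pr_diff()` example concrete: a minimal sketch that shells out to the GitHub CLI and extracts changed file paths from the unified diff. It assumes the `gh` CLI is installed and authenticated; the helper names are illustrative, not taken from the skill.

```python
import subprocess

def get_pr_diff(pr_number: int) -> str:
    """Fetch a pull request's unified diff via the GitHub CLI."""
    result = subprocess.run(
        ["gh", "pr", "diff", str(pr_number)],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def changed_files(diff_text: str) -> list[str]:
    """Extract the paths touched by a unified diff."""
    files = []
    for line in diff_text.splitlines():
        # New-file side of each hunk header carries the path as '+++ b/<path>'.
        if line.startswith("+++ b/"):
            files.append(line[len("+++ b/"):])
    return files
```

Even a small, genuinely runnable pair like this lets a reader wire the triage step into a real workflow, which is the bar the review is asking the skill's examples to meet.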

Dimension / Reasoning / Score

Conciseness (1 / 3)

Extremely verbose at ~400+ lines. Explains concepts Claude already knows (OWASP Top 10 list, SOLID principles, what N+1 queries are). Includes massive code blocks that are illustrative rather than executable, and much content that could be in referenced files. The summary section restates what was already covered.

Actionability (2 / 3)

Contains concrete code examples in multiple languages (Python, TypeScript, Go, YAML, bash), but many are pseudo-implementations with undefined methods (e.g., `get_pr_diff()`, `to_github_comment()`, `detectsSharedDatabase()`). The GitHub Actions workflow is relatively complete, but the orchestrator class is incomplete and not truly executable.

Workflow Clarity (2 / 3)

The 'Automated Code Review Workflow' section provides a clear sequence (triage → static analysis → AI review → routing), but lacks explicit validation checkpoints and error recovery steps. There's no feedback loop for when AI review produces false positives or when static analysis tools fail. The CI/CD section has a quality gate but no guidance on handling failures beyond exit 1.

Progressive Disclosure (1 / 3)

Monolithic wall of content with everything inlined. References `resources/implementation-playbook.md` once but dumps hundreds of lines of architecture analysis, security detection, performance review, CI/CD integration, and a complete example all in the main file. This content desperately needs to be split into separate referenced files.

Total: 6 / 12 (Passed)
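The failure-handling gap called out above could be closed with a quality gate that separates tool failures from genuine review failures, rather than a bare exit 1. The severity field names and the 'needs human review' exit code below are assumptions for illustration, not part of the skill:

```python
def quality_gate(findings, tool_errors, max_critical=0):
    """Return a CI exit code: fail on critical findings, but treat tool
    failures as a separate case that escalates to a human reviewer."""
    if tool_errors:
        # A tool failure is an infrastructure problem, not a code problem:
        # escalate rather than silently passing or hard-failing the build.
        print(f"Static analysis incomplete ({len(tool_errors)} tool error(s)); "
              "requesting manual review.")
        return 78  # hypothetical 'needs human review' exit code
    critical = [f for f in findings if f.get("severity") == "critical"]
    if len(critical) > max_critical:
        for f in critical:
            print(f"CRITICAL: {f['file']}:{f['line']}: {f['message']}")
        return 1
    return 0
```

Routing tool failures to a distinct exit code lets the pipeline request a human review instead of blocking merges on flaky tooling.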

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys (Warning)

Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 10 / 11 (Passed)

Repository: sickn33/antigravity-awesome-skills (Reviewed)

