
jbvc/code-reviewer

Elite code review expert specializing in modern AI-powered code analysis, security vulnerabilities, performance optimization, and production reliability. Masters static analysis tools, security scanning, and configuration review with 2024/2025 best practices. Use PROACTIVELY for code quality assurance.


Quality

41%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues


Quality

Discovery

42%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description suffers from buzzword-heavy language ('Elite', 'Masters', '2024/2025 best practices') that adds no discriminative value. While it names several relevant domains, it lacks concrete actions and specific trigger scenarios that would help Claude reliably select this skill over others. The overly broad scope increases conflict risk with more specialized skills.

Suggestions

Replace vague fluff ('Elite', 'Masters', '2024/2025 best practices') with specific concrete actions like 'Identifies security vulnerabilities (SQL injection, XSS), flags performance bottlenecks, checks error handling patterns, and reviews configuration files for misconfigurations.'

Add an explicit 'Use when...' clause with natural trigger terms: 'Use when the user asks for a code review, PR review, pull request feedback, security audit, or wants to check code quality before merging.'

Narrow the scope or clearly delineate boundaries to reduce conflict risk — e.g., specify whether this is for reviewing existing code vs. writing new code, and which languages/frameworks it targets.
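Putting these suggestions together, a rewritten description might look like the following. This is a hypothetical sketch assuming the skill's metadata lives in YAML frontmatter (as in a typical SKILL.md); the exact wording and field names are illustrative, not the author's.

```yaml
---
name: code-reviewer
description: >
  Reviews existing code for defects: identifies security vulnerabilities
  (SQL injection, XSS, hardcoded secrets), flags performance bottlenecks,
  checks error-handling patterns, and reviews configuration files for
  misconfigurations. Use when the user asks for a code review, PR review,
  pull request feedback, security audit, or a quality check before merging.
---
```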

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (code review) and some areas (security vulnerabilities, performance optimization, production reliability, static analysis tools, security scanning, configuration review), but these are broad categories rather than concrete actions. No specific verbs like 'detects X', 'flags Y', or 'generates Z reports'. | 2 / 3 |
| Completeness | The 'what' is partially addressed (code review, security, performance), but the 'when' clause ('Use PROACTIVELY for code quality assurance') is extremely vague and unhelpful — it doesn't specify concrete trigger scenarios like 'when the user asks for a code review' or 'when reviewing pull requests'. This caps completeness at 2. | 2 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'code review', 'security vulnerabilities', 'performance optimization', 'static analysis', and 'security scanning'. However, it misses natural user phrases like 'review my code', 'PR review', 'pull request', 'code quality', 'lint', 'bugs', or file type triggers. 'Elite' and 'Masters' are fluff, not trigger terms. | 2 / 3 |
| Distinctiveness / Conflict Risk | The description is very broad, covering code review, security, performance, and reliability — areas that could easily overlap with dedicated security scanning skills, performance profiling skills, or general coding assistance skills. The lack of a clear niche makes conflict with other skills highly likely. | 1 / 3 |
| Total | | 7 / 12 |

Passed

Implementation

7%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads as a persona description or capability catalog rather than an actionable skill file. It is overwhelmingly verbose, listing dozens of tools, techniques, and domains without providing any concrete code examples, specific commands, output formats, or validation steps. The content would need a fundamental restructuring to be useful as a skill—replacing the extensive capability lists with concise, executable review workflows and concrete examples.

Suggestions

Replace the extensive capability lists with 2-3 concrete code review workflow examples showing specific commands (e.g., running semgrep, interpreting output, generating review comments) with expected input/output.

Add a structured review checklist with specific validation steps and feedback loops, e.g., 'Run `semgrep --config auto .` → if findings > 0, categorize by severity → for each critical finding, provide fix with code example'.

Remove the 'Capabilities', 'Behavioral Traits', 'Knowledge Base', and 'Example Interactions' sections entirely—these describe what Claude already knows and waste token budget. Focus on project-specific patterns, tool configurations, and output formats.

Define a concrete output format for code review results (e.g., a structured markdown template with severity levels, file locations, suggested fixes) so Claude knows exactly what to produce.
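As a sketch of what such a workflow and output format could look like, the snippet below runs semgrep, groups findings by severity, and renders a markdown review comment. The JSON field names (`results`, `path`, `start.line`, `extra.severity`, `extra.message`) follow semgrep's `--json` output; the severity ordering and the markdown template are illustrative choices, not part of the skill as written.

```python
import json
import subprocess
from collections import defaultdict

# semgrep's severity levels, ordered most to least critical
SEVERITY_ORDER = ["ERROR", "WARNING", "INFO"]

def run_semgrep(path="."):
    """Run semgrep and return its parsed findings (assumes semgrep is installed)."""
    proc = subprocess.run(
        ["semgrep", "--config", "auto", "--json", path],
        capture_output=True, text=True,
    )
    return json.loads(proc.stdout).get("results", [])

def render_review(results):
    """Group findings by severity and render a markdown review comment."""
    by_severity = defaultdict(list)
    for r in results:
        by_severity[r["extra"]["severity"]].append(r)
    lines = ["## Code Review Findings"]
    for sev in SEVERITY_ORDER:
        findings = by_severity.get(sev, [])
        if not findings:
            continue
        lines.append(f"\n### {sev} ({len(findings)})")
        for f in findings:
            loc = f"{f['path']}:{f['start']['line']}"
            lines.append(f"- `{loc}`: {f['extra']['message']}")
    if len(lines) == 1:
        lines.append("\nNo findings: ready to merge.")
    return "\n".join(lines)
```

A skill file could embed the template and severity rules directly, so the agent produces the same structured output on every review rather than improvising a format.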

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose and padded with extensive lists of capabilities, tools, and concepts that Claude already knows. The bulk of the content is taxonomic listing of domains (security, performance, configuration, etc.) rather than actionable instructions. Most of this is general knowledge Claude possesses without needing it spelled out. | 1 / 3 |
| Actionability | No concrete code examples, no executable commands, no specific tool invocations, and no copy-paste ready guidance. The content describes capabilities and behavioral traits abstractly but never shows Claude how to actually perform a code review with specific steps, commands, or output formats. | 1 / 3 |
| Workflow Clarity | The 'Response Approach' section lists 10 high-level steps but they are vague and lack validation checkpoints. There are no feedback loops, no error recovery steps, and no concrete verification mechanisms. For a skill involving security scanning and destructive/batch operations, this is insufficient. | 1 / 3 |
| Progressive Disclosure | There is a reference to 'resources/implementation-playbook.md' for detailed examples, which shows some awareness of progressive disclosure. However, the main file is a monolithic wall of categorized lists that could be significantly restructured, and only one external reference is provided with no clear navigation to specific topics. | 2 / 3 |
| Total | | 5 / 12 |

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| metadata_version | 'metadata.version' is missing | Warning |
| Total | | 10 / 11 |

Passed

Reviewed
