
review-code

Multi-dimensional code review with structured reports. Analyzes correctness, readability, performance, security, testing, and architecture. Triggers on "review code", "code review", "审查代码", "代码审查".

Quality: 67%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run

Security (by Snyk): Passed
No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./.claude/skills/review-code/SKILL.md

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong, well-crafted description that concisely communicates what the skill does (multi-dimensional code review producing structured reports across six named dimensions) and when to use it (explicit trigger phrases in both English and Chinese). It uses third-person voice correctly and avoids vague language or buzzwords.

Specificity: 3/3
Lists multiple specific, concrete dimensions of analysis: correctness, readability, performance, security, testing, and architecture. Also names 'structured reports' as a concrete output format.

Completeness: 3/3
Clearly answers 'what' (multi-dimensional code review with structured reports across six dimensions) and 'when' (explicit triggers on specific phrases). The 'Triggers on' clause serves as an explicit 'Use when' equivalent.

Trigger Term Quality: 3/3
Includes natural English trigger terms ('review code', 'code review') and Chinese equivalents ('审查代码', '代码审查'), covering common variations users would naturally say.

Distinctiveness / Conflict Risk: 3/3
The description carves out a clear niche: multi-dimensional structured code review. The specific trigger terms and the emphasis on structured reports with named dimensions make it unlikely to conflict with general coding or documentation skills.

Total: 12 / 12 (Passed)

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides a well-structured framework for multi-dimensional code review but suffers from significant verbosity and redundancy—the same information (phases, dimensions, references) is repeated in multiple formats (ASCII diagrams, tables, flow charts). The actual actionable content is almost entirely delegated to external files that aren't provided, leaving the SKILL.md as an over-elaborate table of contents. The bilingual approach (Chinese + English) doubles token usage without clear benefit.

Suggestions

Eliminate redundant representations: choose ONE format for the execution flow (either the ASCII diagram OR the text flow chart, not both) and remove the duplicate reference tables.

Inline the essential review logic from the referenced spec files (review-dimensions.md, issue-classification.md) directly into the SKILL.md as concise checklists, since the bundle files don't exist to support the references.

Remove explanations of concepts Claude already knows (what code review dimensions mean, what severity levels are) and replace with only the project-specific standards or thresholds that differ from common practice.

Add a concrete example of a review finding (input code snippet → identified issue → structured output) to make the skill actionable without requiring external files.
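The last suggestion can be illustrated with a minimal sketch of what such a structured finding might look like. The field names and schema below are hypothetical, not the skill's actual report format:

```javascript
// Hypothetical structured review finding: input snippet -> identified
// issue -> structured output. All field names are illustrative.
const finding = {
  dimension: "security",
  severity: "high",
  location: "src/auth.js:42", // illustrative path, not from the skill
  snippet: 'const query = "SELECT * FROM users WHERE id = " + userId;',
  issue: "SQL built by string concatenation allows injection",
  recommendation: "Use a parameterized query instead",
};

// A structured report is then just a serialization of such findings.
console.log(JSON.stringify(finding, null, 2));
```

Inlining one such worked example would let the skill produce useful output even when the referenced spec files are absent.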

Conciseness: 1/3
Extremely verbose with significant redundancy. The architecture diagram, execution flow, and mandatory prerequisites all repeat the same phase information. The ASCII-art diagrams, bilingual text (Chinese + English), and multiple tables conveying overlapping information (review dimensions, issue severity) bloat the content significantly. Much of this (what code review dimensions are, what severity levels mean) is knowledge Claude already has.

Actionability: 2/3
The directory setup provides executable JavaScript/Bash code, and the execution flow describes concrete phases. However, the actual review logic is entirely delegated to referenced files (specs/, phases/, templates/) that are not provided. The skill itself contains no executable review code or concrete examples of how to analyze code or produce findings; it is mostly structural scaffolding pointing elsewhere.

Workflow Clarity: 2/3
The execution flow is clearly sequenced with named phases (collect-context → quick-scan → deep-review → generate-report → complete), and there is a mandatory prerequisite gate. However, there are no validation checkpoints or feedback loops between steps: no guidance on what to do if the quick scan finds nothing, if a dimension review fails, or how to verify report quality before completing.

Progressive Disclosure: 2/3
The skill references many external files (specs/, phases/, templates/) with clear purpose descriptions, which is good progressive-disclosure structure. However, no bundle files are provided, making all references unverifiable. The main SKILL.md itself is also too heavy: it includes redundant architecture diagrams and tables that should either live in the referenced files or be significantly condensed. The reference-documents table at the end largely duplicates the mandatory-prerequisites table.

Total: 7 / 12 (Passed)
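The missing checkpoints called out under Workflow Clarity can be sketched as an early-exit guard in the phase sequence. The phase names come from the review above; the `runPhase` contract and the checkpoint logic are assumptions for illustration, not the skill's actual implementation:

```javascript
// Phase names as listed in the review; everything else is illustrative.
const phases = [
  "collect-context",
  "quick-scan",
  "deep-review",
  "generate-report",
  "complete",
];

// runPhase(phase, priorResults) is a hypothetical callback that executes
// one phase and returns its result, e.g. { findings: [...] }.
function runPipeline(runPhase) {
  const results = {};
  for (const phase of phases) {
    const result = runPhase(phase, results);
    results[phase] = result;
    // Checkpoint the review asks for: if the quick scan finds nothing,
    // skip the expensive deep review instead of proceeding blindly.
    if (phase === "quick-scan" && (result.findings || []).length === 0) {
      results.skipped = "quick-scan found no issues; deep review skipped";
      return results;
    }
  }
  return results;
}
```

Similar guards could verify report completeness before the `complete` phase, giving each step a defined failure path.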

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

allowed_tools_field: Warning
'allowed-tools' contains unusual tool name(s)

Total: 10 / 11 (Passed)
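For context, the flagged 'allowed-tools' field lives in the SKILL.md YAML frontmatter. A minimal sketch of a conventional frontmatter, assuming the standard Claude Code skill format; the tool names listed are illustrative placeholders, not the skill's actual values:

```yaml
---
name: review-code
description: Multi-dimensional code review with structured reports. ...
# The validator warned that this list contains unusual tool name(s).
# These entries are common built-in tool names, shown as placeholders:
allowed-tools: Read, Grep, Glob, Bash
---
```

Restricting the list to recognized tool names would clear the warning.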

Repository: catlog22/Claude-Code-Workflow (Reviewed)

