
review-code

Multi-dimensional code review with structured reports. Analyzes correctness, readability, performance, security, testing, and architecture. Triggers on "review code", "code review", "审查代码", "代码审查".


Quality: 67%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run.

Security (by Snyk): Passed
No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./.claude/skills/review-code/SKILL.md

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong, well-crafted description that concisely communicates what the skill does (multi-dimensional code review across six specific dimensions with structured reports) and when to use it (explicit trigger terms in both English and Chinese). It uses third-person voice appropriately and avoids vague language or buzzwords.

Specificity: 3 / 3
Lists six specific, concrete dimensions of analysis (correctness, readability, performance, security, testing, and architecture) and names 'structured reports' as a concrete output format.

Completeness: 3 / 3
Clearly answers 'what' (multi-dimensional code review with structured reports analyzing six dimensions) and 'when' (explicit triggers on specific phrases). The 'Triggers on' clause serves as an explicit 'Use when' equivalent.

Trigger Term Quality: 3 / 3
Includes natural English trigger terms ('review code', 'code review') and their Chinese equivalents ('审查代码', '代码审查'), covering common variations users would naturally say.

Distinctiveness / Conflict Risk: 3 / 3
The description carves out a clear niche: multi-dimensional code review with structured reports. The specific trigger terms ('review code', 'code review') and the enumerated dimensions make it distinct from general coding assistance or single-purpose analysis skills.

Total: 12 / 12 (Passed)

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is overly verbose and repetitive, presenting the same workflow information in multiple formats (architecture diagram, execution flow, tables) without adding incremental value. While the structure and progressive disclosure to external files is reasonable, the SKILL.md itself contains too much redundant scaffolding and too little actionable, executable guidance. The actual review logic is entirely deferred to 12+ external documents, making this more of an index page than a functional skill.

Suggestions

Eliminate redundant representations of the workflow — pick either the architecture diagram OR the execution flow block, not both, and remove the duplicate phase descriptions.

Remove inline definitions of review dimensions and severity levels that Claude already understands, or keep them to a minimal one-line-per-item reference table.

Add at least one concrete example of a review finding (input code snippet → identified issue → structured output) so Claude knows exactly what format to produce.

Add a validation checkpoint after report generation (e.g., verify all dimensions are covered, verify severity counts match findings) to create a feedback loop for quality assurance.
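The last two suggestions could be sketched together in a single snippet. This is a hypothetical illustration only: the finding shape, severity names, and `validateReport` helper are assumptions for the sake of the example, not part of the skill's actual spec or templates.

```javascript
// Hypothetical structured review finding, as the suggestion describes:
// input code snippet -> identified issue -> structured output.
const findings = [
  {
    dimension: "security",
    severity: "high",
    file: "src/auth.js", // illustrative path, not from the skill
    line: 42,
    snippet: 'const query = "SELECT * FROM users WHERE id = " + userId;',
    issue: "SQL injection: user input concatenated into a query string",
    suggestion: "Use a parameterized query instead of string concatenation",
  },
];

// Hypothetical validation checkpoint: after report generation, verify that
// the report's severity counts actually match the collected findings.
function validateReport(report, findings) {
  const counts = {};
  for (const f of findings) counts[f.severity] = (counts[f.severity] || 0) + 1;
  const mismatches = Object.keys({ ...counts, ...report.severityCounts })
    .filter((sev) => (counts[sev] || 0) !== (report.severityCounts[sev] || 0));
  return { ok: mismatches.length === 0, mismatches };
}

const report = { severityCounts: { high: 1 } };
console.log(validateReport(report, findings).ok); // true
```

A checkpoint like this gives the skill a feedback loop: if the report drifts out of sync with the findings, the mismatched severities are surfaced instead of silently shipping an inconsistent report.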

Conciseness: 1 / 3
Extremely verbose with redundant information. The architecture diagram, execution flow, and mandatory prerequisites all repeat the same phase sequence. Review dimensions and severity levels are described both inline and referenced to external specs. The bilingual (Chinese/English) approach doubles many labels without adding value for Claude. Much of this content (what code review dimensions are, what severity levels mean) is knowledge Claude already has.

Actionability: 2 / 3
The directory setup has executable JavaScript/Bash code, and the execution flow describes concrete phases. However, the actual review logic is entirely deferred to external reference documents (specs/, phases/, templates/). The skill itself contains no executable review code, no example of how to analyze a file, and no concrete example of a finding or report output.

Workflow Clarity: 2 / 3
The multi-step process is clearly sequenced (Phase 0 → collect-context → quick-scan → deep-review → generate-report → complete), but there are no validation checkpoints or feedback loops. For a review process that generates structured reports, there is no verification step to ensure findings are complete or reports are well-formed. The actual implementation details are all in external files.

Progressive Disclosure: 2 / 3
References to external files are well-organized in tables with clear purposes, which is good. However, the SKILL.md itself is bloated with content that should either be in the referenced files or omitted entirely (e.g., the full architecture diagram, the repeated execution flow). The reference documents section lists 12 external files, some of which are two levels deep (phases/actions/), making navigation complex.

Total: 7 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 10 / 11 passed

Validation for skill structure

allowed_tools_field: Warning
'allowed-tools' contains unusual tool name(s).

Total: 10 / 11 (Passed)

Repository: catlog22/Claude-Code-Workflow (Reviewed)
