
code-review-excellence

Master effective code review practices to provide constructive feedback, catch bugs early, and foster knowledge sharing while maintaining team morale. Use when reviewing pull requests, establishing review standards, or mentoring developers.

Score: 65 (1.28x)

- Quality: 51% (Does it follow best practices?)
- Impact: 86%, 1.28x (average score across 3 eval scenarios)

Security (by Snyk): Passed, no known issues

Optimize this skill with Tessl:

`npx tessl skill review --optimize ./tests/ext_conformance/artifacts/agents-wshobson/developer-essentials/skills/code-review-excellence/SKILL.md`

Quality

Discovery: 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description has a solid structure with an explicit 'Use when' clause that clearly communicates both purpose and triggers. However, the capabilities listed lean toward aspirational outcomes (catch bugs, foster knowledge sharing, maintain morale) rather than concrete actions the skill performs. The trigger terms are reasonable but could be more comprehensive to cover natural user language variations.

Suggestions

Replace outcome-oriented phrases like 'catch bugs early' and 'foster knowledge sharing' with concrete actions such as 'analyze diffs for common bug patterns, suggest inline comments, generate review checklists'.

Expand trigger terms to include common variations like 'PR review', 'code feedback', 'review comments', 'approve changes', or 'review checklist'.
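Applying both suggestions, a revised description might look like the following sketch of SKILL.md frontmatter (the field names and exact wording are illustrative, not the skill's actual content):

```yaml
---
name: code-review-excellence
description: >
  Review pull requests and diffs: analyze changes for common bug patterns,
  suggest inline review comments with severity labels, and generate review
  checklists. Use when asked for a PR review, code feedback, review comments,
  a review checklist, or when establishing team review standards.
---
```

The outcome language ("catch bugs early", "foster knowledge sharing") is replaced with discrete operations, and the trigger terms now cover the common phrasings the eval flags as missing.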

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (code review) and some actions ('provide constructive feedback, catch bugs early, foster knowledge sharing'), but these are more like goals/outcomes than concrete specific actions. Compare to 'Extract text and tables from PDF files, fill forms, merge documents', which lists discrete operations. | 2 / 3 |
| Completeness | Clearly answers both 'what' (effective code review practices for constructive feedback, catching bugs, knowledge sharing) and 'when', with an explicit 'Use when reviewing pull requests, establishing review standards, or mentoring developers' clause. | 3 / 3 |
| Trigger Term Quality | Includes some natural keywords like 'pull requests', 'code review', 'review standards', and 'mentoring developers', but misses common variations users might say, such as 'PR review', 'code feedback', 'review comments', 'approve PR', or 'review checklist'. | 2 / 3 |
| Distinctiveness / Conflict Risk | While 'code review' and 'pull requests' provide some specificity, this could overlap with general coding, mentoring, or team collaboration skills. The phrases 'maintaining team morale' and 'mentoring developers' are broad enough to conflict with leadership or team management skills. | 2 / 3 |
| **Total** | | **9 / 12** |

Passed

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is comprehensive but severely over-long, explaining many concepts Claude already understands (soft skills, basic anti-patterns, general review philosophy). The content reads more like a human training document than a skill file for an AI assistant. While it contains useful checklists and code examples, the signal-to-noise ratio is low, and the bulk of the content should either be condensed or moved to referenced sub-files.

Suggestions

Reduce the main file to ~80 lines focusing on: the 4-phase review process, severity labels, and the PR review template. Move language-specific patterns, security checklists, and soft-skills guidance into referenced sub-files.

Remove sections that explain concepts Claude already knows: 'The Review Mindset' goals, 'Effective Feedback' principles, 'Handling Disagreements', and 'Common Pitfalls' are general knowledge that waste tokens.

Add explicit validation steps to the workflow, e.g., 'After Phase 3, verify all blocking issues have specific fix suggestions before posting the review' to create a proper feedback loop.

Reframe the skill as instructions for Claude specifically — what Claude should do when asked to review code — rather than general advice for human reviewers.
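Taken together, these suggestions point toward a lean, checkpointed workflow. A sketch of what that section of a trimmed SKILL.md might look like (the phase names come from the skill; the checkpoint wording and outputs are illustrative):

```markdown
## Review Process

1. **Context gathering**: read the PR description and linked issues.
   Output: a one-line summary of the change's intent.
2. **High-level pass**: check architecture, scope, and test coverage.
   Output: a list of structural concerns.
3. **Line-by-line pass**: flag bugs, style, and security issues,
   each with a severity label (blocking / suggestion / nit).
4. **Checkpoint**: before posting, verify every blocking issue has a
   concrete fix suggestion; if any does not, return to step 3.
5. **Summary**: post the review using the PR review template.
```

Step 4 is the kind of explicit validation loop the eval asks for: it gives the agent a concrete condition to check before the review leaves the door.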

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose at ~400+ lines. Explains concepts Claude already knows well (what code review is, what good feedback looks like, basic Python/TypeScript anti-patterns). Extensive sections on soft skills, mindset, and common pitfalls that are general knowledge for an LLM. Much of this could be condensed to 20% of its current size. | 1 / 3 |
| Actionability | Contains concrete code examples for language-specific patterns (Python mutable defaults, TypeScript error handling) and usable templates/checklists. However, much of the content is advisory rather than executable: it describes how humans should behave in reviews rather than giving Claude specific instructions on what to do when asked to review code. The checklists are actionable but the surrounding prose is descriptive. | 2 / 3 |
| Workflow Clarity | The four-phase review process (Context Gathering → High-Level → Line-by-Line → Summary) provides a clear sequence, but lacks validation checkpoints or feedback loops. There's no explicit step for verifying findings or handling cases where the review process itself needs adjustment. The phases are described but not tightly sequenced with concrete outputs at each step. | 2 / 3 |
| Progressive Disclosure | References external files at the end (references/code-review-best-practices.md, scripts/pr-analyzer.py, etc.), which is good. However, the main file is a monolithic wall of content that should have much of its detail pushed into those referenced files. The language-specific patterns, advanced review patterns, and security checklists could all be separate files referenced from a leaner overview. | 2 / 3 |
| **Total** | | **7 / 12** |

Passed
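Among the concrete examples the eval credits is the Python mutable-default-argument pattern. For readers unfamiliar with it, a minimal illustration (written for this report, not taken from the skill itself):

```python
def append_bad(item, items=[]):
    # Bug: the default list is created once, at function definition
    # time, so it is shared across every call that omits `items`.
    items.append(item)
    return items


def append_good(item, items=None):
    # Fix: use None as a sentinel and create a fresh list per call.
    if items is None:
        items = []
    items.append(item)
    return items


print(append_bad("a"))   # ['a']
print(append_bad("b"))   # ['a', 'b'], state leaked between calls
print(append_good("a"))  # ['a']
print(append_good("b"))  # ['b']
```

A reviewer (human or agent) flagging this would label it blocking and suggest the sentinel fix inline, which is exactly the "specific fix suggestion" behavior the Implementation suggestions call for.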

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| skill_md_line_count | SKILL.md is long (539 lines); consider splitting into references/ and linking | Warning |
| **Total** | | **10 / 11** |

Passed

Repository: Dicklesworthstone/pi_agent_rust (Reviewed)
