Master effective code review practices to provide constructive feedback, catch bugs early, and foster knowledge sharing while maintaining team morale. Use when reviewing pull requests, establishing review standards, or mentoring developers.
Install with Tessl CLI
```
npx tessl i github:wshobson/agents --skill code-review-excellence
```
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
```
npx tessl skill review --optimize ./path/to/skill
```

Agent success when using this skill
Discovery — 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description has good structure with explicit 'Use when' guidance and covers the domain adequately. However, it leans toward abstract outcomes ('foster knowledge sharing', 'maintaining team morale') rather than concrete technical actions, and could benefit from more natural trigger term variations that users commonly use.
Suggestions
Replace abstract outcomes with concrete actions like 'identify bugs, suggest refactoring, check for security issues, verify test coverage'
Add common trigger term variations such as 'PR', 'merge request', 'review my code', 'code feedback', 'approve changes'
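Applying both suggestions, a rewritten description might read as follows (a hypothetical sketch assembled from the suggested phrases above, not the skill's actual text):

```yaml
# Hypothetical SKILL.md frontmatter sketch; wording is illustrative only.
description: >
  Review pull requests to identify bugs, suggest refactoring, check for
  security issues, and verify test coverage. Use when asked to review a PR
  or merge request, give code feedback, approve changes, or establish
  team review standards.
```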
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (code review) and mentions some actions like 'provide constructive feedback, catch bugs early, foster knowledge sharing', but these are somewhat abstract outcomes rather than concrete actions like 'comment on pull requests' or 'identify security vulnerabilities'. | 2 / 3 |
| Completeness | Clearly answers both what ('provide constructive feedback, catch bugs early, foster knowledge sharing, maintain team morale') and when ('Use when reviewing pull requests, establishing review standards, or mentoring developers') with explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Includes some natural terms like 'pull requests', 'code review', and 'mentoring developers', but misses common variations users might say such as 'PR review', 'review my code', 'code feedback', or 'merge request'. | 2 / 3 |
| Distinctiveness / Conflict Risk | While 'code review' and 'pull requests' provide some specificity, terms like 'mentoring developers' and 'knowledge sharing' are broad enough to potentially overlap with general coding skills or team collaboration skills. | 2 / 3 |
| Total | | 9 / 12 Passed |
Implementation — 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides highly actionable and well-structured guidance for code reviews with excellent workflow clarity and concrete examples. However, it is severely bloated with content Claude already knows (basic programming patterns, what good feedback looks like, common bugs) and would benefit from aggressive trimming to ~100 lines with references to detailed materials. The skill tries to be a comprehensive code review textbook rather than a focused instruction set.
Suggestions
Reduce content by 70-80% by removing explanations of concepts Claude already knows (e.g., what code review goals are, basic Python/TypeScript patterns, general feedback principles)
Move language-specific patterns, security checklists, and templates to separate reference files and link to them from a concise overview
Keep only the 4-phase review process, severity labels, and one or two key examples - Claude can generate the rest on demand
Remove the 'Common Pitfalls' and 'Best Practices' sections which are general knowledge Claude already possesses
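Concretely, the restructuring suggested above might look like this (references/code-review-best-practices.md and scripts/pr-analyzer.py are paths the skill already references; the other file names are hypothetical):

```
code-review-excellence/
├── SKILL.md                              # trimmed to a ~100-line overview with links
├── references/
│   ├── code-review-best-practices.md     # existing reference file
│   ├── language-patterns.md              # hypothetical: Python/TypeScript patterns
│   └── security-checklist.md             # hypothetical: security review checklist
└── scripts/
    └── pr-analyzer.py                    # existing script
```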
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~500+ lines. Explains concepts Claude already knows (what code review is, basic feedback principles, common programming patterns). Extensive examples of good/bad code patterns that Claude can generate on demand. Much of this content is general knowledge padding. | 1 / 3 |
| Actionability | Provides concrete, executable guidance with specific code examples in Python and TypeScript, clear checklists, labeled severity markers, and copy-paste ready templates. The feedback examples and review templates are immediately usable. | 3 / 3 |
| Workflow Clarity | Clear 4-phase review process with time estimates, explicit sequencing (Context → High-Level → Line-by-Line → Summary), and decision checkpoints. The workflow is well-structured with validation steps (CI/CD status check, severity labeling, clear approve/request changes decision). | 3 / 3 |
| Progressive Disclosure | References external files (references/code-review-best-practices.md, scripts/pr-analyzer.py), but the main file is a monolithic wall of text. Content that could be in separate files (language-specific patterns, security checklist, templates) is inline, making the skill overwhelming. | 2 / 3 |
| Total | | 9 / 12 Passed |
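The labeled severity markers and the 4-phase workflow praised above are the parts most worth keeping in a trimmed skill. As a rough illustration of what a severity-labeled comment helper could look like (the label names below are assumptions, not taken from the skill itself):

```python
# Hypothetical sketch of severity-labeled review comments, in the spirit of
# the Actionability row above. The label set is an assumption, not the
# skill's actual vocabulary.

SEVERITIES = ("blocking", "suggestion", "nit")  # assumed label set


def format_comment(severity: str, path: str, line: int, message: str) -> str:
    """Prefix a review comment with a severity label so authors can tell
    required changes from optional polish at a glance."""
    if severity not in SEVERITIES:
        raise ValueError(f"unknown severity: {severity!r}")
    return f"[{severity}] {path}:{line}: {message}"
```

For example, `format_comment("nit", "app.py", 42, "prefer an f-string here")` yields `[nit] app.py:42: prefer an f-string here`, making it easy to filter blocking feedback from nitpicks.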
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (539 lines); consider splitting into references/ and linking | Warning |
| Total | | 10 / 11 Passed |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.