Automated code review for pull requests using multiple specialized agents
Overall score: 66
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
`npx tessl skill review --optimize ./path/to/skill`
Discovery — 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies its domain (automated PR code review) and hints at a multi-agent approach, but lacks concrete action details and critically omits any 'Use when...' guidance. Without explicit trigger conditions, Claude cannot reliably distinguish when to select this skill over other code-related skills.
Suggestions
- Add a 'Use when...' clause with trigger terms like 'review my PR', 'check this pull request', 'code review', 'review changes before merge'
- List specific concrete actions the agents perform, e.g., 'checks for security vulnerabilities, analyzes code style, identifies bugs, suggests improvements'
- Include common term variations like 'PR', 'merge request', 'diff review' to improve trigger matching
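As one way to apply these suggestions, a revised description might look like the sketch below. The frontmatter field names follow the common SKILL.md convention (`name`, `description`); the skill name and exact wording are illustrative, not the skill's actual metadata:

```yaml
---
name: pr-review  # hypothetical name for illustration
description: >
  Automated code review for pull requests using multiple specialized
  agents that check for security vulnerabilities, analyze code style,
  identify bugs, and suggest improvements. Use when the user asks to
  "review my PR", "check this pull request", request a code review,
  or wants changes, a diff, or a merge request reviewed before merge.
---
```

This packs the trigger terms ('PR', 'pull request', 'merge request', 'diff') and concrete actions into the one field an agent reads during skill selection.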
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (code review, pull requests) and mentions 'multiple specialized agents' as a method, but doesn't list concrete actions like 'check for security issues, analyze code style, suggest improvements'. | 2 / 3 |
| Completeness | Describes what it does (automated code review) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. | 1 / 3 |
| Trigger Term Quality | Includes 'code review' and 'pull requests', which are natural terms, but misses common variations like 'PR', 'review my code', 'check this PR', or 'merge request'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The 'pull requests' and 'multiple specialized agents' aspects provide some distinction, but 'code review' is broad enough to potentially conflict with general code analysis or linting skills. | 2 / 3 |
| Total | | 7 / 12 — Passed |
Implementation — 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured, highly actionable skill for automated code review with clear multi-agent orchestration and validation workflows. The main weaknesses are moderate verbosity with repeated concepts (high signal criteria appear multiple times) and a monolithic structure that could benefit from extracting reference material into separate files.
Suggestions
- Extract the false positive list and link formatting rules into separate reference files (e.g., FALSE_POSITIVES.md, LINK_FORMAT.md) to reduce main skill length
- Consolidate the 'high signal' criteria into a single location rather than repeating them across steps 4 and 5
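One possible layout after extracting the reference material, using the file names the suggestions propose (the directory structure itself is illustrative):

```text
skill/
├── SKILL.md            # main workflow: the 9-step process and agent orchestration
├── FALSE_POSITIVES.md  # extracted false-positive criteria, linked from SKILL.md
└── LINK_FORMAT.md      # extracted link-formatting rules, linked from SKILL.md
```

The main skill file then stays focused on the workflow and loads the detailed criteria only when a step needs them, which is the progressive-disclosure pattern the review is scoring.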
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably efficient but includes some redundancy (e.g., repeating 'high signal' criteria multiple times, explaining what false positives are). Some sections could be tightened without losing clarity. | 2 / 3 |
| Actionability | Provides highly specific, executable guidance with exact CLI commands (`gh pr view`, `gh pr comment`, `gh pr review`), precise agent types to use (haiku, sonnet, opus), and concrete examples of link formatting and comment structure. | 3 / 3 |
| Workflow Clarity | Clear 9-step sequential workflow with explicit validation checkpoints (step 5 validates issues from steps 3–4, step 6 filters), parallel execution points clearly marked, and conditional branching (step 7 vs. step 8). Includes a feedback loop for issue validation. | 3 / 3 |
| Progressive Disclosure | Content is well structured with numbered steps and clear sections, but it's a monolithic document that could benefit from splitting detailed criteria (false positive list, link formatting rules) into separate reference files for cleaner navigation. | 2 / 3 |
| Total | | 10 / 12 — Passed |
Validation — 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
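A minimal sketch of frontmatter that would clear both warnings, assuming the skill follows the SKILL.md convention; the specific tool names and the `metadata` contents here are placeholders, not the skill's actual values:

```yaml
---
name: pr-review  # hypothetical name for illustration
description: Automated code review for pull requests using multiple specialized agents
# Keep only recognized tool names in allowed-tools to clear the
# allowed_tools_field warning.
allowed-tools: Bash, Read, Grep
# Move any unrecognized top-level keys under metadata to clear the
# frontmatter_unknown_keys warning.
metadata:
  version: "1.0"
---
```

Pruning `allowed-tools` to known names and nesting custom keys under `metadata` addresses the two warnings without changing the skill's behavior.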
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.