Use when receiving code review feedback, before implementing suggestions, especially if feedback seems unclear or technically questionable - requires technical rigor and verification, not performative agreement or blind implementation
Overall score: 73
Quality: 58% (does it follow best practices?)
Impact: 97% (1.34x average score across 3 eval scenarios)
Status: Passed, no known issues
Quality
Discovery — 40%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description effectively communicates when to use the skill (receiving code review feedback that may be unclear or questionable) but completely fails to describe what the skill actually does in concrete terms. It reads more like a philosophical stance or guiding principle than a skill description, using abstract concepts like 'technical rigor' and 'verification' without specifying actionable capabilities.
Suggestions
Add concrete actions describing what the skill does, e.g., 'Critically evaluates code review feedback for technical accuracy, verifies suggestions against codebase context, and formulates reasoned responses to reviewer comments.'
Expand trigger terms to include common variations like 'PR comments', 'pull request feedback', 'review comments', 'reviewer suggestions', 'code review response'.
Rewrite in third person with explicit capability verbs, e.g., 'Analyzes code review feedback for technical validity, cross-references suggestions with existing code patterns, and drafts informed responses. Use when...'
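Folding these suggestions together, a revised description might read something like the sketch below. The wording is illustrative only, and the exact frontmatter shape depends on the SKILL.md spec in use:

```yaml
description: >
  Analyzes code review feedback for technical validity, verifies suggestions
  against the existing codebase, and drafts reasoned responses to reviewer
  comments. Use when receiving PR comments, pull request feedback, or review
  comments, before implementing suggestions, especially if feedback seems
  unclear or technically questionable.
```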
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description does not list any concrete actions or capabilities. It describes a mindset ('technical rigor and verification') and anti-patterns ('not performative agreement or blind implementation') but never states what the skill actually does — no verbs like 'analyze', 'verify', 'compare', or 'respond'. | 1 / 3 |
| Completeness | The 'when' is reasonably well-addressed with 'Use when receiving code review feedback, before implementing suggestions, especially if feedback seems unclear or technically questionable.' However, the 'what' is essentially absent — it describes a philosophy (technical rigor, not blind agreement) rather than concrete actions the skill performs. | 2 / 3 |
| Trigger Term Quality | It includes some relevant trigger terms like 'code review feedback' and 'implementing suggestions' that users might naturally mention. However, it misses common variations like 'PR comments', 'review comments', 'pull request feedback', 'code suggestions', or 'reviewer'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The focus on code review feedback response gives it a somewhat specific niche, but the lack of concrete actions means it could overlap with general code review skills, code analysis skills, or communication skills. The philosophical framing ('technical rigor', 'not performative agreement') doesn't create clear boundaries. | 2 / 3 |
| Total | | 7 / 12 — Passed |
Implementation — 77%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a strong behavioral skill that provides genuinely useful, non-obvious guidance for handling code review feedback with technical rigor. Its main strengths are highly actionable response patterns with concrete good/bad examples and clear workflow sequencing. Its primary weakness is repetition — the 'no performative agreement' and 'clarify before implementing' themes are restated across multiple sections, inflating token cost without adding new information.
Suggestions
Consolidate the repeated 'no performative agreement' guidance into a single authoritative section and reference it briefly elsewhere, rather than restating examples in 3-4 separate sections.
Consider extracting the 'Real Examples' and 'Common Mistakes' sections into a separate EXAMPLES.md file, keeping only 1-2 key examples inline to reduce the monolithic length.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is mostly efficient and covers genuinely useful behavioral guidance Claude wouldn't inherently know (specific forbidden phrases, pushback protocols, source-specific handling). However, there's notable repetition — the 'no performative agreement' point is restated 5+ times across sections, the 'unclear feedback' example appears twice, and the 'Common Mistakes' table largely duplicates content already covered in detail above. | 2 / 3 |
| Actionability | Highly actionable with concrete examples of good vs bad responses, specific pseudocode workflows, exact phrases to use and avoid, a prioritized implementation order, and even a specific GitHub API command for thread replies. The skill provides copy-paste-ready response patterns rather than abstract advice. | 3 / 3 |
| Workflow Clarity | The multi-step response pattern is clearly sequenced (READ → UNDERSTAND → VERIFY → EVALUATE → RESPOND → IMPLEMENT), with explicit validation checkpoints ('clarify anything unclear FIRST'), feedback loops ('if you pushed back and were wrong'), and a clear implementation order prioritizing blocking issues. The 'stop and clarify before partial implementation' rule is an excellent validation checkpoint. | 3 / 3 |
| Progressive Disclosure | The content is well-structured with clear headers and logical section progression, but it's a fairly long monolithic document (~180 lines) with no references to external files. Some sections like the detailed examples and common mistakes table could be split out. However, for a behavioral/process skill (vs a technical skill), inline content is more justifiable. | 2 / 3 |
| Total | | 10 / 12 — Passed |
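The READ → UNDERSTAND → VERIFY → EVALUATE → RESPOND → IMPLEMENT sequence noted in the table above can be sketched as a small decision function. This is a minimal illustration of the control flow only; the class and function names are hypothetical, not taken from the skill itself:

```python
from dataclasses import dataclass


@dataclass
class ReviewComment:
    """A single piece of reviewer feedback (hypothetical model)."""
    body: str
    is_clear: bool              # UNDERSTAND: did we grasp what was asked?
    is_technically_sound: bool  # VERIFY: does it hold up against the codebase?


def respond(comment: ReviewComment) -> str:
    """Apply the read-understand-verify-evaluate-respond sequence."""
    # Clarify anything unclear FIRST, before any partial implementation.
    if not comment.is_clear:
        return "Clarifying first: what did you mean by '" + comment.body + "'?"
    # Push back with evidence rather than agreeing performatively.
    if not comment.is_technically_sound:
        return "Pushing back with evidence: this conflicts with existing behavior."
    # Only then acknowledge concretely and implement.
    return "Verified against the codebase; implementing now."
```

The point of the sketch is the ordering: clarification and verification gate the response, so performative agreement and blind implementation never become reachable paths.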
Validation — 100%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
b557648