
# receiving-code-review

Use when receiving code review feedback, before implementing suggestions, especially if feedback seems unclear or technically questionable - requires technical rigor and verification, not performative agreement or blind implementation

73 · 1.34x

- Quality: 58% (Does it follow best practices?)
- Impact: 97% · 1.34x (average score across 3 eval scenarios)
- Security by Snyk: Passed (no known issues)


## Quality

### Discovery: 40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description has a clear 'when' clause which is its strongest element, but it fundamentally fails to describe what the skill actually does — it only describes a philosophy (technical rigor over blind agreement). The lack of concrete actions makes it difficult for Claude to know what this skill will help it accomplish, and the description reads more like a principle than a capability.

Suggestions:

- Add concrete actions describing what the skill does, e.g., 'Critically evaluates code review feedback for technical accuracy, verifies suggestions against codebase context, and formulates reasoned responses before implementing changes.'
- Expand trigger terms to include common variations like 'PR comments', 'pull request feedback', 'reviewer suggestions', 'review comments', 'suggested changes'.
- Reframe the description to lead with capabilities rather than anti-patterns — state what it does positively before noting what it avoids.
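Applying the first two suggestions, the skill's frontmatter might read as follows. This is a hypothetical sketch, not the maintainer's actual text; the `name` and `description` fields assume the common SKILL.md frontmatter layout:

```yaml
# Hypothetical SKILL.md frontmatter incorporating the suggested rewrite.
# Field names follow the usual skill-file convention; the wording is
# illustrative only.
name: receiving-code-review
description: >-
  Critically evaluates code review feedback for technical accuracy, verifies
  suggestions against codebase context, and formulates reasoned responses
  before implementing changes. Use when receiving code review feedback, PR
  comments, or reviewer suggestions, especially if feedback seems unclear or
  technically questionable.
```

Leading with concrete verbs ('evaluates', 'verifies', 'formulates') and folding in trigger terms like 'PR comments' addresses both the Specificity and Trigger Term Quality findings below.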

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description does not list any concrete actions. It describes a mindset ('technical rigor and verification') and anti-patterns ('not performative agreement or blind implementation') but never states what the skill actually does — no verbs like 'analyze', 'verify', 'compare', or 'respond'. | 1 / 3 |
| Completeness | The 'when' is explicitly addressed ('Use when receiving code review feedback, before implementing suggestions, especially if feedback seems unclear or technically questionable'). However, the 'what' is essentially absent — it describes what NOT to do but never states what the skill actually does or produces. | 2 / 3 |
| Trigger Term Quality | It includes some relevant trigger terms like 'code review feedback', 'implementing suggestions', and 'technically questionable', which a user might naturally mention. However, it misses common variations like 'PR comments', 'review comments', 'pull request', 'suggested changes', or 'reviewer feedback'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The focus on code review feedback response gives it a somewhat specific niche, but the lack of concrete actions makes it potentially overlap with general code review skills, code analysis skills, or communication/response skills. | 2 / 3 |
| **Total** | | **7 / 12** |

Passed

### Implementation: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a strong behavioral skill with excellent actionability and workflow clarity. It provides specific, concrete guidance for handling code review feedback with clear decision trees, good/bad examples, and explicit validation steps. The main weaknesses are moderate repetition across sections (the 'no performative agreement' theme is restated 4-5 times) and the content being somewhat long for a single file without progressive disclosure to supplementary materials.

Suggestions:

- Consolidate the repeated 'no performative agreement' guidance into a single authoritative section and reference it briefly elsewhere, reducing redundancy across Forbidden Responses, Acknowledging Correct Feedback, Common Mistakes, and Real Examples.
- The 'Why no thanks' and 'If you catch yourself about to write Thanks' subsections over-explain a simple rule — condense to a single line like 'No gratitude expressions. State the fix instead.'

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is mostly efficient and covers genuinely useful behavioral guidance Claude wouldn't inherently know (specific response patterns, forbidden phrases, pushback protocols). However, it's somewhat repetitive — the 'no performative agreement' point is hammered across multiple sections (Forbidden Responses, Acknowledging Correct Feedback, Common Mistakes, Real Examples), and the 'no thanks' section over-explains. Some trimming would improve token efficiency. | 2 / 3 |
| Actionability | The skill provides highly concrete, specific guidance: exact phrases to use and avoid, decision trees with clear conditionals, specific examples of good vs bad responses, a prioritized implementation order, and even a specific GitHub API endpoint for thread replies. The pseudocode-style decision flows are appropriate for behavioral (non-code) skills and are fully actionable. | 3 / 3 |
| Workflow Clarity | The multi-step workflow is clearly sequenced (READ → UNDERSTAND → VERIFY → EVALUATE → RESPOND → IMPLEMENT) with explicit validation checkpoints ('clarify anything unclear FIRST', 'test each fix individually', 'verify no regressions'). The handling of unclear feedback includes a clear feedback loop (stop, ask, then proceed). The implementation order section provides explicit prioritization with testing at each step. | 3 / 3 |
| Progressive Disclosure | The content is well-structured with clear headers and logical sections, but it's a fairly long single file (~180 lines of content) with no references to external files for deeper topics. The source-specific handling, YAGNI checks, and real examples sections could potentially be split out. However, for a behavioral skill of this nature, keeping everything in one file is somewhat defensible. | 2 / 3 |
| **Total** | | **10 / 12** |

Passed
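The sequenced workflow praised above (READ → UNDERSTAND → VERIFY → EVALUATE → RESPOND → IMPLEMENT) can be sketched as a decision flow. This is an illustrative reconstruction, not the skill's actual text; the function name, parameters, and action strings are all hypothetical:

```python
# Illustrative sketch of the review-feedback workflow the skill describes.
# Stage comments mirror READ -> UNDERSTAND -> VERIFY -> EVALUATE -> RESPOND
# -> IMPLEMENT; every identifier here is hypothetical, not from the skill.

def handle_review_comment(comment: str, is_clear: bool, is_correct: bool) -> list[str]:
    """Return the ordered actions to take for one review comment."""
    actions = ["read comment in full"]                       # READ
    if not is_clear:
        # Unclear feedback triggers the clarify-first loop:
        # stop and ask before doing anything else.
        actions.append("ask reviewer to clarify")            # UNDERSTAND
        return actions
    actions.append("restate the suggestion in own words")    # UNDERSTAND
    actions.append("check the claim against the codebase")   # VERIFY
    if is_correct:                                           # EVALUATE
        actions.append("state the fix, without thanks")      # RESPOND
        actions.append("implement and test individually")    # IMPLEMENT
    else:
        actions.append("push back with technical reasoning") # RESPOND
    return actions
```

For a clear, technically correct comment this yields read → restate → verify → state fix → implement; unclear feedback short-circuits to a clarifying question, matching the 'stop, ask, then proceed' loop the reviewers highlight.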

### Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure: no warnings or errors.

Repository: projectbluefin/dakota (reviewed)
