
review

Framework for code review that captures context future maintainers need—concerns raised, alternatives rejected, risks accepted. Use for PRs, local changes, or architecture review when the decision matters more than the diff. Produces structured feedback with must-address issues, suggestions, and observations "for the record."
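The structured feedback described above might look like the following sketch. The headings follow the three categories named in the description, but the items and the `parse_config` name are illustrative, not taken from the skill's actual template:

```markdown
## Must Address
- `parse_config` silently swallows exceptions, so failures will surface far
  from their cause. Re-raise with context or log before returning a default.

## Suggestions
- The retry loop is duplicated in three handlers; consider extracting a helper.

## For the Record
- Moving to async I/O was discussed and rejected: the added complexity is not
  justified at current load. Revisit if request volume grows substantially.
```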

77

Quality: 71% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Advisory (Suggest reviewing before use)

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/review/SKILL.md

Quality

Discovery

85%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong description that clearly articulates a distinctive niche—decision-context-preserving code review rather than generic code review. It effectively communicates both what the skill does and when to use it, with specific output categories. The main weakness is moderate trigger term coverage, missing some common user phrasings like 'pull request' or 'review my code'.

Suggestions

Add more natural trigger term variations such as 'pull request', 'review my code', 'merge request', or 'code changes' to improve discoverability when users use common phrasings.
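Folding those phrasings into the description might look like this sketch of revised SKILL.md frontmatter. The field names are assumed from common skill conventions, and the added trigger phrases sit in the second sentence:

```markdown
---
name: review
description: >
  Framework for code review that captures context future maintainers
  need: concerns raised, alternatives rejected, risks accepted. Use when
  asked to review a pull request, merge request, or local code changes
  ("review my code", "review this PR"), or for architecture review when
  the decision matters more than the diff. Produces structured feedback
  with must-address issues, suggestions, and observations for the record.
---
```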

Specificity: 3 / 3
Lists multiple specific concrete actions: captures concerns raised, alternatives rejected, risks accepted; produces structured feedback with must-address issues, suggestions, and observations. These are concrete, actionable outputs.

Completeness: 3 / 3
Clearly answers both what (captures context for future maintainers, produces structured feedback with categorized issues) and when ('Use for PRs, local changes, or architecture review when the decision matters more than the diff'). The 'Use for...' clause explicitly defines trigger scenarios.

Trigger Term Quality: 2 / 3
Includes some natural keywords like 'code review', 'PRs', 'architecture review', and 'diff', but misses common variations users might say such as 'pull request', 'review my code', 'code feedback', or 'merge request'. The phrase 'decision matters more than the diff' is evocative but not a natural trigger term.

Distinctiveness / Conflict Risk: 3 / 3
The focus on decision-context preservation and structured feedback categories (must-address, suggestions, observations 'for the record') creates a clear niche distinct from generic code review or linting skills. The emphasis on capturing rationale rather than just finding bugs is highly distinctive.

Total: 11 / 12 (Passed)

Implementation

57%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid instructional skill that provides a clear review framework with a useful output template and good cross-references. Its main weaknesses are moderate verbosity (explaining concepts Claude already understands like what 'correctness' and 'maintainability' mean in code review) and some sections that describe rather than instruct. The workflow would benefit from explicit validation checkpoints to ensure review completeness.

Suggestions

Trim the evaluation dimensions (Correctness, Design, Maintainability, Risk) to just the non-obvious checklist items — Claude already knows what these concepts mean; focus on project-specific or easily-missed aspects.

Add a validation checkpoint before finalizing the review, e.g., 'Before posting: verify every Must Address item includes a concrete suggestion or alternative, and the Concerns for the Record section is non-empty.'

Remove or significantly condense the 'Output Quality' section — it restates principles already embedded in the template and workflow.
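The checkpoint suggestion could be made concrete as a fifth workflow step. The wording below is a sketch, and the section names are assumed to match the skill's template:

```markdown
5. Validate before posting:
   - Every Must Address item includes a concrete suggestion or alternative.
   - The Concerns for the Record section is non-empty, or explicitly says
     "none raised".
   - Each review dimension (Correctness, Design, Maintainability, Risk) was
     considered; if context was insufficient for any of them, loop back to
     step 1 and gather more before finalizing.
```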

Conciseness: 2 / 3
The skill is reasonably well-structured but includes some unnecessary explanatory content that Claude already knows (e.g., explaining what correctness, design, maintainability mean, the Bacchelli & Bird research citation, and the 'Output Quality' section which restates obvious review principles). The anti-patterns section, while useful, borders on teaching Claude things it already understands.

Actionability: 2 / 3
The review template is concrete and copy-paste ready, and the GitHub CLI commands are executable. However, much of the guidance remains at the level of checklists and principles rather than specific executable steps. The 'Local Code Review' and 'Architecture Review' sections are vague lists rather than concrete procedures.

Workflow Clarity: 2 / 3
The 4-step workflow is clearly sequenced and the review template provides good structure. However, there are no validation checkpoints or feedback loops — no step says 'verify your review covers all dimensions before posting' or 'if context is insufficient, loop back to step 1.' For a process that produces structured output, there's no verification that the output is complete.

Progressive Disclosure: 3 / 3
The content is well-organized with clear sections, appropriate use of headers, and well-signaled cross-references to related skills (/naming, /adr, /prose, FRAMEWORKS.md, RECIPE.md). Content is appropriately scoped for a single SKILL.md file without being monolithic.

Total: 9 / 12 (Passed)

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: tslateman/duet (Reviewed)
