
code-review

Orchestrates plan-alignment and quality reviews using persistent or ad-hoc reviewer teammates. Use when verifying implementation matches requirements, at batch review checkpoints, before merging to main, after completing a major feature, before refactoring, after fixing a complex bug, or when a fresh perspective is needed. Spawns specialist reviewers (spec, quality, security, architecture) in parallel and consolidates findings.
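The description's core pattern (spawn specialist reviewers in parallel, then consolidate findings) can be sketched as follows. This is an illustrative stand-in, not the skill's actual implementation: the `review` stub and its return shape are assumptions.

```python
# Hypothetical sketch of the orchestration the description outlines:
# spawn specialist reviewers in parallel, then consolidate their findings.
from concurrent.futures import ThreadPoolExecutor

SPECIALISTS = ["spec", "quality", "security", "architecture"]

def review(specialist: str, diff: str) -> dict:
    # Stand-in for dispatching a reviewer teammate and collecting its findings.
    return {"reviewer": specialist, "findings": [f"[{specialist}] reviewed {diff}"]}

def run_reviews(diff: str) -> list[str]:
    # Spawn all specialist reviewers in parallel...
    with ThreadPoolExecutor(max_workers=len(SPECIALISTS)) as pool:
        results = list(pool.map(lambda s: review(s, diff), SPECIALISTS))
    # ...then consolidate per-reviewer findings into one flat report.
    return [f for r in results for f in r["findings"]]
```

`pool.map` preserves input order, so the consolidated report lists findings in the same specialist order regardless of which reviewer finishes first.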

Overall score: 80

Quality: 74%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run.

Security (by Snyk): Passed
No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/code-review/SKILL.md

Quality

Discovery: 92%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong description that clearly articulates what the skill does and when to use it. It excels in completeness with an explicit and detailed 'Use when...' clause covering multiple scenarios, and provides good specificity by naming the types of specialist reviewers. The main weakness is moderate overlap risk with other review-oriented skills, though the multi-agent orchestration framing helps differentiate it.

Dimension scores

Specificity: 3 / 3
Lists multiple specific concrete actions: orchestrating plan-alignment reviews, quality reviews, spawning specialist reviewers (spec, quality, security, architecture) in parallel, and consolidating findings. These are concrete, well-defined capabilities.

Completeness: 3 / 3
Clearly answers both 'what' (orchestrates plan-alignment and quality reviews using reviewer teammates, spawns specialist reviewers in parallel, consolidates findings) and 'when' (explicit 'Use when...' clause listing seven specific trigger scenarios like verifying implementation, batch review checkpoints, before merging, etc.).

Trigger Term Quality: 3 / 3
Includes strong natural trigger terms users would say: 'review', 'merging to main', 'refactoring', 'bug', 'requirements', 'quality', 'security', 'architecture', 'fresh perspective'. These cover a good range of natural language a user might use when requesting code review.

Distinctiveness / Conflict Risk: 2 / 3
While the multi-agent orchestration aspect and specialist reviewer spawning are distinctive, terms like 'quality reviews', 'security', and 'before merging' could overlap with simpler code review skills, linting skills, or security scanning skills. The 'persistent or ad-hoc reviewer teammates' framing helps but the broad trigger scenarios increase conflict risk.

Total: 11 / 12

Passed

Implementation: 57%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides a solid organizational framework for code review orchestration with good progressive disclosure and clear mode separation. Its main weaknesses are insufficient actionability — lacking concrete tool invocations, message templates, or TaskList schemas — and missing validation/re-review checkpoints in the workflow. The content could also be tightened by removing sections that state obvious best practices.

Suggestions

Add concrete examples of SendMessage and TaskList tool invocations with actual payloads/schemas, especially for the 'Within an Existing Team' mode where these are core actions.

Add an explicit re-review validation step after fixes (e.g., 'After fixing Critical/Important issues, re-request review from the same reviewer to confirm resolution') to close the feedback loop.

Trim or remove the 'Red Flags' and 'Persistent Reviewer Benefits' sections — these describe general best practices Claude can infer, and the token budget would be better spent on concrete examples.
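To make the first two suggestions concrete: a payload example and a re-review check might look like the sketch below. The `SendMessage`/`TaskList` field names here are assumptions for illustration only, not the real tool schemas; the severity labels come from the skill's Critical/Important/Minor triage.

```python
# Illustrative only: hypothetical payload shapes for the suggested examples.
# The actual SendMessage/TaskList schemas are not documented in this review.
send_message_payload = {
    "tool": "SendMessage",
    "recipient": "code-reviewer",  # hypothetical teammate id
    "content": "Review the diff against plan.md; flag Critical issues first.",
}

task_list_entry = {
    "tool": "TaskList",
    "task": "Re-review after fixes",          # closes the feedback loop
    "blocked_on": ["resolve Critical findings"],
}

def needs_rereview(findings: list[dict]) -> bool:
    # Re-request review from the same reviewer whenever Critical or
    # Important issues were raised, confirming resolution after fixes.
    return any(f["severity"] in ("Critical", "Important") for f in findings)
```

A check like `needs_rereview` is the kind of explicit validation step the second suggestion asks for: the workflow only proceeds once a follow-up review confirms the blocking findings are resolved.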

Dimension scores

Conciseness: 2 / 3
Generally efficient but includes some sections that could be tightened — e.g., the 'Persistent Reviewer Benefits' and 'Red Flags' sections explain concepts Claude can infer, and the 'When to Request Review' section is somewhat obvious. However, most content is reasonably lean.

Actionability: 2 / 3
Provides concrete git commands for diffs and a clear workflow structure, but key steps like 'Create review team via kit:team-orchestration' and 'Spawn reviewer(s) using code-reviewer agent type' lack executable examples — no actual tool invocations, TaskList schemas, or SendMessage payloads are shown. The standalone mode is particularly vague.

Workflow Clarity: 2 / 3
Both operating modes have numbered steps with a clear sequence, and the feedback triage (Critical/Important/Minor) is useful. However, there are no explicit validation checkpoints — no step verifying that reviews are complete before proceeding, no feedback loop for re-review after fixes, and the standalone mode's workflow is underspecified.

Progressive Disclosure: 3 / 3
Content is well-structured with clear sections, references to external files are one level deep and clearly signaled (agents/code-reviewer.md, code-review/code-reviewer.md, kit:team-orchestration), and the Integration section serves as a clean navigation hub. The scope section appropriately defers general code quality concerns to other plugins.

Total: 9 / 12

Passed
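The Critical/Important/Minor triage praised under Workflow Clarity could be paired with the validation checkpoint the review says is missing. A minimal sketch, assuming findings are dicts with `severity` and `resolved` fields (the gate logic is an assumption, not part of the skill):

```python
# Sketch of a merge gate: every Critical or Important finding must be
# marked resolved before the workflow proceeds; Minor issues don't block.
BLOCKING = {"Critical", "Important"}

def merge_allowed(findings: list[dict]) -> bool:
    # all() over only the blocking-severity findings; an unresolved (or
    # unmarked) Critical/Important finding makes the gate return False.
    return all(f.get("resolved") for f in findings if f["severity"] in BLOCKING)
```

Inserting a checkpoint like this between "fix issues" and "merge" is one way to close the feedback loop the review describes.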

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: shousper/claude-kit (Reviewed)
