
github-code-review

Comprehensive GitHub code review with AI-powered swarm coordination

Quality: 28% (Does it follow best practices?)

Impact: 85%

Average score across 3 eval scenarios: 3.69x

Security (by Snyk): Critical. Do not install without reviewing.

Optimize this skill with Tessl:

npx tessl skill review --optimize ./.claude/skills/github-code-review/SKILL.md

Quality

Discovery: 22%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is too vague and relies on buzzwords ('comprehensive', 'AI-powered swarm coordination') rather than concrete capabilities. It fails to explain what specific actions the skill performs and provides no guidance on when Claude should select it. The GitHub and code review terms provide minimal discoverability but are insufficient for reliable skill selection.

Suggestions:

- Replace vague terms with specific actions (e.g., 'Reviews pull requests, identifies bugs, suggests improvements, checks code style')
- Add a 'Use when...' clause with trigger terms like 'review my PR', 'check this pull request', 'GitHub review', 'code feedback'
- Remove or explain 'swarm coordination': either describe what it means concretely or remove the jargon entirely
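Applying these suggestions, a sharpened frontmatter description might look like the sketch below. The wording, trigger phrases, and key set are illustrative assumptions, not the skill's published metadata:

```shell
# Illustrative rewrite of the skill's frontmatter description; the exact
# wording and trigger phrases are assumptions, not the published metadata.
cat > SKILL.md <<'EOF'
---
name: github-code-review
description: >
  Reviews GitHub pull requests: finds bugs, flags security issues, and
  suggests style improvements. Use when the user says "review my PR",
  "check this pull request", or asks for feedback on GitHub code changes.
---
EOF
grep -c 'Use when' SKILL.md   # prints 1
```

A description like this names concrete actions and carries the trigger terms a user would actually type, which is what the Specificity and Trigger Term Quality dimensions score.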

Dimension scores:

Specificity: 1 / 3. The description uses vague language like 'comprehensive' and 'AI-powered swarm coordination' without listing any concrete actions. It doesn't specify what the skill actually does (e.g., analyze PRs, comment on code, check for bugs).

Completeness: 1 / 3. The description only vaguely addresses 'what' (code review) and completely lacks any 'when' guidance. There is no 'Use when...' clause or explicit trigger guidance.

Trigger Term Quality: 2 / 3. 'GitHub' and 'code review' are natural terms users might say, but the description lacks common variations like 'PR', 'pull request', 'review my code', or 'check my changes'.

Distinctiveness / Conflict Risk: 2 / 3. 'GitHub code review' provides some specificity that distinguishes it from generic code skills, but 'comprehensive' is vague and 'AI-powered swarm coordination' is jargon that doesn't help differentiate when to use this skill versus other code review tools.

Total: 6 / 12 (Passed)

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is extremely verbose and repetitive, violating token efficiency principles. While it provides concrete command examples, many appear to be for a hypothetical tool rather than executable instructions. The document would benefit from being reduced to ~100 lines focusing on the core workflow with references to detailed documentation.

Suggestions:

- Reduce content by 80%+: remove redundant examples, decorative elements, and explanations of concepts Claude already knows (what security reviews check, what performance metrics mean)
- Clarify whether 'npx ruv-swarm' is a real tool; if so, add installation/setup instructions, and if hypothetical, reframe it as a pattern rather than executable commands
- Add explicit validation steps to workflows (e.g., 'Verify review posted: gh pr view 123 --json comments | jq .comments[-1]')
- Move detailed configurations (custom agents, webhook handlers, CI/CD examples) to separate referenced files rather than embedding them inline
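The validation-step suggestion can be made concrete with a checkpoint pattern: run a step, then verify its observable effect before continuing. The `gh` invocations in the comments and the PR number 123 are hypothetical; the executable part uses a local stand-in so the pattern itself is runnable:

```shell
# Checkpoint pattern: each workflow step is followed by a check that its
# effect actually happened. In the real skill the step might be:
#   gh pr review 123 --comment --body "Automated review"
# and the check:
#   gh pr view 123 --json comments --jq '.comments[-1].body'
# (PR 123 is hypothetical). Here a local file stands in for the posted review.
set -eu
printf '{"posted": true}\n' > review-result.json   # stand-in for the step
# Checkpoint: fail loudly if the step's output is missing or wrong.
grep -q '"posted": true' review-result.json \
  || { echo "review step failed; aborting" >&2; exit 1; }
echo "checkpoint passed"   # prints: checkpoint passed
```

The point is that every step's success is observable before the next step runs, which is exactly what the Workflow Clarity dimension found missing.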

Dimension scores:

Conciseness: 1 / 3. Extremely verbose at 800+ lines with massive redundancy. The same concepts (review initialization, agent spawning) are repeated dozens of times. It contains extensive explanations Claude doesn't need (what security checks are, what performance metrics mean) and decorative elements (emojis, badges) that waste tokens.

Actionability: 2 / 3. Provides concrete bash commands and code examples, but relies heavily on a hypothetical 'npx ruv-swarm' tool without explaining its actual availability or installation. Many commands appear to be pseudocode for a non-existent CLI rather than executable instructions.

Workflow Clarity: 2 / 3. Multi-step workflows are present but lack explicit validation checkpoints. The 'Complete Review Workflow' section shows steps but doesn't verify that each step succeeded before proceeding, and there is no error recovery guidance for when commands fail.

Progressive Disclosure: 2 / 3. Uses collapsible sections, which is good, but the main document is monolithic with everything inline. References to 'Related Skills' at the bottom are vague. Content that should live in separate files (custom agent implementation, webhook handlers) is embedded in the main skill.

Total: 7 / 12 (Passed)

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 9 / 11 checks passed.

skill_md_line_count (Warning): SKILL.md is long (1141 lines); consider splitting into references/ and linking.

frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing them or moving them to metadata.

Total: 9 / 11 (Passed)
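Both validation warnings point at the same fix: keep SKILL.md short and push detail into referenced files. A minimal sketch of that layout, with all file names and frontmatter keys being illustrative assumptions:

```shell
# Sketch: move detailed material out of SKILL.md into references/ and link
# to it, leaving a short core workflow behind. File names are illustrative,
# and the recognized frontmatter keys are assumed to be name and description.
mkdir -p references
cat > references/custom-agents.md <<'EOF'
# Custom agent configuration (detail moved out of SKILL.md)
EOF
cat > SKILL.md <<'EOF'
---
name: github-code-review
description: Reviews GitHub pull requests and posts inline feedback.
---
Core workflow lives here; see references/custom-agents.md for agent setup.
EOF
wc -l < SKILL.md   # a short core file instead of 1141 lines
```

This addresses the line-count warning directly, and trimming the frontmatter to known keys clears the unknown-key warning at the same time.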

Repository: ruvnet/ruv-FANN (Reviewed)

