Comprehensive GitHub code review with AI-powered swarm coordination
Install with the Tessl CLI:

```shell
npx tessl i github:ruvnet/agentic-flow --skill github-code-review
```

Does it follow best practices? Overall score: 38%
If you maintain this skill, you can automatically optimize it with the Tessl CLI to improve its score:

```shell
npx tessl skill review --optimize ./path/to/skill
```
Discovery: 22%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description relies heavily on buzzwords ('comprehensive', 'AI-powered swarm coordination') without explaining concrete capabilities or when to use the skill. It mentions GitHub code review, which provides some domain context, but it lacks the specificity and explicit trigger guidance needed for Claude to reliably select this skill from a large skill library.
Suggestions

- Replace vague terms with specific actions (e.g., 'Reviews pull requests, analyzes code changes, identifies bugs, suggests improvements, checks for security issues').
- Add a 'Use when...' clause with natural trigger terms like 'PR review', 'pull request', 'review my code', 'check this PR', 'GitHub review'.
- Remove or clarify 'AI-powered swarm coordination': either explain what this means in practical terms or remove the jargon entirely.
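Applied together, these suggestions might yield frontmatter along the following lines. This is an illustrative sketch, not the maintainer's actual frontmatter; the wording is ours:

```yaml
---
name: github-code-review
description: >
  Reviews GitHub pull requests: analyzes code changes, identifies bugs,
  flags security issues, and suggests improvements. Use when the user
  asks to "review my code", "check this PR", mentions "PR review" or
  "pull request review", or wants GitHub review feedback.
---
```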
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague, abstract language like 'comprehensive' and 'AI-powered swarm coordination' without listing any concrete actions. It doesn't specify what the skill actually does (e.g., review PRs, check code style, suggest fixes). | 1 / 3 |
| Completeness | The description only vaguely addresses 'what' (code review) and completely lacks any 'when' guidance. There is no 'Use when...' clause or explicit trigger guidance. | 1 / 3 |
| Trigger Term Quality | 'GitHub' and 'code review' are natural terms users might say, but 'AI-powered swarm coordination' is technical jargon users wouldn't naturally use. Missing common variations like 'PR', 'pull request', 'review changes'. | 2 / 3 |
| Distinctiveness / Conflict Risk | 'GitHub code review' provides some specificity to distinguish from generic code review skills, but 'comprehensive' is vague and could overlap with other GitHub or code review skills. | 2 / 3 |
| Total | | 6 / 12 (Passed) |
Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill suffers from severe verbosity, repeating similar command patterns dozens of times and explaining concepts Claude already understands. While it provides concrete bash commands, they depend on an undocumented 'ruv-swarm' tool, making their actionability questionable. The document would benefit from being split into a concise overview with references to detailed guides, and from the removal of redundant examples.
Suggestions

- Reduce to under 200 lines by removing redundant examples; one example per agent type is sufficient, not three variations of the same `review-init` command.
- Split into SKILL.md (quick start + overview) with separate files: AGENTS.md, WORKFLOWS.md, CONFIGURATION.md, TROUBLESHOOTING.md.
- Add explicit validation steps to workflows, e.g. verify a review posted with `gh pr view 123 --json reviews` after posting it.
- Remove explanatory text about what security/performance/architecture reviews are; Claude already knows these concepts.
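The validation step suggested above can be sketched as a small shell helper. This is a minimal illustration, not the skill's actual workflow; the PR number 123 and the function name are placeholders:

```shell
#!/usr/bin/env bash

# Returns 0 if the PR has at least one review, 1 otherwise.
# Uses gh's built-in --jq filter to count entries in the reviews field.
verify_review_posted() {
  local pr="$1"
  local count
  count=$(gh pr view "$pr" --json reviews --jq '.reviews | length')
  [ "$count" -gt 0 ]
}

# Usage in a review workflow (123 is a placeholder PR number):
#   gh pr review 123 --comment --body "Automated review complete."
#   verify_review_posted 123 || { echo "review not posted" >&2; exit 1; }
```

Checking the result immediately after posting gives the workflow an explicit failure point instead of silently assuming the review landed.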
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at 800+ lines with massive redundancy. The same concepts (review-init, agent spawning, gh CLI usage) are repeated dozens of times. Contains extensive explanatory text, emoji decorations, and collapsible sections that pad rather than inform. Claude doesn't need explanations of what security reviews or performance analysis are. | 1 / 3 |
| Actionability | Provides concrete bash commands and code examples that appear executable, but relies heavily on a hypothetical 'npx ruv-swarm' tool with undocumented behavior. Many commands show flags and options without explaining what they actually do or return. The webhook handler and custom agent examples are more complete but still lack context on integration. | 2 / 3 |
| Workflow Clarity | Workflows are present but scattered across the document with no clear validation checkpoints. The 'Complete Review Workflow' in Quick Start shows steps but lacks error handling and verification. Critical operations like posting reviews or merging PRs have no explicit validation steps or rollback procedures. | 2 / 3 |
| Progressive Disclosure | Monolithic wall of content with no external file references. Uses collapsible sections, but they contain inline content rather than linking to separate files. The Table of Contents links to sections within the same massive document. Content that should be in separate reference files (agent configurations, workflow templates, troubleshooting) is all inline. | 1 / 3 |
| Total | | 6 / 12 (Passed) |
Validation: 81% (9 / 11 checks passed)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (1141 lines); consider splitting into references/ and linking | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 9 / 11 Passed | |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.