Agent skill for code-review-swarm - invoke with $agent-code-review-swarm
Install with Tessl CLI
npx tessl i github:ruvnet/claude-flow --skill agent-code-review-swarm
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery: 0%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is critically deficient across all dimensions. It functions more as an invocation instruction than a skill description, providing no information about what the skill does, what capabilities it offers, or when Claude should select it. Without concrete actions or trigger terms, Claude cannot effectively choose this skill from a pool of alternatives.
Suggestions
Add specific capabilities: describe what the code review swarm actually does (e.g., 'Performs multi-perspective code review analyzing security, performance, and maintainability issues')
Add explicit trigger guidance with a 'Use when...' clause containing natural terms like 'review my code', 'PR review', 'check this pull request', 'code quality'
Remove or relocate the invocation syntax ('invoke with $agent-code-review-swarm') as it doesn't help with skill selection and wastes description space
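Applying these suggestions, the skill's frontmatter description might look like the following sketch (illustrative wording only, not taken from the skill itself):

```yaml
---
name: code-review-swarm
description: >
  Performs multi-perspective code review on pull requests, analyzing
  security, performance, and maintainability issues with specialized
  review agents. Use when the user says "review my code", "PR review",
  "check this pull request", or asks about code quality.
---
```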
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description contains no concrete actions whatsoever. 'Agent skill for code-review-swarm' is abstract and does not describe what the skill actually does (e.g., reviewing code, finding bugs, suggesting improvements). | 1 / 3 |
| Completeness | The description fails to answer both 'what does this do' and 'when should Claude use it'. It only provides invocation syntax ('invoke with $agent-code-review-swarm') without explaining capabilities or triggers. | 1 / 3 |
| Trigger Term Quality | The only potentially relevant term is 'code-review' embedded in the skill name; no natural user keywords are provided. Users would say 'review my code', 'check for bugs', or 'PR review', none of which appear. | 1 / 3 |
| Distinctiveness / Conflict Risk | The description is so vague that it provides no distinguishing characteristics. 'Agent skill' is generic and could apply to any agent-based skill, creating high conflict risk. | 1 / 3 |
| Total | | 4 / 12 |
Implementation: 27%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill suffers from severe verbosity and poor organization, presenting what should be a concise overview as an exhaustive reference document. While it contains some actionable gh CLI examples, much of the content relies on fictional tooling (ruv-swarm) and on descriptive JSON that provides no executable guidance. The lack of progressive disclosure and validation checkpoints makes the skill difficult to use effectively.
Suggestions
Reduce to a concise overview (under 100 lines) with core gh CLI patterns, moving agent configurations and templates to separate reference files
Remove or clearly mark the fictional 'npx ruv-swarm' commands, focusing on real gh CLI workflows that Claude can execute
Delete the descriptive JSON blocks listing what agents check - Claude already knows what security/performance reviews entail
Add explicit validation steps after posting reviews (e.g., verify comment was posted, check PR status updated correctly)
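The last suggestion can be sketched with real gh CLI commands. This is a hypothetical fragment, not the skill's actual workflow: the PR number and repository are placeholders, and the commands assume an authenticated gh session.

```shell
# Placeholders for illustration only
PR_NUMBER=123
REPO=owner/repo

# Post the review comment
gh pr review "$PR_NUMBER" --repo "$REPO" --comment \
  --body "Automated swarm review complete: see inline findings."

# Validation checkpoint: confirm the review actually appears on the PR
# before reporting success, rather than assuming the post succeeded
gh pr view "$PR_NUMBER" --repo "$REPO" --json reviews --jq '.reviews[-1].state'
```

A re-run feedback loop would inspect that final state (e.g., `COMMENTED` vs. an empty result) and retry or surface the failure instead of silently continuing.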
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at 400+ lines with massive redundancy. Multiple sections repeat similar concepts (review agents defined in both bash and JavaScript), it includes obvious information Claude knows (what security checks are, what performance metrics mean), and the JSON blocks describing agent capabilities are purely descriptive rather than actionable. | 1 / 3 |
| Actionability | Contains concrete bash commands and code examples, but relies heavily on a fictional 'npx ruv-swarm' tool that doesn't exist. The gh CLI examples are more realistic and executable, but many code blocks are illustrative rather than copy-paste ready. The JavaScript agent examples are descriptive JSON, not executable code. | 2 / 3 |
| Workflow Clarity | Individual sections show steps but lack explicit validation checkpoints. The GitHub Actions workflow is the clearest sequence, but most other sections present commands without error handling or verification steps. There are no feedback loops for when reviews fail or need re-running. | 2 / 3 |
| Progressive Disclosure | A monolithic wall of text with everything inline. References to external files (swarm-pr.md, workflow-automation.md) appear only at the very end. Content that should live in separate reference files (agent configurations, templates, YAML examples) is all embedded, making the skill overwhelming and hard to navigate. | 1 / 3 |
| Total | | 6 / 12 |
Validation: 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
10 / 11 checks passed.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (543 lines); consider splitting into references/ and linking | Warning |
| Total | 10 / 11 Passed | |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.