**Skill description:** "Autonomous multi-round research review loop. Repeatedly reviews using Claude Code via claude-review MCP, implements fixes, and re-reviews until positive assessment or max rounds reached. Use when user says 'auto review loop', 'review until it passes', or wants autonomous iterative improvement."
**Overall Score:** 90 (88%)
- **Impact:** Pending — no eval scenarios have been run
- **Quality:** Passed — no known issues
## Discovery (100%)

*Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.*
This is a strong skill description that clearly communicates a specific autonomous review workflow, includes explicit trigger terms, and answers both what and when. The description is concise yet comprehensive, mentioning the specific tool (claude-review MCP), the iterative nature, and termination conditions. It uses proper third-person voice throughout.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'reviews using Claude Code via claude-review MCP', 'implements fixes', 're-reviews until positive assessment or max rounds reached'. Describes a clear multi-step process with termination conditions. | 3 / 3 |
| Completeness | Clearly answers both what ('Autonomous multi-round research review loop. Repeatedly reviews using Claude Code via claude-review MCP, implements fixes, and re-reviews until positive assessment or max rounds reached') and when ('Use when user says "auto review loop", "review until it passes", or wants autonomous iterative improvement'). | 3 / 3 |
| Trigger Term Quality | Includes natural trigger phrases users would say: 'auto review loop', 'review until it passes', 'autonomous iterative improvement'. These are realistic phrases a user would use when requesting this specific workflow. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche: autonomous iterative review loops using a specific MCP tool. The trigger terms ('auto review loop', 'review until it passes') are specific enough to avoid conflicts with simple one-shot review skills or general code improvement skills. | 3 / 3 |
| **Total** | | **12 / 12 (Passed)** |
## Implementation (77%)

*Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.*
This is a well-crafted, highly actionable skill for an autonomous review loop, with a clear multi-phase workflow, explicit validation checkpoints, and robust state recovery. Its main weaknesses are moderate verbosity from repeated MCP polling instructions and a monolithic structure that would benefit from splitting auxiliary content (prompt templates, state schemas) into separate referenced files. Overall, it's a strong skill that would serve Claude well in executing this complex autonomous workflow.
### Suggestions

- Deduplicate the MCP polling instructions (repeated three times) by defining the pattern once and referencing it, e.g., "Follow the standard poll pattern (see above)" for subsequent mentions.
- Consider extracting the prompt templates and the REVIEW_STATE.json schema into separate referenced files (e.g., PROMPTS.md, STATE_SCHEMA.md) to improve progressive disclosure and reduce the main file's length.
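To make the second suggestion concrete, here is a minimal sketch of state persistence for a review loop. The filename REVIEW_STATE.json comes from the review above, but the field names (`round`, `last_verdict`, `pending_fixes`) are illustrative assumptions, not the skill's actual schema:

```python
import json
import pathlib

STATE_PATH = pathlib.Path("REVIEW_STATE.json")  # filename from the skill; fields below are hypothetical

def save_state(round_num, verdict, pending_fixes, path=STATE_PATH):
    """Persist loop progress so a compacted or restarted agent can resume."""
    state = {
        "round": round_num,          # which review round just finished
        "last_verdict": verdict,     # e.g. "needs_fixes" or "positive"
        "pending_fixes": pending_fixes,  # fixes not yet implemented
    }
    path.write_text(json.dumps(state, indent=2))

def load_state(path=STATE_PATH):
    """Return the saved state, or None if no prior state exists."""
    if not path.exists():
        return None
    return json.loads(path.read_text())
```

Keeping this schema in a referenced STATE_SCHEMA.md, as suggested, lets the main skill file simply point at it instead of restating the JSON structure inline.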
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is fairly long (~200 lines) and includes some redundancy — the polling instructions for mcp__claude-review__review_status are repeated three times nearly verbatim. The state persistence section and human checkpoint logic add significant length, though most content is genuinely instructive. Some tightening is possible. | 2 / 3 |
| Actionability | Highly actionable with specific MCP tool calls, concrete JSON schemas for state files, exact prompt templates for the reviewer, detailed parsing rules for human checkpoint input, and explicit stop conditions with numeric thresholds. Copy-paste ready throughout. | 3 / 3 |
| Workflow Clarity | Excellent multi-phase workflow (A→B→C→D→E) with clear sequencing, explicit validation/stop conditions, state recovery logic for compaction, human checkpoint gates, and error handling guidance (e.g., large file fallback, stale state detection). Feedback loops are well-defined with re-review after fixes. | 3 / 3 |
| Progressive Disclosure | The content is well-structured with clear headers and phases, but everything is in a single monolithic file. The prompt templates, state schema, and Feishu notification details could be split into referenced files. The collapsible details block for raw responses is a nice touch, but the overall file is dense. | 2 / 3 |
| **Total** | | **10 / 12 (Passed)** |
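The Conciseness row flags the polling instructions as repeated three times nearly verbatim. One way to define the pattern once, as the suggestions recommend, is a single poll helper. This is a sketch under assumptions: `check` stands in for a status call such as mcp__claude-review__review_status, and the terminal statuses and timeout values are illustrative, not taken from the skill:

```python
import time

def poll_until_terminal(check, interval_s=10.0, timeout_s=600.0,
                        clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it reports a terminal status or the timeout expires.

    check() is assumed to return a dict with a "status" key; "completed" and
    "failed" are hypothetical terminal values for illustration.
    """
    deadline = clock() + timeout_s
    while True:
        result = check()
        if result.get("status") in ("completed", "failed"):
            return result
        if clock() >= deadline:
            raise TimeoutError("review did not reach a terminal status in time")
        sleep(interval_s)
```

With the pattern defined once like this, each of the three call sites in the skill can shrink to a one-line reference ("follow the standard poll pattern"), directly addressing the redundancy noted above.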
## Validation (100%)

*Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.*

**Skill structure:** 11 / 11 checks passed. No warnings or errors.
Commit: `dc00dfb`