
auto-review-loop-minimax

Autonomous multi-round research review loop using MiniMax API. Use when you want to use MiniMax instead of Codex MCP for external review. Trigger with "auto review loop minimax" or "minimax review".

81

Quality: 76% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Critical. Do not install without reviewing.

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/skills-codex/auto-review-loop-minimax/SKILL.md

Quality

Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is strong on completeness, trigger terms, and distinctiveness, clearly specifying when and why to use this skill with explicit trigger phrases and differentiation from Codex MCP. Its main weakness is a lack of specificity about what concrete actions the 'research review loop' actually performs—what does it review, what outputs does it produce, what does 'multi-round' entail?

Suggestions

Add specific concrete actions describing what the review loop does, e.g., 'Sends code or documents to MiniMax API for iterative feedback, collects review comments, and applies suggested changes across multiple rounds.'
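A description at that level of concreteness maps onto a simple control loop. The sketch below is illustrative only: the pass threshold, round limit, and the `reviewer` and `apply_fixes` callables are assumptions standing in for the MiniMax API call and the agent's edits, not code taken from the skill.

```python
# Minimal sketch of an autonomous multi-round review loop.
# `reviewer` stands in for the external MiniMax review call;
# `apply_fixes` stands in for the agent applying the review comments.
# The threshold (80) and round limit (3) are assumed for illustration.

def review_loop(content, reviewer, apply_fixes, max_rounds=3, pass_score=80):
    """Iterate review -> fix until the reviewer's score passes or rounds run out."""
    history = []
    for round_no in range(1, max_rounds + 1):
        assessment = reviewer(content, round_no)   # external review round
        history.append(assessment)
        if assessment["score"] >= pass_score:      # stop condition: review passed
            return content, history
        content = apply_fixes(content, assessment["comments"])
    return content, history                        # rounds exhausted
```

Spelling out inputs (the content under review), outputs (assessments plus the revised content), and the stop condition in the description would resolve the specificity gap noted above.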

Specificity: 2/3
Names the domain ('research review loop') and the tool ('MiniMax API'), but does not list specific concrete actions like what the review entails, what inputs/outputs are involved, or what steps the loop performs.

Completeness: 3/3
Clearly answers both 'what' (autonomous multi-round research review loop using MiniMax API) and 'when' (use when wanting MiniMax instead of Codex MCP for external review, triggered by specific phrases).

Trigger Term Quality: 3/3
Includes explicit trigger phrases ('auto review loop minimax', 'minimax review') and natural keywords like 'MiniMax', 'review', and 'Codex MCP' that help distinguish when to use this skill. These are terms a user would naturally say.

Distinctiveness / Conflict Risk: 3/3
Highly distinctive due to the specific mention of MiniMax API, contrast with Codex MCP, and unique trigger phrases. Very unlikely to conflict with other skills.

Total: 11/12 (Passed)

Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a thorough, highly actionable skill with excellent workflow clarity and state management for a complex multi-round autonomous loop. However, it suffers significantly from verbosity — the same API call patterns and prompt templates are duplicated three times, roughly doubling the token cost. The content would benefit greatly from defining patterns once and referencing them, and from splitting prompt templates into a separate file.

Suggestions

Eliminate duplication by defining the MCP tool call and curl command patterns once (in API Configuration), then referencing them in Phase A and the Prompt Template section instead of repeating the full code blocks.

Extract the round 2+ prompt templates into a separate file (e.g., REVIEW_PROMPTS.md) and reference it from the main skill to reduce token footprint.

Remove the explanatory note about why MiniMax is used instead of Codex — this is context for the skill author, not actionable guidance for Claude.

Consolidate the review prompt content (system message, scoring criteria) into a single constants/templates section rather than embedding it in multiple code blocks.
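The first and last suggestions amount to the same refactor: build the request once and reference it everywhere. A hedged sketch of that pattern follows; the endpoint URL, model id, and system prompt are assumptions for illustration, not values taken from the skill.

```python
# Define the review request once; Phase A and the prompt-template section
# can then reference this helper instead of repeating the full payload.
# The endpoint URL, model id, and system prompt below are assumed placeholders.
import json

MINIMAX_URL = "https://api.minimax.io/v1/text/chatcompletion_v2"  # assumed endpoint
SYSTEM_PROMPT = "You are a strict research reviewer."             # placeholder

def build_review_request(api_key: str, user_prompt: str) -> dict:
    """Return the url, headers, and JSON body shared by every review round."""
    return {
        "url": MINIMAX_URL,
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": "MiniMax-Text-01",  # assumed model id
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_prompt},
            ],
        }),
    }
```

With a single builder like this, the curl and MCP variants only need to differ in transport, which is exactly the deduplication the suggestions call for.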

Conciseness: 1/3
The skill is extremely verbose with massive duplication. The curl fallback and MCP method are each shown three separate times (API Configuration, Phase A, and Prompt Template sections), nearly tripling the content length. The review prompt text is repeated almost verbatim across these sections. Much of this could be defined once and referenced.

Actionability: 3/3
The skill provides fully executable curl commands, concrete MCP tool invocations, specific JSON schemas for state files, exact prompt templates, and detailed markdown templates for documentation. Everything is copy-paste ready with clear parameter placeholders.

Workflow Clarity: 3/3
The workflow is exceptionally well-structured with clear phases (A through E), explicit stop conditions, state recovery logic with multiple edge cases handled (stale state, completed state, fresh start), validation checkpoints (parse assessment before proceeding), and a feedback loop (review → fix → re-review). Prioritization rules and error handling (large file fallback) are included.

Progressive Disclosure: 2/3
The content is a monolithic wall of text at ~250+ lines with no references to external files. The prompt templates for round 2+ could be in a separate file, and the API configuration details could be extracted. The structure within the file is good (clear headers), but the sheer volume of inline content hurts discoverability.

Total: 9/12 (Passed)
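The state-recovery branching praised under Workflow Clarity (stale state, completed state, fresh start) can be sketched as a small pure function. The field names, status values, and the one-day staleness window below are assumptions, not the skill's actual schema.

```python
# Sketch of the three-way state recovery: fresh start, resume, or done.
# Field names ("status", "round", "updated_at") and the 24-hour staleness
# window are assumed for illustration; the skill defines its own schema.
import json
from typing import Optional

STALE_AFTER = 24 * 3600  # assumed: discard state older than one day

def recover_state(raw: Optional[str], now: float) -> dict:
    """Decide how to resume the loop from a previously saved state file."""
    if raw is None:
        return {"action": "fresh", "round": 1}              # no state file yet
    state = json.loads(raw)
    if state.get("status") == "completed":
        return {"action": "done", "round": state["round"]}  # nothing to resume
    if now - state.get("updated_at", 0) > STALE_AFTER:
        return {"action": "fresh", "round": 1}              # stale: start over
    return {"action": "resume", "round": state["round"]}    # continue mid-loop
```

Keeping this decision in one function is what makes the edge cases auditable, which is presumably why the review scores this dimension 3/3.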

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 11/11 passed

Validation for skill structure: no warnings or errors.

Repository: wanshuiyin/Auto-claude-code-research-in-sleep (Reviewed)

