
auto-review-loop-minimax

Autonomous multi-round research review loop using MiniMax API. Use when you want to use MiniMax instead of Codex MCP for external review. Trigger with "auto review loop minimax" or "minimax review".

Overall score: 81

Quality: 76% (Does it follow best practices?)
Impact: Pending (No eval scenarios have been run)
Security by Snyk: Advisory. Suggest reviewing before use.

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/skills-codex/auto-review-loop-minimax/SKILL.md
SKILL.md
Quality
Evals
Security

Quality

Discovery

89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is strong on completeness, trigger terms, and distinctiveness, clearly specifying when and why to use this skill with explicit trigger phrases and differentiation from Codex MCP. Its main weakness is a lack of specificity about what concrete actions the review loop performs—what does 'multi-round research review' actually entail in terms of steps or outputs?

Suggestions

Add specific concrete actions describing what the review loop does, e.g., 'Sends code or documents to MiniMax API for iterative feedback, collects review comments, and applies suggested changes across multiple rounds.'

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | It names the domain ('multi-round research review loop') and mentions the MiniMax API, but doesn't list specific concrete actions like what the review entails, what outputs are produced, or what steps are performed. | 2 / 3 |
| Completeness | Clearly answers both 'what' (autonomous multi-round research review loop using MiniMax API) and 'when' (when wanting MiniMax instead of Codex MCP for external review, triggered by specific phrases). | 3 / 3 |
| Trigger Term Quality | Includes explicit trigger phrases ('auto review loop minimax', 'minimax review') and natural keywords like 'MiniMax API', 'external review', and 'Codex MCP' that help distinguish when to use this skill. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive due to the specific mention of MiniMax API, explicit contrast with Codex MCP, and unique trigger phrases. Very unlikely to conflict with other skills. | 3 / 3 |
| Total | | 11 / 12 |

Passed

Implementation

62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill is highly actionable with excellent workflow clarity, featuring explicit state recovery, validation checkpoints, and clear phase sequencing. However, it suffers significantly from verbosity — the same curl commands, MCP invocations, and system prompts are repeated three times across different sections, roughly doubling the token cost. Better factoring of repeated content into single definitions or separate reference files would dramatically improve token efficiency.

Suggestions

Define the curl command and MCP tool invocation once (e.g., in the API Configuration section) and reference it in Phase A and the Prompt Template section instead of repeating the full code blocks three times.

Define the system prompt string as a named constant at the top (alongside other constants) rather than duplicating it verbatim in every code block.

Move the Round 2+ prompt templates to a separate reference file (e.g., REVIEW_PROMPTS.md) since they are lengthy and only needed for reference, not for understanding the workflow.
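As a rough sketch of the first two suggestions, the request body and the API call could each be defined once and reused in every phase. The endpoint URL, model name, system prompt, and `MINIMAX_API_KEY` variable below are illustrative assumptions, not values taken from the skill itself:

```shell
# Constants defined once, near the top of the skill (values are assumptions).
MINIMAX_ENDPOINT="https://api.minimax.chat/v1/text/chatcompletion_v2"
MINIMAX_MODEL="MiniMax-Text-01"
REVIEW_SYSTEM_PROMPT="You are an external research reviewer."

# Build the JSON request body for one review round; callers pass the user prompt.
minimax_request_body() {
  printf '{"model":"%s","messages":[{"role":"system","content":"%s"},{"role":"user","content":"%s"}]}' \
    "$MINIMAX_MODEL" "$REVIEW_SYSTEM_PROMPT" "$1"
}

# Single definition of the curl fallback; Phase A and the prompt-template
# section would both call this instead of repeating the full code block.
minimax_review() {
  curl -s "$MINIMAX_ENDPOINT" \
    -H "Authorization: Bearer $MINIMAX_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$(minimax_request_body "$1")"
}
```

With this factoring, later sections can simply say "call `minimax_review` with the Round N prompt," which removes the duplicated curl blocks and the verbatim system-prompt copies the review flags.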

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is extremely verbose at ~250+ lines with significant redundancy. The curl fallback and MCP examples are repeated three times (API Configuration, Phase A, and Prompt Template sections), nearly doubling the content. The system prompt string is duplicated verbatim across multiple code blocks. Much of this could be defined once and referenced. | 1 / 3 |
| Actionability | The skill provides fully executable curl commands, concrete MCP tool invocations, specific JSON schemas for state files, exact file paths, and complete prompt templates. Every step has copy-paste ready examples with specific parameters. | 3 / 3 |
| Workflow Clarity | The workflow is clearly sequenced through Initialization → Phase A-E loop → Termination with explicit validation checkpoints (stop condition in Phase B), error recovery (state persistence for compaction recovery, stale state handling), and feedback loops (review → fix → re-review). The initialization logic handles fresh start, resume, and stale state cases explicitly. | 3 / 3 |
| Progressive Disclosure | The skill references shared protocols at the end via links (output versioning, manifest, language), which is good. However, the main body is a monolithic wall of text that could benefit from splitting the repeated prompt templates and API configuration into separate reference files. The curl fallback being repeated three times is a clear sign content should be factored out. | 2 / 3 |
| Total | | 9 / 12 |

Passed
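The initialization logic praised in the Workflow Clarity reasoning (fresh start, resume, stale state) can be sketched as follows. The state-file path, JSON fields, and staleness window are hypothetical illustrations, not values taken from the skill:

```shell
# Hypothetical sketch of fresh-start / resume / stale-state handling.
STATE_FILE=".review_state.json"
MAX_AGE_SECONDS=$((24 * 3600))  # assumed staleness window: 24 hours

load_or_init_state() {
  if [ ! -f "$STATE_FILE" ]; then
    # Fresh start: no prior state, begin at round 1.
    echo '{"round":1,"status":"fresh"}' > "$STATE_FILE"
    return
  fi
  # Compute the state file's age (GNU stat first, BSD stat as fallback).
  age=$(( $(date +%s) - $(stat -c %Y "$STATE_FILE" 2>/dev/null || stat -f %m "$STATE_FILE") ))
  if [ "$age" -gt "$MAX_AGE_SECONDS" ]; then
    # Stale state: discard and restart rather than resume mid-loop.
    echo '{"round":1,"status":"fresh"}' > "$STATE_FILE"
  fi
  # Otherwise: resume from the persisted round (survives context compaction).
}
```

The point of the sketch is the three-way branch: missing state means a fresh start, recent state means resume, and old state is treated as stale and reset, which is the recovery behavior the review credits the skill with making explicit.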

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 checks passed

Validation for skill structure

No warnings or errors.

Repository: wanshuiyin/Auto-claude-code-research-in-sleep (Reviewed)
