Get a deep critical review of research from Claude via claude-review MCP. Use when user says "review my research", "help me review", "get external review", or wants critical feedback on research ideas, papers, or experimental results.
Override for Codex users who want Claude Code, not a second Codex agent, to act as the reviewer. Install this package after skills/skills-codex/*.
claude-review MCP (high-rigor review)

Get a multi-round critical review of research work from an external LLM with maximum reasoning depth.

Reviewer: claude-review — a Claude reviewer invoked through the local claude-review MCP bridge. Set CLAUDE_REVIEW_MODEL if you need a specific Claude model override.

Setup:
1. Copy skills/skills-codex/* into ~/.codex/skills/.
2. Copy skills/skills-codex-claude-review/* into ~/.codex/skills/ and allow it to overwrite the same skill names.
3. Register the MCP server: codex mcp add claude-review -- python3 ~/.codex/mcp-servers/claude-review/server.py
4. This exposes the tools mcp__claude-review__review_start, mcp__claude-review__review_reply_start, and mcp__claude-review__review_status.

Before calling the external reviewer, compile a comprehensive briefing of the research context.
Then send a detailed prompt requesting a high-rigor review:
mcp__claude-review__review_start:
prompt: |
[Full research context + specific questions]
Please act as a senior ML reviewer (NeurIPS/ICML level). Identify:
1. Logical gaps or unjustified claims
2. Missing experiments that would strengthen the story
3. Narrative weaknesses
4. Whether the contribution is sufficient for a top venue
Please be brutally honest.

After this start call, immediately save the returned jobId and poll mcp__claude-review__review_status with a bounded waitSeconds until done=true. Treat the response field of the completed status payload as the reviewer output, and save the completed threadId for any follow-up round.
Use mcp__claude-review__review_reply_start with the saved completed threadId, then poll mcp__claude-review__review_status with the returned jobId until done=true, to continue the conversation.

For each round, frame the request explicitly, for example:

"I'm going to present a complete ML research project for your critical review. Please act as a senior ML reviewer (NeurIPS/ICML level)..."

Key follow-up patterns:

"Please design the minimal additional experiment package that gives the highest acceptance lift per GPU week. Our compute: [describe]. Be very specific about configurations."

"Please turn this into a concrete paper outline with section-by-section claims and figure plan."

"Please give me a results-to-claims matrix: what claim is allowed under each possible outcome of experiments X and Y?"

"Please write a mock NeurIPS review with: Summary, Strengths, Weaknesses, Questions for Authors, Score, Confidence, and What Would Move Toward Accept."

Stop iterating when the reviewer raises no new substantive issues.

Afterwards:
Save the full interaction and conclusions to a review document in the project root.
Update project memory/notes with key review conclusions.
Keep the threadId for potential future resumption.
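A follow-up round and the save step can be sketched the same way. Again this is only an illustration: review_reply_start and review_status are hypothetical stand-ins for the MCP tool calls (the status stub reports the job as already done), and research_review.md is an assumed filename for the review document.

```python
def review_reply_start(thread_id, prompt):
    # Stand-in for mcp__claude-review__review_reply_start: continues the
    # saved thread and returns a new job handle.
    return {"jobId": "job-2"}

def review_status(job_id, wait_seconds):
    # Stand-in for mcp__claude-review__review_status, stubbed as already done.
    return {"done": True, "response": "Round 2 feedback...", "threadId": "thread-1"}

def follow_up(thread_id, prompt, wait_seconds=30):
    job = review_reply_start(thread_id, prompt)
    while True:
        status = review_status(job["jobId"], wait_seconds)
        if status["done"]:
            return status

status = follow_up("thread-1", "Please write a mock NeurIPS review ...")

# Persist the round together with the threadId so the thread can be resumed later.
doc = "External review log\n\nthreadId: {}\n\n{}".format(
    status["threadId"], status["response"])
with open("research_review.md", "w") as f:
    f.write(doc)
```

The design point is simply that the threadId travels with the saved document: any later session can pick it up and resume the same reviewer conversation.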