Workflow 1: Full idea discovery pipeline. Orchestrates research-lit → idea-creator → novelty-check → research-review to go from a broad research direction to validated, pilot-tested ideas. Use when the user says "找idea全流程", "idea discovery pipeline", "从零开始找方向", or wants the complete idea exploration workflow.
Override for Codex users who want Gemini, not a second Codex agent, to act as the reviewer. Install this package after `skills/skills-codex/*`.
Orchestrate a complete idea discovery workflow for: $ARGUMENTS
This skill chains sub-skills into a single automated pipeline:
/research-lit → /idea-creator → /novelty-check → /research-review → /research-refine-pipeline
(survey)        (brainstorm)     (verify novel)    (critical feedback)   (refine method + plan experiments)

Each phase builds on the previous one's output. The final deliverables are a validated IDEA_REPORT.md with ranked ideas, plus a refined proposal (refine-logs/FINAL_PROPOSAL.md) and experiment plan (refine-logs/EXPERIMENT_PLAN.md) for the top idea.
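The phase ordering and early-kill filtering described above can be sketched as a toy shell driver. Every function below is an illustrative stub standing in for the corresponding sub-skill, not the real implementation:

```shell
# Illustrative stubs only -- the real phases are slash-command sub-skills.
research_lit()    { echo "landscape-summary"; }
idea_creator()    { echo "idea-A idea-B idea-C"; }
novelty_check()   { [ "$1" != "idea-B" ]; }     # pretend idea-B is already published
research_review() { echo "reviewer feedback on $1"; }

landscape=$(research_lit)
survivors=""
for idea in $(idea_creator "$landscape"); do
  # Kill non-novel ideas before spending reviewer effort on them.
  novelty_check "$idea" && survivors="$survivors $idea"
done
for idea in $survivors; do
  research_review "$idea"
done
echo "surviving:$survivors"
```

The point of the sketch is the shape: each phase consumes the previous phase's output, and ideas are filtered out as early as possible rather than carried through the whole chain.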
- Set to false to always wait for explicit user confirmation.
- gemini-review — Gemini reviewer invoked through the local gemini-review MCP bridge. Passed to the reviewer-aware sub-skills installed by this overlay.
- arXiv download: when true, /research-lit downloads the top relevant arXiv PDFs during Phase 1. When false (default), only fetches metadata. Passed through to /research-lit.

💡 These are defaults. Override by telling the skill, e.g., /idea-discovery "topic" — pilot budget: 4h per idea, 20h total or /idea-discovery "topic" — arxiv download: true.
Invoke /research-lit to map the research landscape:
/research-lit "$ARGUMENTS"What this does:
🚦 Checkpoint: Present the landscape summary to the user. Ask:
📚 Literature survey complete. Here's what I found:
- [key findings, gaps, open problems]
Does this match your understanding? Should I adjust the scope before generating ideas?
(If no response, I'll proceed with the top-ranked direction.)

If the user asks for changes, re-run /research-lit with adjusted scope, and present again. Repeat until the user is satisfied.

Invoke /idea-creator with the landscape context:
/idea-creator "$ARGUMENTS"What this does:
/idea-creator overlayIDEA_REPORT.md🚦 Checkpoint: Present IDEA_REPORT.md ranked ideas to the user. Ask:
💡 Generated X ideas, filtered to Y, piloted Z. Top results:
1. [Idea 1] — Pilot: POSITIVE (+X%)
2. [Idea 2] — Pilot: WEAK POSITIVE (+Y%)
3. [Idea 3] — Pilot: NEGATIVE, eliminated
Which ideas should I validate further? Or should I regenerate with different constraints?
(If no response, I'll proceed with the top-ranked ideas.)

For each top idea (positive pilot signal), run a thorough novelty check:
/novelty-check "[top idea 1 description]"
/novelty-check "[top idea 2 description]"What this does:
/novelty-check overlayUpdate IDEA_REPORT.md with deep novelty results. Eliminate any idea that turns out to be already published.
For the surviving top idea(s), get brutal feedback:
/research-review "[top idea with hypothesis + pilot results]"

What this does: sends each surviving idea to the Gemini reviewer through the gemini-review MCP bridge for critical feedback.

Update IDEA_REPORT.md with the reviewer feedback and a revised plan.
After review, refine the top idea into a concrete proposal and plan experiments:
/research-refine-pipeline "[top idea description + pilot results + reviewer feedback]"

What this does: refines the method and plans experiments, producing refine-logs/FINAL_PROPOSAL.md, refine-logs/EXPERIMENT_PLAN.md, and refine-logs/EXPERIMENT_TRACKER.md.

🚦 Checkpoint: Present the refined proposal summary:
🔬 Method refined and experiment plan ready:
- Problem anchor: [anchored problem]
- Method thesis: [one sentence]
- Dominant contribution: [what's new]
- Must-run experiments: [N blocks]
- First 3 runs to launch: [list]
Proceed to implementation? Or adjust the proposal?

If the user wants another iteration, run /research-refine for another round. If the user is short on time, run /research-refine only (skip /experiment-plan) and note remaining risks in the report.

Finalize IDEA_REPORT.md with all accumulated information:
# Idea Discovery Report
**Direction**: $ARGUMENTS
**Date**: [today]
**Pipeline**: research-lit → idea-creator → novelty-check → research-review → research-refine-pipeline
## Executive Summary
[2-3 sentences: best idea, key evidence, recommended next step]
## Literature Landscape
[from Phase 1]
## Ranked Ideas
[from Phase 2, updated with Phase 3-4 results]
### 🏆 Idea 1: [title] — RECOMMENDED
- Pilot: POSITIVE (+X%)
- Novelty: CONFIRMED (closest: [paper], differentiation: [what's different])
- Reviewer score: X/10
- Next step: implement full experiment → /auto-review-loop
### Idea 2: [title] — BACKUP
...
## Eliminated Ideas
[ideas killed at each phase, with reasons]
## Refined Proposal
- Proposal: `refine-logs/FINAL_PROPOSAL.md`
- Experiment plan: `refine-logs/EXPERIMENT_PLAN.md`
- Tracker: `refine-logs/EXPERIMENT_TRACKER.md`
## Next Steps
- [ ] /run-experiment to deploy experiments from the plan
- [ ] /auto-review-loop to iterate until submission-ready
- [ ] Or invoke /research-pipeline for the complete end-to-end flow

Large file handling: If the Write tool fails due to file size, immediately retry using Bash (cat << 'EOF' > file) to write in chunks. Do NOT ask the user for permission — just do it silently.
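The chunked Bash fallback can look like the following sketch; the filename and chunk boundaries are illustrative:

```shell
# First chunk: create (truncate) the file with '>'.
cat << 'EOF' > IDEA_REPORT.md
# Idea Discovery Report
## Executive Summary
EOF

# Subsequent chunks: append with '>>' so earlier chunks survive.
cat << 'EOF' >> IDEA_REPORT.md
## Ranked Ideas
### Idea 1: [title]
EOF
```

Quoting the heredoc delimiter ('EOF' rather than EOF) keeps the shell from expanding `$`, backticks, and backslashes inside the report text, which matters for templates containing `$ARGUMENTS`-style placeholders.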
Don't skip phases. Each phase filters and validates — skipping leads to wasted effort later.
Checkpoint between phases. Briefly summarize what was found before moving on.
Kill ideas early. It's better to kill 10 bad ideas in Phase 3 than to implement one and fail.
Empirical signal > theoretical appeal. An idea with a positive pilot outranks a "sounds great" idea without evidence.
Document everything. Dead ends are just as valuable as successes for future reference.
Be honest with the reviewer. Include negative results and failed pilots in the review prompt.
Feishu notifications are optional. If ~/.codex/feishu.json exists, send a checkpoint notification at each phase transition and a pipeline_done notification with the final report. If the config is absent or disabled, skip silently.
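A minimal sketch of that guard is below. The `webhook` field name and the `FEISHU_CFG` override are assumptions, not this skill's actual schema; the message payload follows Feishu's standard custom-bot webhook format:

```shell
notify_feishu() {
  local cfg="${FEISHU_CFG:-$HOME/.codex/feishu.json}"
  [ -f "$cfg" ] || return 0   # config absent: skip silently
  local url
  url=$(python3 -c 'import json,sys; print(json.load(open(sys.argv[1])).get("webhook",""))' "$cfg")
  [ -n "$url" ] || return 0   # no webhook configured: skip silently
  # Feishu custom-bot webhooks accept {"msg_type":"text","content":{"text":...}}.
  curl -s -X POST "$url" -H 'Content-Type: application/json' \
    -d "{\"msg_type\":\"text\",\"content\":{\"text\":\"$1\"}}" > /dev/null
}

notify_feishu "checkpoint: Phase 1 (literature survey) complete"
```

Because the function returns 0 on a missing or empty config, callers never need to branch on whether notifications are enabled, which matches the "skip silently" behavior above.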
After this pipeline produces a validated top idea:
/idea-discovery "direction" ← you are here (Workflow 1, includes method refinement + experiment planning)
/run-experiment ← deploy experiments from the plan
/auto-review-loop "top idea" ← Workflow 2: iterate until submission-ready
Or use /research-pipeline for the full end-to-end flow.