idea-creator

Generate and rank research ideas given a broad direction. Use when the user says "找idea" ("find an idea"), "brainstorm ideas", "generate research ideas", "what can we work on", or wants to explore a research area for publishable directions.

Overall score: 90

- Quality: 87% (does it follow best practices?)
- Impact: Pending (no eval scenarios have been run)
- Security (by Snyk): Critical (do not install without reviewing)

Quality

Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-crafted description with strong trigger term coverage (including bilingual terms) and a clear 'Use when' clause that makes it easy for Claude to select appropriately. The main weakness is that the 'what' portion could be more specific about the concrete actions performed beyond just 'generate and rank.' Overall, it is a solid description that would perform well in a multi-skill selection scenario.

Suggestions

Expand the capability description with more specific actions, e.g., 'Generate, evaluate novelty/feasibility, and rank research ideas with justifications given a broad direction.'
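As a sketch of how that suggestion might be applied (assuming the skill's SKILL.md uses the standard YAML frontmatter with `name` and `description` fields; the exact file layout of this skill is not shown here), the expanded description could look like:

```yaml
---
name: idea-creator
description: >-
  Generate, evaluate novelty/feasibility, and rank research ideas with
  justifications given a broad direction. Use when the user says "找idea",
  "brainstorm ideas", "generate research ideas", "what can we work on",
  or wants to explore a research area for publishable directions.
---
```

This keeps the existing trigger terms intact while naming the concrete sub-actions (novelty/feasibility evaluation, ranked output with justifications) that the Specificity dimension flagged as missing.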

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description names the domain (research ideas) and two actions (generate and rank), but doesn't elaborate on concrete sub-actions such as evaluating novelty, assessing feasibility, producing structured comparisons, or outputting ranked lists with justifications. | 2 / 3 |
| Completeness | Clearly answers both 'what' (generate and rank research ideas given a broad direction) and 'when' (an explicit 'Use when' clause with multiple trigger phrases and a general condition about exploring research areas). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms, including bilingual phrases ('找idea'), common phrasings ('brainstorm ideas', 'generate research ideas', 'what can we work on'), and a broader contextual trigger ('explore a research area for publishable directions'). | 3 / 3 |
| Distinctiveness / Conflict Risk | The combination of research idea generation, ranking, and publishable directions creates a clear niche. Specific trigger terms like '找idea' and 'generate research ideas' are unlikely to conflict with other skills. | 3 / 3 |
| **Total** | | **11 / 12** |

Passed

Implementation: 85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured, highly actionable skill that defines a comprehensive research idea generation pipeline with clear phases, concrete templates, and appropriate validation checkpoints. Its main weakness is moderate verbosity; some sections could be tightened (e.g., the composing section and the duplicate review-tracing mentions) without losing clarity. Overall it's a strong skill that effectively balances comprehensiveness with usability.

Suggestions

- Consolidate the 'Review Tracing' section at the bottom with the inline review-tracing instructions in Phases 2 and 4 to eliminate redundancy.
- The 'Composing with Other Skills' section largely restates what's already embedded in the workflow; consider trimming it to a single line or removing it entirely.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is quite long (~300 lines) and includes some redundancy (review tracing is mentioned twice; the composing section restates what's already clear from the workflow). However, most content is genuinely instructive rather than explaining things Claude already knows; it's detailed because the workflow is complex, not because it's padded. | 2 / 3 |
| Actionability | The skill provides concrete, executable guidance throughout: specific bash commands for wiki resolution, exact spawn_agent/send_input message templates, precise output file formats with markdown templates, clear constants with override syntax, and specific search strategies. The pilot experiment section includes concrete GPU allocation and metric thresholds. | 3 / 3 |
| Workflow Clarity | The 7-phase workflow is clearly sequenced with explicit validation checkpoints: Phase 3 filters ideas before deep validation, Phase 4 includes novelty checks and a devil's-advocate review, Phase 5 has timeout/budget guards and kill conditions, and the output report includes explicit go/no-go signals from pilots. Feedback loops are present (re-rank based on empirical evidence, fix and re-validate). | 3 / 3 |
| Progressive Disclosure | The skill appropriately references external files for shared protocols (output-versioning.md, output-manifest.md, output-language.md, review-tracing.md) and companion skills (/novelty-check, /research-lit, /run-experiment). References are one level deep and clearly signaled; the main content stays focused on the idea generation workflow while pointing elsewhere for reusable protocols. | 3 / 3 |
| **Total** | | **11 / 12** |

Passed

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure: no warnings or errors.

Repository: wanshuiyin/Auto-claude-code-research-in-sleep (Reviewed)
