Creative research ideation and exploration. Use for open-ended brainstorming sessions, exploring interdisciplinary connections, challenging assumptions, or identifying research gaps. Best for early-stage research planning when you do not have specific observations yet. For formulating testable hypotheses from data use hypothesis-generation.
- Overall score: 73
- Does it follow best practices? 62%
- Impact: 89% (1.02x average score across 3 eval scenarios)
- Evals: Passed, no known issues
Optimize this skill with Tessl:

```
npx tessl skill review --optimize ./scientific-skills/scientific-brainstorming/SKILL.md
```

Quality
Discovery
89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-crafted description that clearly defines its scope, provides explicit 'use when' guidance, and distinguishes itself from a related skill (hypothesis-generation). The main weakness is that the listed capabilities, while covering the domain well, are somewhat abstract rather than describing concrete discrete actions. The trigger terms are natural and varied, making it easy for Claude to match user requests to this skill.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (research ideation) and some actions (brainstorming, exploring connections, challenging assumptions, identifying gaps), but the actions are somewhat abstract rather than concrete operations like 'extract text' or 'fill forms'. | 2 / 3 |
| Completeness | Clearly answers both what (creative research ideation, brainstorming, exploring connections, challenging assumptions, identifying gaps) and when (open-ended brainstorming sessions, early-stage research planning without specific observations). Also includes a helpful disambiguation clause distinguishing it from hypothesis-generation. | 3 / 3 |
| Trigger Term Quality | Includes strong natural trigger terms users would say: 'brainstorming', 'interdisciplinary connections', 'research gaps', 'early-stage research planning', 'challenging assumptions', and 'open-ended'. These cover a good range of how users would naturally describe this need. | 3 / 3 |
| Distinctiveness / Conflict Risk | Carves out a distinct niche by specifying 'early-stage research planning when you do not have specific observations yet' and explicitly differentiating itself from the hypothesis-generation skill. This boundary-setting significantly reduces conflict risk. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
Implementation
35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is well-structured with a clear five-phase workflow, but it suffers significantly from verbosity: it explains brainstorming concepts and conversational techniques that Claude already understands deeply. The content reads more like a training manual for a human facilitator than a concise skill file for an AI. Actionability is moderate: the skill provides useful prompts and techniques but lacks concrete output formats and measurable success criteria.
Suggestions
- Reduce content by 60-70%: remove explanations of concepts Claude already knows (e.g., what brainstorming is, how to show curiosity, how to be encouraging) and keep only the workflow phases and techniques unique to this skill.
- Add a concrete output format or template for the synthesis phase (e.g., a structured summary with sections for top ideas, connections discovered, and next steps).
- Move the detailed brainstorming techniques (the Phase 2 techniques list and the Adaptive Techniques section) into the referenced brainstorming_methods.md file, keeping only brief pointers in the main skill.
- Add validation criteria for phase transitions (e.g., 'Move to Phase 3 when at least 8-10 distinct ideas have been generated') to provide concrete checkpoints.
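To illustrate the synthesis-template suggestion, a structured output format might look like the sketch below. The section names and placeholders are hypothetical, not taken from the reviewed skill:

```markdown
## Brainstorming Synthesis

### Top Ideas
1. <idea>: <one-line rationale and why it is promising>
2. <idea>: <one-line rationale>

### Connections Discovered
- <field or concept A> x <field or concept B>: <insight that emerged>

### Assumptions Challenged
- <assumption>: <what reframing it revealed>

### Next Steps
- [ ] <concrete action, e.g., a literature search or a pilot analysis>
```

A fixed template like this gives the agent a verifiable deliverable for Phase 5 instead of an open-ended conversational wrap-up.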
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is extremely verbose at 200+ lines, explaining conversational techniques and brainstorming principles that Claude already knows well. Phrases like 'Be comfortable with silence—give space for thinking' and extensive lists of example questions add little value. Much of this reads as a tutorial on brainstorming rather than actionable instructions. | 1 / 3 |
| Actionability | The structured phases and example questions/prompts give some concrete guidance. However, there are no executable code examples and no specific output formats; the guidance stays at the level of conversational suggestions rather than precise, copy-paste-ready instructions. It describes what to do conceptually but lacks specific deliverables. | 2 / 3 |
| Workflow Clarity | The five-phase workflow is clearly sequenced and logically ordered, with transition guidance between phases. However, there are no validation checkpoints or feedback loops: no way to verify that a phase succeeded before moving on, and no criteria for when to loop back or skip phases. | 2 / 3 |
| Progressive Disclosure | The skill references 'references/brainstorming_methods.md' appropriately, but no bundle files are provided, so the reference is unverifiable. The main content is monolithic, with extensive inline detail that could be split into reference files (e.g., the Adaptive Techniques section and the detailed brainstorming techniques in Phase 2). | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

10 / 11 checks passed.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| metadata_version | `metadata.version` is missing | Warning |
| Total | 10 / 11 Passed | |
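One way to clear the `metadata_version` warning would be to declare a version in the skill's frontmatter. The key layout below is an assumption inferred from the check name (`metadata.version`), not a confirmed spec; the skill name is taken from the path in the optimize command above:

```yaml
---
name: scientific-brainstorming
description: Creative research ideation and exploration...
metadata:
  version: 1.0.0  # assumed key; the 'metadata_version' check appears to look for metadata.version
---
```

Consult the skill spec the validator checks against for the authoritative field layout before adopting this shape.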
cbcae7b
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.