Orchestrate parallel scientist agents for comprehensive analysis with AUTO mode
Impact: Pending. No eval scenarios have been run.
Risk: Risky. Do not use without reviewing.
Optimize this skill with Tessl: `npx tessl skill review --optimize ./skills/sciomc/SKILL.md`

Quality
Discovery: 7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is too abstract and jargon-heavy to be useful for skill selection. It fails to explain what specific tasks the skill performs, what 'scientist agents' actually do, or when Claude should select this skill. The technical terminology would not match natural user requests.
Suggestions

- Replace abstract terms with concrete actions: specify what types of analysis the scientist agents perform (e.g., 'statistical analysis', 'data exploration', 'hypothesis testing').
- Add a 'Use when...' clause with natural trigger terms users would actually say (e.g., 'Use when analyzing datasets, running experiments, or when the user needs multi-perspective data analysis').
- Explain what 'AUTO mode' means in practical terms: what does it enable or automate for the user?
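A rough way to see why concrete trigger terms matter is to score candidate descriptions against a sample user request by keyword overlap. This is only a sketch of the idea, not Tessl's actual discovery scorer, and the revised description below is just one possible rewrite:

```python
import re

def trigger_overlap(description: str, query: str) -> int:
    """Count distinct query words that also appear in the description."""
    desc_words = set(re.findall(r"[a-z]+", description.lower()))
    query_words = set(re.findall(r"[a-z]+", query.lower()))
    return len(desc_words & query_words)

original = ("Orchestrate parallel scientist agents for comprehensive "
            "analysis with AUTO mode")
revised = ("Run multiple analysis agents in parallel for statistical "
           "analysis, data exploration, and hypothesis testing. "
           "Use when analyzing datasets or running experiments.")

query = "use statistical analysis to explore this dataset"
print(trigger_overlap(original, query))  # 1 ('analysis')
print(trigger_overlap(revised, query))   # 3 ('use', 'statistical', 'analysis')
```

The jargon-heavy original shares almost no vocabulary with a natural request, which is exactly the failure mode the review describes.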
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague language like 'orchestrate' and 'comprehensive analysis' without specifying concrete actions. It doesn't explain what the scientist agents do or what kind of analysis is performed. | 1 / 3 |
| Completeness | Missing both a clear 'what' (what specific analysis capabilities) and 'when' (no explicit trigger guidance or use cases). The description is too abstract to answer either question. | 1 / 3 |
| Trigger Term Quality | Contains technical jargon ('orchestrate parallel scientist agents', 'AUTO mode') that users would not naturally say. No common user-facing keywords are present. | 1 / 3 |
| Distinctiveness / Conflict Risk | The mention of 'parallel scientist agents' and 'AUTO mode' provides some distinctiveness, but 'comprehensive analysis' is generic enough to potentially conflict with other analysis-related skills. | 2 / 3 |
| Total | | 5 / 12 Passed |
Implementation: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured, highly actionable skill for orchestrating parallel research agents. The workflow is clearly sequenced with proper validation checkpoints and the code examples are executable. However, the document is quite long and could benefit from splitting detailed reference material (schemas, regex patterns, templates) into separate files, and some sections contain redundant examples that could be consolidated.
Suggestions

- Extract the JSON schemas (state-file format), regex patterns (tag extraction), and report template into separate reference files (e.g., SCHEMAS.md, PATTERNS.md, TEMPLATES.md) and link to them from the main skill.
- Consolidate the parallel-execution pattern examples: the 'Independent Dataset Analysis' and 'Hypothesis Battery' sections demonstrate the same concept and could be merged into one example with a note about use cases.
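The state-file suggestion above is easier to picture with a concrete shape. The field names below are assumptions for illustration only, not the skill's actual schema:

```python
import json

# Hypothetical state file of the kind the skill's JSON schemas describe:
# track each parallel agent's stage and status between iterations.
# Every field name here is an assumption, not the skill's real format.
state = {
    "stage": "verification",  # decomposition | execution | verification | synthesis
    "iteration": 2,
    "agents": [
        {"id": "agent-1", "task": "statistical analysis", "status": "done"},
        {"id": "agent-2", "task": "hypothesis testing", "status": "running"},
    ],
    "conflicts": [],
}

# Round-trip through JSON, as an orchestrator would when persisting state.
serialized = json.dumps(state, indent=2)
restored = json.loads(serialized)
assert restored == state
```

Keeping a schema like this in a separate SCHEMAS.md, as suggested, lets the main SKILL.md reference it without carrying the full structure inline.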
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is comprehensive but includes some redundancy (e.g., multiple similar code examples for parallel execution patterns) and could be tightened. The routing tables and tag extraction sections are useful but verbose. | 2 / 3 |
| Actionability | Provides fully executable Task() invocations, concrete regex patterns for tag extraction, complete JSON schemas for state files, and specific command examples. Copy-paste ready throughout. | 3 / 3 |
| Workflow Clarity | Clear 4-stage workflow (Decomposition → Execution → Verification → Synthesis) with explicit validation checkpoints. The verification loop pattern includes error recovery (CONFLICTS handling), and the AUTO mode has clear iteration limits and promise tags for completion. | 3 / 3 |
| Progressive Disclosure | Content is well-organized with clear sections, but everything is in one monolithic file. The detailed regex patterns, full JSON schemas, and report templates could be split into reference files. No external file references are provided. | 2 / 3 |
| Total | | 10 / 12 Passed |
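The tag-extraction patterns and promise tags mentioned above can be sketched as follows. The `<promise>` tag name and regex are assumptions based on the review's wording, not the skill's exact format:

```python
import re

# Hypothetical sketch of the kind of tag-extraction pattern the skill
# ships: pull completion tags out of an agent's free-text output.
PROMISE_RE = re.compile(r"<promise>(.*?)</promise>", re.DOTALL)

def extract_promises(agent_output: str) -> list[str]:
    """Return the contents of every promise tag in the output."""
    return [m.strip() for m in PROMISE_RE.findall(agent_output)]

out = "Analysis complete. <promise>all hypotheses tested</promise>"
print(extract_promises(out))  # ['all hypotheses tested']
```

A pattern like this gives the AUTO-mode loop a machine-checkable completion signal, which is what makes the iteration limits enforceable.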
Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 9 / 11 passed
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (512 lines); consider splitting into references/ and linking | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
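The skill_md_line_count warning can be reproduced with a minimal check like the one below. The 500-line threshold is an assumption; the review only reports that 512 lines triggered the warning:

```python
import os
import tempfile
from pathlib import Path

MAX_LINES = 500  # assumed threshold; the review reports 512 lines as 'long'

def check_line_count(path: Path, max_lines: int = MAX_LINES) -> str:
    """Mirror the skill_md_line_count validation check."""
    n = len(path.read_text().splitlines())
    if n > max_lines:
        return (f"Warning: {path.name} is long ({n} lines); "
                f"consider splitting into references/")
    return "OK"

# Demonstrate against a temporary 512-line stand-in for SKILL.md.
fd, p = tempfile.mkstemp(suffix=".md")
os.close(fd)
tmp = Path(p)
tmp.write_text("line\n" * 512)
result = check_line_count(tmp)
print(result)  # Warning: ... is long (512 lines); consider splitting into references/
os.remove(tmp)
```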