Full research pipeline: Workflow 1 (idea discovery) → implementation → Workflow 2 (auto review loop). Goes from a broad research direction all the way to a submission-ready paper. Use when user says "全流程", "full pipeline", "从找idea到投稿", "end-to-end research", or wants the complete autonomous research lifecycle.
Overall score: 79

Quality: 76%
Does it follow best practices?

Impact: Pending (no eval scenarios have been run).

Critical: do not install without reviewing.

Optimize this skill with Tessl:

npx tessl skill review --optimize ./skills/research-pipeline/SKILL.md

Quality
Discovery — 89%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong description that clearly communicates the skill's purpose as a full end-to-end research pipeline and provides explicit trigger guidance with both English and Chinese terms. Its main weakness is that the specific capabilities within the pipeline could be more granularly described (e.g., literature review, hypothesis generation, experiment design, paper writing). Overall it performs well for skill selection purposes.
Suggestions
Add more granular action descriptions within the pipeline stages, e.g., 'literature search, hypothesis generation, experiment implementation, paper drafting, iterative review' to improve specificity.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | It names the domain (research pipeline) and references specific workflow stages (idea discovery, implementation, auto review loop), but the actions are described at a high level rather than listing multiple concrete granular actions like 'extract data', 'generate figures', etc. | 2 / 3 |
| Completeness | Clearly answers both 'what' (full research pipeline from idea discovery through implementation to submission-ready paper) and 'when' (explicit 'Use when' clause with specific trigger phrases). Both dimensions are well-covered. | 3 / 3 |
| Trigger Term Quality | Includes strong natural trigger terms in both English and Chinese: '全流程', 'full pipeline', '从找idea到投稿', 'end-to-end research', and 'complete autonomous research lifecycle'. These cover multiple natural phrasings a user would actually say. | 3 / 3 |
| Distinctiveness / Conflict Risk | The description carves out a very specific niche: the full end-to-end research pipeline combining two named workflows. Trigger terms like '全流程', 'end-to-end research', and '从找idea到投稿' are highly distinctive and unlikely to conflict with skills covering only partial research tasks. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
Implementation — 62%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured orchestration skill with excellent workflow clarity — clear stages, gates, termination conditions, and error handling. Its main weaknesses are moderate verbosity (repeated explanations of AUTO_PROCEED behavior, overly detailed Gate 1 interaction options) and the abstract nature of Stage 2 implementation guidance which lacks concrete executable examples. The progressive disclosure is decent but the inline detail could be better managed.
Suggestions
Consolidate AUTO_PROCEED explanations — define behavior once in Constants and reference it elsewhere instead of re-explaining in Gate 1 and Key Rules.
Add a concrete code example or script template for Stage 2 (Implementation) to make it more actionable — e.g., a skeleton experiment script with argparse, seed control, and result saving.
Consider moving the detailed Gate 1 user interaction options (approve/pick/request changes/reject/stop) to a separate reference file to keep the main pipeline lean.
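As a concrete illustration of the second suggestion, a minimal Stage 2 skeleton might look like the following. This is a hedged sketch, not part of the skill itself: the flag names, output layout, and the placeholder metric are all assumptions standing in for the scaled-up pilot code.

```python
import argparse
import json
import random
from pathlib import Path


def main(argv=None):
    # Hypothetical CLI surface; flag names are illustrative only.
    parser = argparse.ArgumentParser(description="Stage 2 experiment runner (sketch)")
    parser.add_argument("--seed", type=int, default=0,
                        help="random seed for reproducibility")
    parser.add_argument("--out-dir", default="results",
                        help="directory for per-run result files")
    args = parser.parse_args(argv)

    # Seed control so reruns are comparable across review rounds.
    random.seed(args.seed)

    # Placeholder experiment body: replace with the scaled-up pilot code.
    metric = random.random()

    # Result saving: one JSON file per run, keyed by seed.
    out_dir = Path(args.out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    out_path = out_dir / f"run_seed{args.seed}.json"
    out_path.write_text(json.dumps({"seed": args.seed, "metric": metric}, indent=2))
    print(f"saved {out_path}")
    return out_path


if __name__ == "__main__":
    main()
```

Even a template this small gives the agent something executable to extend, and the seed/out-dir pair makes review rounds reproducible and comparable.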
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably well-structured but includes some unnecessary verbosity: e.g., the detailed explanation of what AUTO_PROCEED does is repeated multiple times (in Constants, Gate 1 description, and Key Rules). The 'sweet spot' tip and some of the Gate 1 user interaction options could be tightened. However, most content is substantive and pipeline-specific. | 2 / 3 |
| Actionability | The skill provides concrete invocation commands (/idea-discovery, /run-experiment, /auto-review-loop, /monitor-experiment) and a clear final report template. However, Stage 2 (Implementation) is largely abstract guidance ('extend pilot code to full scale', 'follow existing codebase conventions') without executable examples. The self-review checklist is helpful but still high-level. | 2 / 3 |
| Workflow Clarity | The pipeline is clearly sequenced across 5 stages with explicit gates (Gate 1 with AUTO_PROCEED logic), validation checkpoints (code self-review in Stage 2, review loop with score threshold in Stage 4), error recovery (fail-gracefully rule, round cap at 4), and clear termination conditions. The feedback loop in Stage 4 is well-defined with explicit stop criteria. | 3 / 3 |
| Progressive Disclosure | The skill references sub-skills (/idea-discovery, /auto-review-loop, /run-experiment) and a template file, which is good progressive disclosure. However, the Gate 1 section is quite long, with inline detail about all possible user responses that could be summarized or linked out. The Constants section is well-structured, but the body could better separate overview from detailed behavior. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
Validation — 81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure — 9 / 11 Passed
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 (Passed) |
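To illustrate how the two warnings above might be resolved, here is a hypothetical frontmatter sketch. Only 'allowed-tools' and 'metadata' come from the warnings themselves; every key value shown (the tool names, the maintainer field) is an assumption, not taken from the actual SKILL.md or the Tessl spec.

```yaml
---
name: research-pipeline
description: Full research pipeline from idea discovery to submission-ready paper.
# Keep only tool names the validator recognizes; drop or rename unusual ones.
allowed-tools:
  - Read
  - Write
  - Bash
# Unknown top-level keys trigger frontmatter_unknown_keys; nest them under
# metadata instead of leaving them at the root.
metadata:
  maintainer: your-team        # hypothetical custom field
---
```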