
experiment-queue

SSH job queue for multi-seed/multi-config ML experiments with OOM-aware retry, stale-screen cleanup, and wave-transition race prevention. Use when user says "batch experiments", "队列实验", "run grid", "multi-seed sweep", "auto-chain experiments", or when /run-experiment is insufficient for 10+ jobs that need orchestration.

83

Quality: 81%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run

Security (by Snyk): Passed
No known issues


Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that clearly defines a specific niche (SSH-based ML experiment queue orchestration), lists concrete technical capabilities, and provides explicit trigger terms in multiple languages. The description also smartly delineates its boundary with a simpler skill (/run-experiment) by specifying the 10+ job threshold, making skill selection unambiguous.

Specificity: 3 / 3
Lists multiple specific, concrete actions: SSH job queue, multi-seed/multi-config ML experiments, OOM-aware retry, stale-screen cleanup, and wave-transition race prevention. These are highly specific capabilities.

Completeness: 3 / 3
Clearly answers both 'what' (SSH job queue for ML experiments with OOM-aware retry, stale-screen cleanup, wave-transition race prevention) and 'when' (an explicit 'Use when' clause with specific trigger phrases and a clear threshold of '10+ jobs that need orchestration').

Trigger Term Quality: 3 / 3
Excellent coverage of natural trigger terms, including 'batch experiments', '队列实验' (Chinese for 'queue experiments'), 'run grid', 'multi-seed sweep', and 'auto-chain experiments'; it even references the boundary with '/run-experiment' for 10+ jobs. These are terms users would naturally say.

Distinctiveness / Conflict Risk: 3 / 3
Highly distinctive, with a very specific niche: SSH-based ML experiment orchestration with particular features like OOM-aware retry and wave-transition race prevention. The explicit boundary with '/run-experiment' for smaller jobs further reduces conflict risk.

Total: 12 / 12

Passed

Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured orchestration skill with excellent workflow clarity, clear state machines, and concrete YAML/command examples. Its main weaknesses are verbosity (rationale sections, comparison tables, backstory) and the absence of the bundled tools it critically depends on (queue_manager.py, build_manifest.py), which undermines actionability. The content would benefit from trimming non-actionable sections and either providing or clearly documenting the missing bundle files.

Suggestions

- Remove or drastically shorten the 'Why This Exists', 'Rationale / Source', and comparison-table sections; these explain motivation Claude doesn't need and consume significant tokens.
- Provide the referenced bundle files (tools/queue_manager.py, tools/build_manifest.py), or at minimum include the core scheduler loop as executable code rather than prose descriptions of what it does.
- Split detailed reference content (Grid Spec Syntax, Wave Chaining YAML, OOM Handling details) into separate referenced files to reduce the monolithic body size.
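The second suggestion asks for the core scheduler loop as executable code. Since queue_manager.py is not available, the following Python sketch is purely illustrative of what such a loop could look like; the job format, retry budget, and function names are assumptions, not the skill's actual implementation:

```python
import subprocess
import time

# Hypothetical retry budget; the skill's real bound is not shown in this review.
MAX_OOM_RETRIES = 2

def run_queue(jobs, max_parallel=2, poll_interval=0.05):
    """Run queued jobs with a parallelism cap and a bounded retry budget."""
    pending = list(jobs)      # each job: {"cmd": [...], "retries": 0}
    running = []              # list of (job, Popen) pairs
    done, failed = [], []
    while pending or running:
        # Reap finished processes and classify their outcomes.
        still_running = []
        for job, proc in running:
            code = proc.poll()
            if code is None:                      # still executing
                still_running.append((job, proc))
            elif code == 0:                       # clean exit
                done.append(job)
            elif job["retries"] < MAX_OOM_RETRIES:
                job["retries"] += 1               # bounded retry (e.g. after OOM)
                pending.append(job)
            else:                                 # retry budget exhausted
                failed.append(job)
        running = still_running
        # Launch new jobs while below the parallelism cap.
        while pending and len(running) < max_parallel:
            job = pending.pop(0)
            running.append((job, subprocess.Popen(job["cmd"])))
        time.sleep(poll_interval)
    return done, failed
```

In the skill's setting, the launch step would presumably wrap each command in an SSH/screen invocation rather than a local Popen, but the loop shape would be similar.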

Conciseness: 2 / 3
The skill is quite long (~300 lines) and includes sections like 'Why This Exists' with rationale/post-mortem context, a comparison table with /run-experiment, and a 'Rationale / Source' section that don't add actionable value. The 'Known Failure Modes' and core workflow sections are efficient, but overall there is significant content that could be trimmed; Claude doesn't need the backstory of a 2026-04-16 audit or explanations of why engineering friction exists.

Actionability: 2 / 3
The skill provides concrete YAML manifest formats, regex patterns for OOM detection, and SSH commands for launching and monitoring. However, the actual scheduler implementation (queue_manager.py) is referenced as 'bundled' but not provided, and no bundle files exist. The workflow relies heavily on tools that aren't present, so it is not truly executable. The grid spec and wave chaining examples are well structured but remain declarative specifications rather than runnable code.
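As an illustration of the regex-based OOM detection this dimension refers to, a minimal sketch might look like the following. The patterns are hypothetical examples; the skill's actual regexes are not reproduced in this review:

```python
import re

# Hypothetical OOM log signatures (illustrative only; the skill's real
# patterns are not shown in the review).
OOM_PATTERNS = [
    re.compile(r"CUDA out of memory", re.IGNORECASE),
    re.compile(r"RuntimeError: .* out of memory", re.IGNORECASE),
    re.compile(r"Killed process \d+", re.IGNORECASE),  # kernel OOM killer
]

def is_oom_failure(log_text: str) -> bool:
    """Return True if a job log matches any known OOM signature."""
    return any(p.search(log_text) for p in OOM_PATTERNS)
```

A scheduler would run this check on a failed job's log tail to decide between an OOM retry (possibly with a smaller batch size) and a hard failure.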

Workflow Clarity: 3 / 3
The 5-step workflow (Parse → Pre-flight → Launch → Monitor → Post-completion) is clearly sequenced, with explicit validation at each stage. The job state machine is well defined with transitions, the wave transition has four explicit preconditions before proceeding, OOM handling has a clear retry loop with bounded attempts, and stale-screen detection has explicit decision logic. The resume-on-restart section addresses crash recovery.
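The job state machine mentioned above is not reproduced in this review; a minimal sketch of what a bounded set of states and validated transitions could look like (state names and transitions are assumptions):

```python
# Hypothetical job states and allowed transitions; the skill's actual
# state machine may differ.
TRANSITIONS = {
    "queued":    {"running"},
    "running":   {"succeeded", "oom_retry", "failed"},
    "oom_retry": {"queued", "failed"},  # requeue until retries run out
    "succeeded": set(),                 # terminal
    "failed":    set(),                 # terminal
}

def advance(state: str, new_state: str) -> str:
    """Validate a transition against the allowed set before applying it."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state
```

Validating every transition this way is what makes crash recovery tractable: on restart, any job found in a non-terminal state can be resumed or requeued deterministically.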

Progressive Disclosure: 2 / 3
The skill references bundled tools (queue_manager.py, build_manifest.py) and related skills (/run-experiment, /monitor-experiment, /analyze-results), but no bundle files are provided. The content is largely monolithic; the YAML manifest format, grid spec syntax, wave chaining, OOM handling, and stale-screen detection could be split into separate reference files. The 'See Also' section is good, but the main body tries to cover everything inline.

Total: 9 / 12

Passed

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 9 / 11 passed

Validation for skill structure

allowed_tools_field: Warning
'allowed-tools' contains unusual tool name(s)

frontmatter_unknown_keys: Warning
Unknown frontmatter key(s) found; consider removing or moving to metadata

Total: 9 / 11

Passed

Repository: wanshuiyin/Auto-claude-code-research-in-sleep (Reviewed)

