Launch an intelligent sub-agent with automatic model selection based on task complexity, specialized agent matching, Zero-shot CoT reasoning, and mandatory self-critique verification
28% (Does it follow best practices?)

Impact: Pending (no eval scenarios have been run)

Passed (no known issues)
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./plugins/sadd/skills/launch-sub-agent/SKILL.md`

Quality
Discovery: 17%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description reads more like a feature list of an internal architecture than a user-facing skill description. It relies heavily on technical jargon (Zero-shot CoT, self-critique verification) that users would never naturally use, and completely lacks a 'Use when...' clause to guide skill selection. The core purpose—what this sub-agent actually accomplishes for the user—remains unclear.
Suggestions
- Add an explicit 'Use when...' clause with natural trigger terms like 'delegate task', 'break down complex problem', 'run subtask', or 'need help with a difficult question'.
- Replace technical jargon ('Zero-shot CoT reasoning', 'self-critique verification') with concrete descriptions of what the sub-agent does for the user, e.g., 'Delegates complex tasks to a sub-agent that reasons step-by-step and verifies its own answers.'
- Clarify the specific use cases or task types this skill handles to distinguish it from other potential agent or task-management skills.
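Applied together, these suggestions might yield frontmatter along the following lines. This is a sketch only: the exact wording and the 'Use when' triggers are illustrative, not the skill's actual metadata.

```yaml
---
name: launch-sub-agent
description: >
  Delegates a task to a sub-agent that reasons step-by-step and verifies
  its own answer before returning. Use when you need to delegate a task,
  break down a complex problem, or run a subtask in isolation.
---
```

Note how the first sentence states what the skill does for the user and the second gives natural trigger phrases, addressing both the 'what' and the 'when' that the rubric checks.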
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | It names some actions like 'model selection', 'specialized agent matching', 'Zero-shot CoT reasoning', and 'self-critique verification', but these are more architectural/technical concepts than concrete user-facing actions. It doesn't clearly describe what the sub-agent actually *does* for the user. | 2 / 3 |
| Completeness | There is no 'Use when...' clause or equivalent explicit trigger guidance. The description only partially addresses 'what' (launch a sub-agent) but never addresses 'when' Claude should select this skill. Per rubric guidelines, missing 'Use when' caps completeness at 2, and the 'what' is also weak, so this scores 1. | 1 / 3 |
| Trigger Term Quality | The terms used ('Zero-shot CoT reasoning', 'self-critique verification', 'automatic model selection') are technical jargon that users would almost never naturally say. A user needing a sub-agent would more likely say things like 'run a task', 'delegate', 'complex problem', etc. | 1 / 3 |
| Distinctiveness / Conflict Risk | The concept of a 'sub-agent' is somewhat distinctive, but terms like 'task complexity' and 'specialized agent matching' are broad enough to potentially overlap with other agent-orchestration or task-delegation skills. It's not entirely generic but lacks clear niche triggers. | 2 / 3 |
| Total | | 6 / 12 Passed |
Implementation: 39%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill has a well-structured multi-phase workflow with clear decision logic for model selection and agent matching, which is its primary strength. However, it is severely over-verbose, explaining concepts Claude already understands (CoT reasoning, context isolation, the orchestrator pattern) and embedding massive prompt templates inline that bloat the file. The lack of any bundle files or progressive disclosure means everything is crammed into one long document, and the actual tool dispatch syntax is left as pseudocode rather than being fully executable.
Suggestions
- Reduce content by 60%+: Remove explanations of concepts Claude knows (what CoT is, what context isolation means, what the orchestrator pattern is) and trim the generic reasoning/critique templates to their essential unique elements.
- Extract the CoT prefix template, critique suffix template, and specialized agent list into separate bundle files (e.g., COT_PREFIX.md, CRITIQUE_SUFFIX.md, AGENTS.md) and reference them from the main SKILL.md.
- Show the exact Task tool invocation syntax with a complete, copy-paste-ready example instead of pseudocode like 'Use Task tool with Opus model, sdd:developer prompt'.
- Consolidate the 4 examples into 1-2 that demonstrate the key decision points (e.g., one complex Opus task and one simple Haiku task) to reduce redundancy.
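For the dispatch suggestion, a complete example in SKILL.md might look like the sketch below. The parameter names (`subagent_type`, `description`, `prompt`) are an assumption about the agent platform's Task tool and should be verified against its documentation before being embedded in the skill.

```
Task(
  subagent_type: "sdd:developer",
  description: "Short summary of the delegated task",
  prompt: "<CoT prefix>\n\n<full task details>\n\n<critique suffix>"
)
```

A concrete, verifiable invocation like this lets an agent execute the dispatch step directly instead of translating prose like 'Use Task tool with Opus model, sdd:developer prompt' into a call on its own.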
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is extremely verbose at ~250+ lines. It explains concepts Claude already knows (what Chain-of-Thought reasoning is, what context isolation means, what the Supervisor/Orchestrator pattern is). The Zero-shot CoT prefix template alone is ~20 lines of generic reasoning instructions that Claude inherently follows. The self-critique suffix is ~50 lines of boilerplate verification scaffolding. Much of this could be condensed to a fraction of its size. | 1 / 3 |
| Actionability | The skill provides structured guidance with decision trees and model selection tables, which is useful. However, the actual dispatch step is pseudocode ('Use Task tool with...') rather than showing the exact tool invocation syntax. The specialized agent integration says to 'read the agent definition' without showing how. The CoT prefix and critique suffix are templates but are generic rather than task-specific executable patterns. | 2 / 3 |
| Workflow Clarity | The 5-phase workflow is clearly sequenced (Task Analysis → Model Selection → Agent Matching → Prompt Construction → Dispatch) with explicit decision trees and validation through the mandatory self-critique loop. The feedback loop in the critique suffix (STOP → FIX → RE-VERIFY → DOCUMENT) is well-defined. The examples reinforce the workflow with concrete walkthroughs of each phase. | 3 / 3 |
| Progressive Disclosure | The entire skill is a monolithic wall of text with no bundle files or external references. The lengthy CoT prefix template, critique suffix template, and detailed examples could all be split into separate referenced files. The inline prompt templates alone account for ~100 lines that would be better as referenced files, keeping the SKILL.md as a concise overview. | 1 / 3 |
| Total | | 7 / 12 Passed |
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
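The single warning maps to a mechanical fix: any frontmatter key the spec does not define can be deleted or nested under `metadata`. A sketch, with a hypothetical offending key:

```yaml
---
name: launch-sub-agent
description: Launch an intelligent sub-agent...
metadata:
  model-selection: automatic   # hypothetical key, formerly top-level
---
```

Resolving this warning would bring validation to a full pass, which is a prerequisite for the discovery and implementation scores above.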