Decomposes complex user requests into executable subtasks, identifies required capabilities, searches for existing skills at skills.sh, and creates new skills when no solution exists. This skill should be used when the user submits a complex multi-step request, wants to automate workflows, or needs help breaking down large tasks into manageable pieces.
73

47% — Does it follow best practices?
Impact: 90%
2.43x — average score across 6 eval scenarios
Advisory: Suggest reviewing before use
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./skills/10e9928a/task-decomposer/SKILL.md`

Quality
Discovery
67% — Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description adequately covers both what the skill does and when to use it, earning strong marks for completeness. However, the capabilities described are quite abstract and meta-level, making it hard to distinguish from general task planning or orchestration. The trigger terms are reasonable but could be more specific and natural to reduce conflict risk with other skills.
Suggestions
- Add more natural trigger terms users would actually say, such as 'plan this out', 'step by step', 'orchestrate', 'pipeline', or 'I need to do multiple things'
- Increase distinctiveness by emphasizing the unique aspect of searching skills.sh and skill creation/management, which differentiates this from generic task decomposition
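A hedged sketch of how a revised frontmatter description could fold in those trigger terms and the skills.sh emphasis (the wording is illustrative, not prescriptive):

```yaml
# Illustrative sketch only — exact wording is the maintainer's call.
name: task-decomposer
description: >
  Breaks complex multi-step requests into an executable plan, searches
  skills.sh for existing skills covering each step, installs matches, and
  creates new skills for any gaps. Use when the user wants to plan a task
  out step by step, orchestrate a workflow or pipeline, or says "I need
  to do multiple things".
```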
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names several actions (decomposes requests, identifies capabilities, searches for skills, creates new skills), but they are somewhat abstract and meta-level rather than concrete, domain-specific actions. 'Decomposes complex user requests into executable subtasks' is more of a process description than a concrete capability. | 2 / 3 |
| Completeness | Clearly answers both 'what' (decomposes requests, identifies capabilities, searches for skills, creates new skills) and 'when', with an explicit trigger clause ('should be used when the user submits a complex multi-step request, wants to automate workflows, or needs help breaking down large tasks'). | 3 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'complex multi-step request', 'automate workflows', and 'breaking down large tasks', but these are fairly generic phrases. Missing more natural user language like 'plan', 'step by step', 'orchestrate', 'pipeline', or specific workflow terms users might actually say. | 2 / 3 |
| Distinctiveness / Conflict Risk | The description is meta-level (a skill about managing other skills), which gives it some distinctiveness, but terms like 'complex requests' and 'automate workflows' are quite broad and could overlap with many task-specific skills. The reference to 'skills.sh' adds some specificity, but the overall scope is very wide. | 2 / 3 |
| Total | | 9 / 12 — Passed |
Implementation
27% — Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is significantly over-engineered and verbose for what it does. It explains many concepts Claude already understands (task decomposition principles, capability taxonomies, generic skill templates) and includes extensive placeholder-heavy templates that add bulk without proportional value. The core actionable content — search skills.sh, decompose tasks, create skills when gaps exist — could be conveyed in roughly one-quarter of the current length.
Suggestions
- Reduce content by 60–70%: remove the Universal Capability Types table, the Task Decomposition Principles section, and the elaborate ASCII output format template — Claude can generate appropriate formats without being told.
- Move the skill creation template, capability taxonomy, and detailed examples into separate reference files (e.g., CAPABILITY_TYPES.md, SKILL_TEMPLATE.md, EXAMPLES.md) and link to them from the main file.
- Add explicit validation/feedback loops: after skill search, what if no results? After skill creation, how to verify it works? After installation, how to confirm it's available? These should be concrete steps, not just a 'verify before proceeding' bullet.
- Replace placeholder YAML blocks with a single concrete end-to-end example that shows actual commands and real outputs rather than {placeholder} syntax.
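The missing checkpoints could be sketched as a small shell flow. The `npx skills find`, `npx skills add`, and `npx skills init` commands come from the skill itself; here they are stubbed as functions so the control flow runs on its own, and the skill name is a hypothetical placeholder:

```shell
# Stubs standing in for the real CLI calls (assumptions, not the real API):
skills_find() { echo "pdf-extractor"; }      # real call: npx skills find "<query>"
skills_add()  { touch "installed-$1"; }      # real call: npx skills add <name>

match=$(skills_find "extract tables from pdf")
if [ -z "$match" ]; then
  # Checkpoint 1: search returned nothing — fall back to creating a skill.
  echo "no existing skill — scaffold one (npx skills init) and re-verify"
else
  skills_add "$match"
  # Checkpoint 2: confirm installation actually succeeded before proceeding.
  if [ -e "installed-$match" ]; then
    echo "installed: $match"
  else
    echo "install failed: $match" >&2
    exit 1
  fi
fi
```

The point is that each phase ends with an observable check (non-empty search result, installed artifact present) rather than an unverified "verify before proceeding" bullet.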
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~350+ lines. The Universal Capability Types table, the task decomposition principles (atomicity, independence, etc.), and the elaborate output format templates are things Claude already knows. The skill template section explains generic SKILL.md structure, which is meta-knowledge Claude doesn't need repeated. Much of this could be condensed to under 100 lines. | 1 / 3 |
| Actionability | The skill provides concrete CLI commands (`npx skills find`, `npx skills add`, `npx skills init`) which are actionable, and the YAML decomposition examples are helpful. However, much of the content is template/placeholder-heavy rather than truly executable — the YAML blocks use placeholder syntax like {skill-name}, and the workflow is more of a conceptual framework than copy-paste-ready instructions. | 2 / 3 |
| Workflow Clarity | The six phases are clearly sequenced and the overall flow is logical. However, validation checkpoints are weak — the verification section in the execution plan template is just placeholders, and there's no explicit feedback loop for when skill searches fail or skill creation encounters issues. The 'verify before proceeding' best practice is mentioned but not integrated into the workflow steps. | 2 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with everything inline. The capability taxonomy table, skill template, output format template, and multiple full examples could all be split into separate reference files. There are no references to external files for detailed content — everything is crammed into one document. | 1 / 3 |
| Total | | 6 / 12 — Passed |
Validation
100% — Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.