Performs metacognitive task analysis and skill selection. Use when determining task complexity, selecting appropriate skills, or estimating work scale.
Overall score: 57

- Quality: 64% (Does it follow best practices?)
- Impact: — (no eval scenarios have been run)
- Validation: Passed (no known issues)
Optimize this skill with Tessl:

```shell
npx tessl skill review --optimize ./skills/task-analyzer/SKILL.md
```

Quality
Discovery — 52%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description has good structural completeness with explicit 'what' and 'when' clauses, but suffers from abstract, jargon-heavy language that users would not naturally use. The capabilities described are meta-level and somewhat vague—'metacognitive task analysis' and 'estimating work scale' don't convey concrete, actionable operations. The skill would benefit from more natural trigger terms and more specific descriptions of what it actually produces or does.
Suggestions
- Replace jargon like 'metacognitive task analysis' with natural language users might say, such as 'break down a complex task', 'figure out what steps are needed', or 'plan an approach'.
- Add concrete actions or outputs the skill produces, e.g., 'Creates step-by-step task breakdowns, estimates effort levels, and recommends which tools or skills to apply'.
- Include natural trigger terms users would actually use, such as 'how should I approach this', 'what's the best way to do this', 'plan', 'break this down', or 'where do I start'.
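Putting these suggestions together, a rewritten frontmatter description might look like the following. The wording is illustrative only — it recombines the suggested phrases above and is not the skill's actual text:

```yaml
# Illustrative rewrite only; not the skill's actual description.
description: >
  Breaks down complex tasks into step-by-step plans, estimates effort
  levels, and recommends which skills or tools to apply. Use when you
  need to plan an approach, figure out what steps are needed, or decide
  where to start.
```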
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain ('metacognitive task analysis and skill selection') and some actions ('determining task complexity, selecting appropriate skills, estimating work scale'), but these are fairly abstract and not concrete actions like 'extract text' or 'fill forms'. | 2 / 3 |
| Completeness | The description explicitly answers both 'what' (performs metacognitive task analysis and skill selection) and 'when' (Use when determining task complexity, selecting appropriate skills, or estimating work scale), with a clear 'Use when...' clause. | 3 / 3 |
| Trigger Term Quality | Terms like 'metacognitive task analysis' and 'skill selection' are internal/technical jargon that users would almost never naturally say. Users are unlikely to ask for 'metacognitive analysis' or 'work scale estimation'. | 1 / 3 |
| Distinctiveness / Conflict Risk | The meta-level nature of this skill (selecting other skills, analyzing tasks) is somewhat distinctive, but 'determining task complexity' and 'selecting appropriate skills' are broad enough to potentially overlap with planning, orchestration, or routing skills. | 2 / 3 |
| Total | | 8 / 12 (Passed) |
Implementation — 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured metacognitive skill with clear decision tables and a concrete output format. Its main strengths are actionability (specific YAML output schema, explicit rules for skill selection) and workflow clarity (well-sequenced 5-step process). Weaknesses include some verbosity in explaining concepts Claude would naturally understand and inline content that could benefit from being split into reference files.
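The 'specific YAML output schema' credited above is not reproduced in this report. As a rough illustration only, an analysis result of the kind described might look like the following — every field name here is a hypothetical placeholder, not the skill's real schema:

```yaml
# Hypothetical illustration; field names are NOT the skill's real schema.
task_analysis:
  fundamental_purpose: problem-solving   # e.g. "Fix this bug"
  scale: medium                          # estimated from file count
  task_type: implementation
  matched_skills:
    - task-analyzer
    - implementation-approach
```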
Suggestions
- Trim the Surface Work → Fundamental Purpose table; Claude doesn't need examples like 'Fix this bug = Problem solving' — focus on the action directive instead.
- Consider moving the Warning Patterns and Implicit Relationships tables to a reference file to reduce the main skill's token footprint.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is reasonably structured with tables for quick scanning, but includes some redundancy (e.g., the Surface Work → Fundamental Purpose table explains concepts Claude already understands, like 'Fix this bug = Problem solving'). The Implicit Relationships and Warning Patterns tables overlap significantly. Could be tightened. | 2 / 3 |
| Actionability | Provides a concrete, structured output format in YAML, specific tag-matching examples, clear decision tables for task type identification, scale estimation with file counts, and explicit rules (e.g., 'Scale >= Large → include documentation-criteria and implementation-approach'). The guidance is specific and directly executable. | 3 / 3 |
| Workflow Clarity | The 5-step process is clearly sequenced (Understand → Estimate Scale → Identify Type → Tag Match → Implicit Relationships) with explicit actions at each step. The 'Action' callout in step 1 and scale-dependent branching in step 2 provide clear decision points. For a non-destructive analytical task, validation checkpoints aren't critical, and the workflow is unambiguous. | 3 / 3 |
| Progressive Disclosure | References skills-index.yaml appropriately and notes that section selection happens after reading actual SKILL.md files. However, no bundle files were provided, so the referenced skills-index.yaml cannot be verified. The content is somewhat long (~100 lines of tables), and some sections like Warning Patterns and Implicit Relationships could potentially be split into a reference file. | 2 / 3 |
| Total | | 10 / 12 (Passed) |
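The explicit selection rule credited in the Actionability row ('Scale >= Large → include documentation-criteria and implementation-approach') amounts to a small lookup. A minimal Python sketch follows; the scale buckets, file-count thresholds, and skill names are hypothetical stand-ins, not taken from the skill itself:

```python
# Hypothetical sketch of the scale-dependent selection rule described in
# the review. Thresholds and skill names are assumptions, not the skill's.
SCALE_ORDER = ["small", "medium", "large", "very-large"]

def estimate_scale(file_count: int) -> str:
    """Map a touched-file count onto a coarse scale bucket (thresholds assumed)."""
    if file_count <= 2:
        return "small"
    if file_count <= 5:
        return "medium"
    if file_count <= 15:
        return "large"
    return "very-large"

def select_extra_skills(scale: str) -> list[str]:
    """Apply the 'Scale >= Large' rule: larger tasks pull in extra skills."""
    if SCALE_ORDER.index(scale) >= SCALE_ORDER.index("large"):
        return ["documentation-criteria", "implementation-approach"]
    return []
```

Encoding the rule as an ordered list plus an index comparison keeps 'Scale >= Large' directly executable rather than buried in prose.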
Validation — 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.