Performs metacognitive task analysis and skill selection. Use when determining task complexity, selecting appropriate skills, or estimating work scale.
Optimize this skill with Tessl: `npx tessl skill review --optimize ./skills/task-analyzer/SKILL.md`

Quality
Discovery: 52%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description has good structural completeness with an explicit 'Use when...' clause, but suffers from abstract, jargon-heavy language that users would not naturally use. The capabilities described are meta-level and somewhat vague—'metacognitive task analysis' and 'estimating work scale' don't convey concrete, actionable operations. The skill would benefit from more natural trigger terms and more specific descriptions of what it actually does.
Suggestions
Replace jargon like 'metacognitive task analysis' with natural language users might say, such as 'break down a complex task', 'figure out what steps are needed', or 'plan an approach'.
Add concrete examples of what the skill produces or decides, e.g., 'Breaks complex requests into subtasks, determines which tools or skills to apply, and estimates effort. Use when planning how to approach a multi-step task or when deciding between different methods.'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain ('metacognitive task analysis and skill selection') and some actions ('determining task complexity, selecting appropriate skills, estimating work scale'), but these are fairly abstract, not concrete actions like 'extract text' or 'fill forms'. | 2 / 3 |
| Completeness | The description explicitly answers both 'what' (performs metacognitive task analysis and skill selection) and 'when' (determining task complexity, selecting appropriate skills, or estimating work scale) with a clear 'Use when...' clause. | 3 / 3 |
| Trigger Term Quality | Terms like 'metacognitive task analysis' and 'skill selection' are internal, technical jargon that users would almost never say naturally. Users are unlikely to ask for 'metacognitive analysis' or 'work scale estimation'. | 1 / 3 |
| Distinctiveness / Conflict Risk | The meta-level nature of this skill (selecting other skills, analyzing tasks) is somewhat distinctive, but 'determining task complexity' and 'selecting appropriate skills' are broad enough to overlap with planning, orchestration, or routing skills. | 2 / 3 |
| Total | | 8 / 12 (Passed) |
Implementation: 70%

Reviews the quality of the instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured metacognitive skill that provides a clear analytical framework for task analysis and skill selection. Its main strength is workflow clarity and progressive disclosure, with clean tables and a defined output schema. Its weaknesses are moderate verbosity in explaining concepts Claude already understands (like mapping surface tasks to fundamental purposes) and the lack of truly executable/concrete implementation details for the matching process.
Suggestions
Trim the 'Understand Task Essence' table and key questions—Claude already knows how to identify fundamental purposes behind tasks; focus on project-specific mappings instead.
Add a concrete example showing the full end-to-end analysis (input task description → complete output YAML) to make the skill more actionable and copy-paste ready.
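A hedged sketch of what such an end-to-end example might look like. The input task, field names, skill names, and values below are illustrative assumptions for this suggestion, not the skill's actual output schema:

```yaml
# Input: "Fix the flaky login test and add coverage for the new OAuth flow"
task:
  essence: repair-and-extend-tests
  complexity: moderate
  scale: small            # estimated 1-2 files touched
matched_skills:
  - name: test-debugger
    reason: tags overlap on [test, flaky, debug]
  - name: coverage-writer
    reason: tags overlap on [test, coverage]
estimated_effort: 2-4 hours
```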
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is reasonably structured, with tables used for efficiency, but it includes some explanatory content Claude would already know (e.g., explaining what 'Fix this bug' fundamentally means, or that larger scale makes process skills more important). Some tables add value; others state the obvious. | 2 / 3 |
| Actionability | Provides a structured process with YAML output-format examples and tag-matching examples, which is helpful. However, the guidance is more a conceptual framework than executable steps: there are no concrete commands or code to run, and skill matching relies on an external YAML file without showing how to parse or use it programmatically. | 2 / 3 |
| Workflow Clarity | The multi-step process is clearly numbered (1-5), with a logical sequence from understanding the task essence through to skill matching. Each step has clear inputs and outputs, and the output format provides a concrete schema. The workflow is well sequenced for a cognitive/analytical task with no destructive operations. | 3 / 3 |
| Progressive Disclosure | Cleanly references the external skills-index.yaml for detailed metadata, keeps the SKILL.md as an overview of the analysis process, and explicitly notes that section selection happens after reading the actual SKILL.md files. A one-level-deep reference that is clearly signaled. | 3 / 3 |
| Total | | 10 / 12 (Passed) |
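To illustrate the kind of programmatic guidance the Actionability row says is missing, here is a minimal sketch of tag-overlap matching against a skills index. The index contents and schema below are invented for illustration; the real skills-index.yaml may differ.

```python
# Hypothetical skills index, standing in for a parsed skills-index.yaml.
# The skill names and tags are made up for this sketch.
SKILLS_INDEX = {
    "pdf-extractor": {"tags": ["pdf", "extract", "text"]},
    "form-filler": {"tags": ["form", "fill", "pdf"]},
    "task-analyzer": {"tags": ["plan", "complexity", "skills"]},
}


def match_skills(task_tags, index, top_n=3):
    """Rank skills by how many of the task's tags they share."""
    scored = []
    for name, meta in index.items():
        overlap = set(task_tags) & set(meta["tags"])
        if overlap:
            scored.append((len(overlap), name, sorted(overlap)))
    scored.sort(reverse=True)  # biggest overlap first
    return [(name, tags) for _, name, tags in scored[:top_n]]


print(match_skills(["pdf", "text", "extract"], SKILLS_INDEX))
# → [('pdf-extractor', ['extract', 'pdf', 'text']), ('form-filler', ['pdf'])]
```

A real implementation would load the index with a YAML parser instead of hard-coding it, but the ranking step would look much the same.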
Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 11 / 11 checks passed. No warnings or errors.