Skill description: "Use when facing 2+ independent tasks that can be worked on without shared state or sequential dependencies"
Does it follow best practices?
Impact: Pending (no eval scenarios have been run).
Status: Passed, no known issues.
Optimize this skill with Tessl:

```
npx tessl skill review --optimize ./skills/dispatching-parallel-agents/SKILL.md
```

Quality
Discovery
7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description only provides a vague trigger condition without ever stating what the skill does. It lacks concrete actions, natural user-facing keywords, and a clear 'what' component. A user or Claude selecting from many skills would struggle to understand the skill's purpose or when to prefer it over alternatives.
Suggestions
- Add a clear 'what' statement describing the concrete action, e.g., 'Executes multiple tasks in parallel using concurrent subagents' or 'Runs independent subtasks simultaneously to speed up workflows'.
- Include natural trigger terms users would actually say, such as 'parallel', 'at the same time', 'concurrently', 'multiple tasks', 'batch', 'speed up'.
- Provide examples of task types this applies to, e.g., 'Use when the user asks to run linting and tests simultaneously, or process multiple files independently.'
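Applying these suggestions, a revised description might look like the following sketch. The frontmatter layout and `description` field follow common SKILL.md conventions; the exact wording is illustrative, not a recommendation from the review itself:

```yaml
# Hypothetical revision of the skill's frontmatter
description: >
  Runs independent subtasks in parallel by dispatching concurrent subagents,
  speeding up workflows such as running linting and tests at the same time or
  processing multiple files independently. Use when facing 2+ tasks with no
  shared state or sequential dependencies.
```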
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description does not list any concrete actions or capabilities. It describes a condition for use but never says what the skill actually does — no verbs like 'runs', 'executes', 'parallelizes', etc. | 1 / 3 |
| Completeness | The 'when' is partially addressed ('Use when facing 2+ independent tasks...'), but the 'what' — what the skill actually does — is entirely missing. Without knowing what it does, the description is fundamentally incomplete. | 1 / 3 |
| Trigger Term Quality | There are no natural keywords a user would say. Terms like 'independent tasks', 'shared state', and 'sequential dependencies' are abstract/technical jargon, not phrases users naturally use in requests. | 1 / 3 |
| Distinctiveness / Conflict Risk | The concept of parallel/independent task execution is somewhat distinctive, but without naming the mechanism (e.g., parallel tool calls, subagents, concurrent execution), it could overlap with any multi-step or task-management skill. | 2 / 3 |
| Total | | 5 / 12 (Passed) |
Implementation
35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill covers a useful pattern with a good concrete example (the agent prompt template), but suffers from significant redundancy — the 'When NOT to Use' section duplicates 'Don't use when', 'Real-World Impact' duplicates the earlier example, and 'Key Benefits' restates the obvious. The actionability is moderate: the dispatch pattern is shown but with pseudocode rather than actual tool calls. The content would benefit greatly from being cut in half.
Suggestions
- Remove duplicate sections: merge 'When NOT to Use' into the earlier 'Don't use when' list, remove 'Real-World Impact' (already covered by 'Real Example'), and remove 'Key Benefits' (self-evident from the pattern description).
- Replace the pseudocode `Task()` calls with actual tool invocation syntax or the specific API/tool Claude should use for dispatching parallel agents.
- Add an explicit feedback loop in the verification section: what to do when agents' changes conflict or when the integrated result fails tests (e.g., dispatch a new agent to resolve the conflict).
- Remove the dot graph — the decision logic is simple enough that the bullet lists already convey it clearly, and the graph syntax wastes tokens.
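The dispatch-and-verify pattern these suggestions describe could be sketched as below. Note that `dispatchAgent` and `AgentResult` are hypothetical stand-ins for whatever subagent tool the skill ultimately names, not a real API; only the shape of the parallel dispatch and retry loop is the point:

```typescript
// Hypothetical sketch — dispatchAgent stands in for a real subagent call
// (e.g. a Task tool invocation); it is not an actual API.
type AgentResult = { task: string; ok: boolean; output: string };

async function dispatchAgent(task: string): Promise<AgentResult> {
  // Placeholder: a real implementation would invoke the agent tool here.
  return { task, ok: true, output: `done: ${task}` };
}

async function runParallel(tasks: string[]): Promise<AgentResult[]> {
  // Dispatch all independent tasks at once and wait for every result.
  const results = await Promise.all(tasks.map(dispatchAgent));

  // Feedback loop: re-dispatch any task whose result failed verification,
  // rather than silently returning a partially failed batch.
  const failed = results.filter((r) => !r.ok);
  if (failed.length === 0) return results;

  const retried = await Promise.all(failed.map((r) => dispatchAgent(r.task)));
  return results.map(
    (r) => (r.ok ? r : retried.find((x) => x.task === r.task)!),
  );
}
```

The retry branch is the piece the review flags as missing: without it, a conflict or test failure after integration has no defined next step.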
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Significant verbosity throughout. The 'Key Benefits' section restates what's already obvious. 'When NOT to Use' repeats the 'Don't use when' section nearly verbatim. The 'Real-World Impact' section duplicates the 'Real Example from Session' section. The dot graph is unnecessary overhead. Multiple sections explain concepts Claude already understands (what parallel execution is, why focus helps). | 1 / 3 |
| Actionability | The agent prompt structure example is concrete and useful, and the TypeScript dispatch snippet shows the pattern. However, the Task() calls are pseudocode rather than actual tool invocations, and there's no specification of which tool to use or exact API. The guidance is more conceptual pattern than executable instruction. | 2 / 3 |
| Workflow Clarity | The 4-step pattern (Identify → Create → Dispatch → Review) is clearly sequenced, and the verification section exists. However, there's no explicit feedback loop for when agent results conflict or when the full test suite fails after integration — what do you do then? For a workflow involving potentially conflicting code changes, the lack of a conflict resolution/retry loop is a gap. | 2 / 3 |
| Progressive Disclosure | Content is structured with clear headers and sections, but it's entirely monolithic — everything is in one file with no references to external resources. The real example, common mistakes, and detailed prompt structure could be split into separate files. The content is ~150 lines when it could be ~60 lines with references. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Validation
100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.