**Skill description:** Automatic agent selection and intelligent task routing. Analyzes user requests and automatically selects the best specialist agent(s) without requiring explicit user mentions.
## Quality: 22%

Does it follow best practices?

- **Impact:** Pending (no eval scenarios have been run)
- Passed: no known issues

To optimize this skill with Tessl, run:
`npx tessl skill review --optimize ./.agent/skills/intelligent-routing/SKILL.md`
### Discovery: 17%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description suffers from technical jargon that users wouldn't naturally use, and it lacks explicit trigger guidance for when to apply the skill. While it attempts to describe the capability, the abstract language ('intelligent task routing', 'specialist agents') doesn't provide concrete actions or natural keywords that would help Claude reliably select this skill.
**Suggestions**

- Add a 'Use when...' clause with explicit triggers, e.g., 'Use when the user has a complex request that could benefit from multiple specialized capabilities or when no specific tool is mentioned.'
- Replace technical jargon with natural user language. Instead of 'specialist agent(s)', describe what users would actually say, such as 'help me with...', 'I need to...', or specific task types.
- List concrete examples of the types of tasks or requests this skill handles to improve specificity and help distinguish it from other orchestration-type skills.
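A before/after sketch can make the first two suggestions concrete. The wording below is illustrative, not the skill's actual frontmatter:

```yaml
# Before: abstract jargon, no trigger guidance
description: >
  Automatic agent selection and intelligent task routing. Analyzes user
  requests and automatically selects the best specialist agent(s) without
  requiring explicit user mentions.
---
# After (hypothetical rewrite): natural language plus a 'Use when...' clause
description: >
  Picks the right helper for a request, such as code review, test writing,
  or documentation. Use when the user has a complex request that could
  benefit from multiple specialized capabilities, or when no specific tool
  is mentioned.
```

The 'After' version trades 'intelligent task routing' for task types a user might actually name, and ends with an explicit trigger clause.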
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (agent selection, task routing) and describes actions (analyzes requests, selects agents), but lacks concrete specifics about what types of tasks, which agents, or what 'intelligent routing' actually entails. | 2 / 3 |
| Completeness | Describes what it does (analyzes and routes to agents) but lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. | 1 / 3 |
| Trigger Term Quality | Uses technical jargon like 'agent selection', 'task routing', and 'specialist agent(s)' that users would not naturally say. Missing natural trigger terms users might actually use when needing this functionality. | 1 / 3 |
| Distinctiveness / Conflict Risk | The concept of 'agent selection' is somewhat specific, but 'task routing' and 'analyzes user requests' are generic enough to potentially overlap with orchestration, delegation, or general task-management skills. | 2 / 3 |
| **Total** | | **6 / 12 (Passed)** |
### Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill suffers from severe verbosity, repeating the same routing concepts across multiple formats (tables, flowcharts, code, examples) without adding new information. While the agent selection matrices provide useful reference material, the document lacks executable implementation details and could be condensed significantly. The monolithic structure makes it difficult to navigate and maintain.
**Suggestions**

- Reduce content by 60%+ by eliminating redundant explanations: keep one agent selection table and remove the duplicate presentations in flowcharts and pseudo-code.
- Split into multiple files: a main SKILL.md with the core routing logic, a separate AGENTS.md for agent definitions, and a DEBUG.md for testing and debugging.
- Replace pseudo-code with concrete, actionable instructions for how Claude should analyze requests (e.g., specific keyword-matching rules rather than abstract 'classifyRequest' functions).
- Add explicit validation steps: what to do when agent selection produces poor results, how to detect misrouting, and recovery procedures.
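As a rough sketch of what 'specific keyword-matching rules' with a validation checkpoint could look like, consider the following. The agent names and keywords are hypothetical, not taken from the skill under review:

```python
# Hypothetical keyword-matching router: each agent is selected by counting
# how many of its trigger keywords appear in the request, replacing an
# abstract classifyRequest() step with explicit, auditable rules.

AGENT_KEYWORDS = {
    "code-reviewer": ["review", "refactor", "lint"],
    "test-writer": ["test", "coverage", "assert"],
    "doc-writer": ["document", "readme", "docstring"],
}

FALLBACK_AGENT = "general-assistant"


def route_request(request: str) -> str:
    """Return the agent whose keywords best match the request text."""
    text = request.lower()
    scores = {
        agent: sum(1 for kw in keywords if kw in text)
        for agent, keywords in AGENT_KEYWORDS.items()
    }
    best_agent, best_score = max(scores.items(), key=lambda item: item[1])
    # Validation checkpoint: if nothing matched, fall back rather than
    # misroute -- this is the kind of recovery step the skill lacks.
    return best_agent if best_score > 0 else FALLBACK_AGENT
```

Explicit rules like these are also testable, which directly supports the suggestion to add misrouting detection.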
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose, with excessive repetition. The same concepts (agent selection, routing) are explained multiple times in different formats (tables, flowcharts, pseudo-code, examples). Contains unnecessary meta-commentary like 'Testing the System' and 'User Education' sections that add little value. Could be reduced by 60-70%. | 1 / 3 |
| Actionability | Provides concrete agent selection matrices and keyword mappings, which are actionable. However, the pseudo-code is not executable, the mermaid diagram is illustrative rather than functional, and the actual implementation details for how Claude should perform this routing are abstract rather than concrete instructions. | 2 / 3 |
| Workflow Clarity | The workflow is presented (analyze → detect domains → assess complexity → select agent) but lacks explicit validation checkpoints. No feedback loops for when agent selection fails or produces poor results. The 'Edge Cases' section helps but doesn't provide recovery steps. | 2 / 3 |
| Progressive Disclosure | Monolithic wall of text with no references to external files. All content is inline despite being over 200 lines. The document could benefit from splitting agent definitions, test cases, and debug instructions into separate files. References to 'GEMINI.md' exist, but the relationship is unclear. | 1 / 3 |
| **Total** | | **6 / 12 (Passed)** |
### Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

**Validation: 10 / 11 Passed**

Validation for skill structure:
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | **10 / 11 Passed** | |
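The `frontmatter_unknown_keys` warning can typically be cleared by moving non-standard keys under `metadata`, as the check's own description suggests. The key names below are hypothetical:

```yaml
---
name: intelligent-routing
description: Routes requests to the right specialist agent.
# Hypothetical offending keys would sit at the top level:
#   priority: high
#   owner: platform-team
# Moving them under 'metadata' keeps the same information without
# tripping the unknown-keys check:
metadata:
  priority: high
  owner: platform-team
---
```

This is a sketch under the assumption that the spec accepts arbitrary keys inside `metadata`; consult the validation spec for the authoritative key list.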