This skill should be used when the user asks to "design agent tools", "create tool descriptions", "reduce tool complexity", "implement MCP tools", or mentions tool consolidation, architectural reduction, tool naming conventions, or agent-tool interfaces.
Score: 73

Quality: 66% — Does it follow best practices?

Impact: Pending — No eval scenarios have been run. Advisory: suggest reviewing before use.

Optimize this skill with Tessl: `npx tessl skill review --optimize ./skills/tool-design/SKILL.md`

Quality
Discovery
47% — Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is essentially a trigger-term list masquerading as a skill description. While it excels at providing keywords users might say, it fails to explain what the skill actually does: what actions it performs, what outputs it creates, or what problems it solves. The description inverts the expected structure by only addressing "when to use" without explaining "what it does".
Suggestions
- Add a clear 'what' statement at the beginning describing concrete actions (e.g., 'Designs and documents agent tool interfaces, creates tool schemas, and optimizes tool architectures for MCP-compatible systems.')
- Restructure to lead with capabilities, then follow with a 'Use when...' clause containing the trigger terms
- Include specific outputs or deliverables the skill produces (e.g., 'generates tool specifications, API contracts, or consolidation recommendations')
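Taken together, the suggestions imply a description that leads with capabilities and closes with the trigger terms. A minimal sketch, assuming a SKILL.md YAML frontmatter; the capability wording here is illustrative, not taken from the skill itself:

```yaml
# Hypothetical restructured frontmatter: "what" first, "use when" second.
name: tool-design
description: >
  Designs and documents agent tool interfaces: creates tool schemas and
  descriptions, consolidates overlapping tools, and produces naming and
  architecture recommendations for MCP-compatible systems. Use when the
  user asks to "design agent tools", "create tool descriptions", "reduce
  tool complexity", or "implement MCP tools", or mentions tool
  consolidation, architectural reduction, tool naming conventions, or
  agent-tool interfaces.
```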
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (agent tools, MCP tools) and mentions some actions like 'design', 'create', 'reduce', 'implement', but doesn't list concrete specific actions or outputs. The description focuses more on trigger terms than explaining what the skill actually does. | 2 / 3 |
| Completeness | The description only addresses 'when' (trigger conditions) but completely omits 'what' — there's no explanation of what the skill actually does, what outputs it produces, or what capabilities it provides. It's essentially just a list of trigger phrases. | 1 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would say: 'design agent tools', 'create tool descriptions', 'reduce tool complexity', 'implement MCP tools', 'tool consolidation', 'architectural reduction', 'tool naming conventions', 'agent-tool interfaces'. These are specific and varied. | 3 / 3 |
| Distinctiveness / Conflict Risk | The focus on 'agent tools' and 'MCP tools' provides some distinctiveness, but terms like 'tool descriptions' and 'naming conventions' could overlap with general documentation or coding style skills. The niche is somewhat clear but not fully defined. | 2 / 3 |
| Total | | 8 / 12 (Passed) |
Implementation
85% — Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a strong, well-structured skill that provides actionable guidance for designing agent tools. The concrete code examples, clear anti-patterns, and comprehensive gotchas section make it immediately useful. The main weakness is some verbosity in explaining concepts that Claude would already understand, which could be trimmed to improve token efficiency.
Suggestions
- Trim explanatory passages like 'Why Consolidation Works' that explain reasoning Claude can infer, keeping only the actionable guidance and examples
- Consider moving the lengthy 'Detailed Topics' subsections to reference files, keeping only summaries in the main skill body
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is comprehensive but includes some verbose explanations that Claude would already understand (e.g., explaining why consolidation works, general API design concepts). Some sections could be tightened while preserving the actionable content. | 2 / 3 |
| Actionability | Provides concrete, executable code examples (tool definitions with full docstrings, the optimize_tool_description function), specific naming conventions (CUST-###### format, ServerName:tool_name), and copy-paste ready patterns. The examples clearly demonstrate both good and bad practices. | 3 / 3 |
| Workflow Clarity | Clear sequences are provided throughout: the Tool Selection Framework has explicit numbered steps, the Tool-Testing Agent Pattern shows a clear feedback loop process, and the description structure answers four specific questions in order. Validation is addressed through the testing criteria and error recovery guidance. | 3 / 3 |
| Progressive Disclosure | Well-organized with clear sections (Core Concepts → Detailed Topics → Practical Guidance → Examples → Guidelines → Gotchas). References to external files (best_practices.md, architectural_reduction.md) are one level deep with clear 'Read when' guidance. Content is appropriately split between overview and detailed topics. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
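The Actionability row above credits tool definitions whose docstrings answer what the tool does, when to use it, what it takes, and what it returns, plus strict ID formats like CUST-######. A minimal sketch of that docstring pattern; the function name, fields, and stub behavior are hypothetical, not taken from the skill:

```python
import re

def customer_lookup(customer_id: str) -> dict:
    """Look up a customer record by ID.

    Use when the agent already has an identifier in the CUST-###### format
    (e.g. "CUST-001234") and needs account details. Do not use for free-text
    search; there is no fuzzy matching here.

    Args:
        customer_id: Customer identifier, formatted "CUST-" plus 6 digits.

    Returns:
        A dict with "id", "name", and "status" keys.

    Raises:
        ValueError: If customer_id does not match the expected format.
    """
    if not re.fullmatch(r"CUST-\d{6}", customer_id):
        raise ValueError(f"Expected CUST-###### format, got {customer_id!r}")
    # Stub lookup; a real tool would query a backing store.
    return {"id": customer_id, "name": "Example Customer", "status": "active"}
```

Validating the ID format inside the tool and raising a descriptive error is what gives an agent a recoverable failure mode rather than a silent miss.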
Validation
100% — Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.