
tool-design

This skill should be used when the user asks to "design agent tools", "create tool descriptions", "reduce tool complexity", "implement MCP tools", or mentions tool consolidation, architectural reduction, tool naming conventions, or agent-tool interfaces.


Quality: 53%

Does it follow best practices?

Impact: Pending

No eval scenarios have been run

Security (by Snyk)

Advisory: review suggested before use.

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/tool-design/SKILL.md

Quality

Discovery: 72%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description excels at providing trigger terms and establishing a distinct niche around agent tool design and MCP tools. However, it is structured entirely as a 'Use when...' clause without first explaining what the skill actually does — what concrete actions it performs, what outputs it produces, or what methodology it follows. This makes it strong for skill selection but weak for setting expectations about the skill's capabilities.

Suggestions

Add a leading sentence describing concrete capabilities, e.g., 'Designs agent tool schemas, generates tool descriptions, consolidates overlapping tools, and defines naming conventions for MCP-based agent-tool interfaces.'

Restructure to follow the 'what it does + when to use it' pattern: lead with specific actions/outputs, then follow with the existing trigger clause.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description names the domain (agent tools, MCP tools) and mentions some actions, like 'design agent tools', 'create tool descriptions', and 'reduce tool complexity', but these are listed as trigger phrases rather than concrete capabilities the skill performs. It doesn't clearly state what the skill actually does (e.g., 'Generates tool schemas', 'Refactors tool interfaces'). | 2 / 3 |
| Completeness | The description has a strong 'when' clause with explicit triggers, but the 'what does this do' part is essentially missing. It tells Claude when to use the skill but never explains what the skill actually does or what outputs it produces. The 'what' is only implied through the trigger terms. | 2 / 3 |
| Trigger Term Quality | Good coverage of natural terms users would say: 'design agent tools', 'create tool descriptions', 'reduce tool complexity', 'implement MCP tools', 'tool consolidation', 'architectural reduction', 'tool naming conventions', 'agent-tool interfaces'. These are realistic phrases a user working in this domain would use. | 3 / 3 |
| Distinctiveness / Conflict Risk | The description targets a clear niche — agent tool design, MCP tools, tool consolidation, and tool naming conventions. This domain is specific enough that conflicts with other skills are unlikely, as the trigger terms are highly specialized. | 3 / 3 |
| **Total** | | 10 / 12 |

Passed

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill covers tool design for agents comprehensively but suffers significantly from verbosity — it over-explains concepts Claude already knows (why vague descriptions are bad, what consolidation means) and restates core principles multiple times across sections. The actionability is moderate with some useful examples but too much conceptual framing. The content would benefit greatly from aggressive trimming to perhaps 40% of its current length, moving detailed explanations to reference files.

Suggestions

Cut content by at least 50% — remove explanations of why obvious anti-patterns are bad (e.g., the 10-line breakdown of why `def search(query)` is poor), eliminate repeated statements of the consolidation principle, and trust Claude to understand implications from concise statements.

Move the detailed subsections (Architectural Reduction, Tool Description Engineering, Response Format Optimization, Error Message Design) into separate reference files and keep only actionable summaries in the main skill file.

Replace the conceptual `optimize_tool_description` pseudocode with a concrete, executable example — either a real testing script or remove it in favor of a concise description of the feedback loop pattern.

Add explicit validation checkpoints to the Tool Selection Framework (e.g., 'Test: present 10 representative queries and verify each routes to exactly one tool with no ambiguity').
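The routing checkpoint suggested above can be sketched as a small test harness. This is a minimal illustration only: the tool names, descriptions, and the naive keyword-overlap matcher below are invented assumptions, not taken from the skill itself.

```python
# Illustrative sketch: check that each representative query routes to exactly
# one tool. The tool catalog and the keyword-overlap matcher are hypothetical
# stand-ins for whatever real routing the agent performs.

TOOLS = {
    "search_docs": "Search product documentation for a keyword or phrase.",
    "create_ticket": "Create a support ticket with a title and description.",
    "get_ticket_status": "Look up the current status of an existing ticket.",
}

def matching_tools(query: str) -> list[str]:
    """Return tools whose descriptions share at least one word with the query."""
    query_words = set(query.lower().split())
    return [
        name for name, desc in TOOLS.items()
        if query_words & set(desc.lower().split())
    ]

def validate_routing(queries: list[str]) -> list[str]:
    """Return queries that route to zero or multiple tools (i.e., ambiguous)."""
    return [q for q in queries if len(matching_tools(q)) != 1]
```

In practice the matcher would be the agent itself, but even this toy version makes the checkpoint concrete: run the representative queries and require `validate_routing` to come back empty before shipping the tool set.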

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is extremely verbose at over 300 lines, repeatedly explaining concepts Claude already understands (what tool descriptions are, why vague names are bad, how agents select tools). Many sections restate the same principles in slightly different ways (consolidation is explained multiple times). The 'Poor Tool Design' example spends 10+ lines explaining why `def search(query)` is bad — something Claude can infer instantly. | 1 / 3 |
| Actionability | The skill provides some concrete examples (a well-designed tool docstring, the MCP naming format, the `optimize_tool_description` function), but much of the content is conceptual guidance rather than executable steps. The code examples are illustrative rather than copy-paste ready — `optimize_tool_description` calls a fictional `get_agent_response` and is more pseudocode than actionable code. | 2 / 3 |
| Workflow Clarity | The Tool Selection Framework provides a numbered sequence but lacks validation checkpoints or feedback loops for the design process itself. The 'Tool-Testing Agent Pattern' describes an iterative feedback loop conceptually but doesn't provide clear validation steps. For a skill involving tool design decisions that can significantly impact agent performance, explicit validation/testing checkpoints are missing. | 2 / 3 |
| Progressive Disclosure | The skill references external files (architectural_reduction.md, best_practices.md) with clear 'Read when' guidance, which is good. However, the main file itself is monolithic — the detailed topics section contains extensive inline content that could be split into separate reference files. The ratio of overview to inline detail is heavily skewed toward inline detail. | 2 / 3 |
| **Total** | | 7 / 12 |

Passed
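The iterative feedback loop criticized above for being only conceptual can be made concrete with a small scoring harness. In this sketch, `run_agent` is a hypothetical callable standing in for whatever agent client is in use, and the tallying logic is an assumption about how one might measure tool-selection accuracy:

```python
# Hypothetical harness for a tool-testing feedback loop: run test prompts,
# record which tool the agent picked, and tally hits vs. misses per tool.
# `run_agent` is a placeholder (prompt -> chosen tool name), not a real API.
from collections import Counter
from typing import Callable

def evaluate_descriptions(
    run_agent: Callable[[str], str],
    cases: list[tuple[str, str]],  # (prompt, expected tool name) pairs
) -> dict[str, Counter]:
    """Tally correct vs. wrong tool selections, keyed by expected tool."""
    results: dict[str, Counter] = {}
    for prompt, expected in cases:
        chosen = run_agent(prompt)
        tally = results.setdefault(expected, Counter())
        tally["correct" if chosen == expected else "wrong"] += 1
    return results
```

The loop then becomes: run the cases, inspect the tools with high `wrong` counts, rewrite their descriptions, and re-run until the tallies stabilize.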

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: muratcankoylan/Agent-Skills-for-Context-Engineering (Reviewed)

