
# agent-tool-builder

Tools are how AI agents interact with the world. A well-designed tool is the difference between an agent that works and one that hallucinates, fails silently, or costs 10x more tokens than necessary. This skill covers tool design from schema to error handling.

Overall score: 50

- Quality: 39% (Does it follow best practices?)
- Impact: Pending (No eval scenarios have been run)
- Security by Snyk: Advisory (Suggest reviewing before use)

To optimize this skill with Tessl, run:

```shell
npx tessl skill review --optimize ./skills/antigravity-agent-tool-builder/SKILL.md
```

## Quality

### Discovery: 14%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description reads like a course blurb or marketing copy rather than a functional skill description. It lacks concrete actions, has no 'Use when...' clause, and uses vague aspirational language ('the difference between an agent that works and one that hallucinates') that doesn't help Claude select this skill appropriately. The first sentence is entirely fluff and wastes space that could specify capabilities.

#### Suggestions

- Replace the marketing-style opening with concrete actions, e.g., 'Designs tool schemas, writes tool descriptions, implements error handling and validation for AI agent tool-use interfaces.'
- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks about function calling, tool schemas, MCP tools, tool_use blocks, or designing tools for AI agents.'
- Remove subjective claims like 'hallucinates, fails silently, or costs 10x more tokens' and replace them with specific, distinguishing capabilities that separate this skill from general API or agent development skills.
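Applying these suggestions, the skill's frontmatter might read as follows. This is a hypothetical sketch of an improved description, not the skill's actual metadata:

```yaml
---
name: agent-tool-builder
description: >
  Designs tool schemas, writes tool descriptions, and implements error
  handling and validation for AI agent tool-use interfaces. Use when the
  user asks about function calling, tool schemas, MCP tools, tool_use
  blocks, or designing tools for AI agents.
---
```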

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description uses vague, abstract language like 'interact with the world' and 'tool design from schema to error handling.' It does not list concrete actions the skill performs; it reads more like a marketing pitch than a capability description. | 1 / 3 |
| Completeness | The 'what' is vaguely stated ('covers tool design from schema to error handling') and there is no 'when' clause or explicit trigger guidance at all. The missing 'Use when...' clause caps this at 2 per the rubric, and the weak 'what' brings it to 1. | 1 / 3 |
| Trigger Term Quality | It includes some relevant terms like 'tool design', 'schema', 'error handling', 'AI agents', and 'tokens', but these are somewhat technical and miss common user phrasings like 'function calling', 'API tools', 'tool_use', or 'MCP tools'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The description is very broad: 'tools', 'AI agents', and 'error handling' could overlap with many other skills related to API design, agent development, or general coding best practices. There is no clear niche carved out. | 1 / 3 |

Total: 5 / 12 (Passed)

### Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a comprehensive and highly actionable skill with excellent concrete examples covering tool schema design, error handling, MCP, tool runners, and parallel execution. Its main weaknesses are verbosity from metadata-like sections (Capabilities, Scope, When to Use) that belong in frontmatter, a lack of a clear end-to-end workflow with validation checkpoints, and the monolithic structure that could benefit from splitting detailed implementations into referenced files.

#### Suggestions

- Move the 'Capabilities', 'Scope', 'When to Use', and 'Collaboration' sections to YAML frontmatter or a separate metadata file; they consume tokens without providing actionable guidance.
- Add a clear end-to-end workflow section (e.g., '1. Define schema → 2. Write descriptions → 3. Validate schema → 4. Implement with error handling → 5. Test with LLM') with explicit validation checkpoints at each step.
- Split lengthy implementation patterns (MCP server, Tool Runner) into separate referenced files (e.g., 'See [MCP_GUIDE.md](MCP_GUIDE.md) for the full implementation') to improve progressive disclosure.
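The suggested define-schema, validate, implement-with-error-handling workflow can be sketched in Python. This is a minimal hypothetical example, not code from the skill itself; the tool name `get_weather` and the helper `run_weather_tool` are illustrative:

```python
# Hypothetical sketch: a tool definition with a JSON schema, followed by a
# runner that validates arguments and returns structured errors the model
# can read and correct, rather than raising exceptions.

WEATHER_TOOL = {
    "name": "get_weather",
    "description": (
        "Get the current weather for a city. "
        "Use when the user asks about weather conditions or temperature."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. 'Paris'"},
            "unit": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"],
                "description": "Temperature unit (default: celsius)",
            },
        },
        "required": ["city"],
    },
}


def run_weather_tool(args: dict) -> dict:
    """Validate arguments against the schema and return a structured result."""
    schema = WEATHER_TOOL["input_schema"]
    # Validation checkpoint 1: required fields present?
    missing = [key for key in schema["required"] if key not in args]
    if missing:
        return {"error": f"Missing required argument(s): {', '.join(missing)}"}
    # Validation checkpoint 2: enum values in range?
    unit = args.get("unit", "celsius")
    if unit not in schema["properties"]["unit"]["enum"]:
        return {"error": f"Invalid unit '{unit}'; expected 'celsius' or 'fahrenheit'"}
    # Stubbed result; a real implementation would call a weather API here.
    return {"city": args["city"], "unit": unit, "temperature": 21}
```

The point of returning an `{"error": ...}` dict instead of raising is that the error text flows back to the model as a tool result, giving it a chance to retry with corrected arguments.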

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill contains some unnecessary framing (e.g., the intro paragraph restating the description, the 'Capabilities' and 'Scope' sections that are metadata-like rather than instructional, and commentary like 'Improves accuracy from 72% to 90%'). However, the core patterns and code examples are reasonably efficient. The overall length is substantial, but much of it is useful code examples. | 2 / 3 |
| Actionability | The skill provides fully executable code examples across multiple languages (Python, TypeScript), concrete JSON schema examples with good/bad comparisons, complete MCP server implementations, tool runner patterns, and parallel execution handling. Examples are copy-paste ready with realistic data. | 3 / 3 |
| Workflow Clarity | While individual patterns are well explained, the overall workflow of building a tool from scratch lacks a clear sequential process with validation checkpoints. The validation checks section lists rules but doesn't integrate them into a step-by-step build-validate-test workflow. Error handling is covered as a pattern but not as an explicit validation gate in a workflow. | 2 / 3 |
| Progressive Disclosure | The content is organized into clear sections (Patterns, Validation Checks, Collaboration), but it is a monolithic document with no references to external files for detailed content. The MCP server implementation and tool runner patterns are quite lengthy and could be split into separate reference files. The 'Capabilities', 'Scope', and 'When to Use' sections feel like frontmatter that leaked into the body. | 2 / 3 |

Total: 9 / 12 (Passed)

### Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 9 / 11 checks passed

| Criteria | Description | Result |
| --- | --- | --- |
| skill_md_line_count | SKILL.md is long (711 lines); consider splitting content into references/ and linking | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing them or moving them to metadata | Warning |

Total: 9 / 11 (Passed)

Repository: boisenoise/skills-collections (Reviewed)

