
agent-tool-builder

You are an expert in the interface between LLMs and the outside world. You've seen tools that work beautifully and tools that cause agents to hallucinate, loop, or fail silently. The difference is almost always in the design, not the implementation.


Quality: 7%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run.

Security (by Snyk): Passed
No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/agent-tool-builder/SKILL.md

Quality

Discovery: 0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is a persona statement ('You are an expert...') rather than a functional skill description. It fails to specify any concrete actions, provides no trigger terms a user would naturally use, and completely lacks a 'Use when...' clause. It would be nearly impossible for Claude to correctly select this skill from a list of alternatives.

Suggestions

Replace the persona narrative with concrete actions the skill performs, e.g., 'Designs and reviews LLM tool schemas, validates tool definitions, and debugs agent tool-calling failures.'

Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks about designing tool schemas, fixing tool-calling loops, debugging agent tool use, or reviewing function definitions for LLM agents.'

Switch from second-person voice ('You are an expert') to third-person voice describing capabilities ('Reviews and improves LLM tool definitions...').
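Applied together, these suggestions might produce frontmatter along the following lines. This is a hypothetical sketch: the wording is illustrative, assembled from the examples above, not the maintainer's actual text.

```yaml
# Hypothetical SKILL.md frontmatter; the description is illustrative only.
name: agent-tool-builder
description: >
  Designs and reviews LLM tool schemas, validates tool definitions, and
  debugs agent tool-calling failures. Use when the user asks about designing
  tool schemas, fixing tool-calling loops, debugging agent tool use, or
  reviewing function definitions for LLM agents.
```

Note that the description is third-person and capability-focused, and the 'Use when...' clause carries the natural trigger terms an agent can match against a user request.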

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description contains no concrete actions whatsoever. It uses abstract, narrative language about 'the interface between LLMs and the outside world' and 'tools that work beautifully' without specifying what the skill actually does. | 1 / 3 |
| Completeness | Neither 'what does this do' nor 'when should Claude use it' is answered. There is no 'Use when...' clause, no explicit triggers, and no description of capabilities. The entire text is a persona/expertise statement. | 1 / 3 |
| Trigger Term Quality | There are no natural user-facing trigger terms. Words like 'LLMs', 'hallucinate', 'loop', and 'fail silently' are technical jargon that users are unlikely to use when requesting help. The description reads like a persona introduction, not a skill selector. | 1 / 3 |
| Distinctiveness / Conflict Risk | The description is extremely generic: 'tools', 'design', 'LLMs and the outside world' could apply to virtually any tool-building, API design, or agent-related skill. It provides no clear niche or distinguishing triggers. | 1 / 3 |
| Total | | 4 / 12 |

Passed

Implementation: 14%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is essentially an outline or stub with no actionable content. It lists topics (tool schema design, error handling, anti-patterns) but provides zero concrete guidance, examples, code, or workflows. The content appears truncated ('explicit error hand') and every pattern/anti-pattern section is an empty header, making it unusable as a skill file.

Suggestions

Add concrete, executable code examples for each pattern—e.g., a complete JSON Schema for a well-designed tool with description, input validation, and example usage.

Flesh out the anti-patterns with specific bad examples and their corrected versions, showing the before/after difference in tool descriptions.

Define a clear workflow for building a tool: 1) Design schema → 2) Write descriptions → 3) Add input examples → 4) Test with LLM → 5) Validate error responses, with explicit validation checkpoints.

Either add substantive content to each section or link to dedicated reference files (e.g., 'See [SCHEMA_GUIDE.md](SCHEMA_GUIDE.md) for complete schema design patterns') to enable progressive disclosure.
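As a sketch of the first two suggestions, the contrast between a vague tool definition and a well-designed one can be shown as JSON-Schema-style definitions, along with a small check that flags the description problems the review calls out. The tool names, fields, and heuristics here are hypothetical, invented for illustration; they are not taken from the reviewed skill.

```python
# Anti-pattern: vague description, untyped input (hypothetical example).
bad_tool = {
    "name": "search",
    "description": "Searches stuff.",
    "input_schema": {"type": "object"},
}

# Pattern: action-oriented description, constrained and documented inputs.
good_tool = {
    "name": "search_orders",
    "description": (
        "Search customer orders by status. Use when the user asks about "
        "order history or delivery status. Returns at most `limit` "
        "matching orders, newest first."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "status": {
                "type": "string",
                "enum": ["pending", "shipped", "delivered"],
                "description": "Order status to filter by.",
            },
            "limit": {
                "type": "integer",
                "minimum": 1,
                "maximum": 50,
                "default": 10,
                "description": "Maximum number of orders to return.",
            },
        },
        "required": ["status"],
        "additionalProperties": False,
    },
}


def schema_smells(tool: dict) -> list[str]:
    """Flag a few of the description problems the review calls out."""
    smells = []
    props = tool["input_schema"].get("properties", {})
    if len(tool["description"]) < 40:
        smells.append("description too short to guide tool selection")
    if not props:
        smells.append("input schema declares no typed properties")
    for name, spec in props.items():
        if "description" not in spec:
            smells.append(f"parameter '{name}' is undocumented")
    return smells
```

Here `schema_smells(bad_tool)` reports two problems, while `schema_smells(good_tool)` reports none: the enum constrains the value space the model can emit, and per-parameter descriptions give the LLM the context it never gets from the implementation itself.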

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The content is relatively short but wastes tokens on vague philosophical statements ('The LLM never sees your code') and empty section headers with no actual content. The 'Capabilities' list and 'When to Use' section add no actionable value. | 2 / 3 |
| Actionability | There is no concrete code, no executable examples, no specific commands, and no actual tool schemas shown. Every section is either a label or a vague description; nothing is copy-paste ready or instructive enough to act on. | 1 / 3 |
| Workflow Clarity | There is no workflow, no sequenced steps, and no validation checkpoints. The 'Patterns' and 'Anti-Patterns' sections are just headers with no content explaining what to do or in what order. | 1 / 3 |
| Progressive Disclosure | The content is a skeleton of headers with almost no substance. There are no references to deeper files, no linked resources, and the structure gives the illusion of organization without any actual content to disclose progressively. | 1 / 3 |
| Total | | 5 / 12 |

Passed

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 10 / 11 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata. | Warning |
| Total | | 10 / 11 |

Passed

Repository: sickn33/antigravity-awesome-skills (Reviewed)

