
create-agent

Comprehensive guide for creating Claude Code agents with proper structure, triggering conditions, system prompts, and validation - combines official Anthropic best practices with proven patterns


Quality: 30% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/customaize-agent/skills/create-agent/SKILL.md

Quality

Discovery

32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a clear domain (Claude Code agents) but reads more like a document subtitle than a skill description. It lacks concrete actions, explicit trigger conditions, and natural keyword variations that would help Claude reliably select this skill from a large pool. The phrase 'Comprehensive guide' is passive and doesn't communicate actionable capabilities.

Suggestions

- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to create, configure, or debug Claude Code sub-agents, agentic workflows, or multi-agent systems.'
- Replace vague category words with concrete actions, e.g., 'Generates agent boilerplate, writes system prompts, configures tool permissions, and sets up validation loops for Claude Code agents.'
- Include natural trigger term variations users would say, such as 'sub-agent', 'agentic workflow', 'multi-agent', 'agent orchestration', 'dispatch', or 'tool use'.
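Taken together, these suggestions might produce frontmatter along the following lines. This is an illustrative sketch, not the skill's actual metadata:

```yaml
---
name: create-agent
description: >
  Generates agent boilerplate, writes system prompts, configures tool
  permissions, and sets up validation loops for Claude Code agents.
  Use when the user asks to create, configure, or debug Claude Code
  sub-agents, agentic workflows, multi-agent systems, or agent
  orchestration.
---
```

Note how the rewrite leads with concrete actions and closes with an explicit 'Use when...' clause containing the trigger term variations.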

Dimension scores

Specificity (2 / 3): Names the domain ('Claude Code agents') and lists some aspects ('structure, triggering conditions, system prompts, validation'), but these are more like categories than concrete actions. It doesn't specify what the skill actually does (e.g., 'generates boilerplate code', 'writes system prompts', 'configures agent workflows').

Completeness (1 / 3): It describes 'what' at a high level (a guide for creating Claude Code agents) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and the 'what' is also vague, so this scores a 1.

Trigger Term Quality (2 / 3): Includes relevant terms like 'Claude Code agents', 'system prompts', and 'Anthropic best practices', which users might mention. However, it misses common variations like 'sub-agent', 'agentic workflow', 'agent orchestration', 'multi-agent', or 'tool use' that users would naturally say.

Distinctiveness / Conflict Risk (2 / 3): The mention of 'Claude Code agents' and 'Anthropic best practices' provides some distinctiveness, but 'system prompts' and 'validation' are broad enough to overlap with general prompt engineering or testing skills. The lack of explicit triggers increases conflict risk.

Total: 7 / 12 (Passed)

Implementation

27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is comprehensive in coverage but severely undermined by its verbosity—it reads more like a reference manual than an actionable skill for Claude. It contains redundant sections, explains concepts Claude already knows, and presents conflicting guidance (e.g., description length). The monolithic structure with no progressive disclosure means the entire ~500+ line document loads into context for every use, wasting significant token budget.

Suggestions

- Reduce content by 60-70%: Remove 'What Are Agents?' section, the comparison table, basic naming convention explanations, and the AI-Assisted Agent Generation section. Keep only the file structure template, one production example, the creation process steps, and the validation checklist.
- Extract production examples and triggering patterns into separate referenced files (e.g., examples/code-quality-reviewer.md, patterns/triggering.md) to enable progressive disclosure.
- Resolve the contradiction between the detailed description guidance (200-1000 chars with 2-4 examples) and the 'Default Agent Standards' section (keep to ONE sentence, no verbose example blocks). Pick one approach and be consistent.
- Provide the actual validate-agent.sh script or remove references to it, and add a feedback loop: what to do when validation fails, common errors and fixes.
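In that spirit, here is a minimal sketch of what a validate-agent.sh could check. The specific checks (frontmatter delimiters, required keys, a line-count warning) are assumptions modeled on the warnings in the Validation section below, not the script this skill actually references:

```shell
# Hypothetical sketch of validate-agent.sh; every check here is illustrative.
validate_agent() {
  local file="$1" fail=0

  # Frontmatter must open with '---' on the first line
  [ "$(sed -n 1p "$file")" = "---" ] || { echo "error: missing frontmatter"; fail=1; }

  # Required frontmatter keys
  local key
  for key in name description; do
    grep -q "^${key}:" "$file" || { echo "error: missing key: ${key}"; fail=1; }
  done

  # Mirror the skill_md_line_count check: warn on very long files
  local lines
  lines=$(wc -l < "$file")
  [ "$lines" -le 500 ] || echo "warning: ${file} is ${lines} lines; consider splitting into references/"

  return "$fail"
}
```

A failing run prints each error and returns non-zero, which supplies the missing feedback loop: fix the listed key or delimiter and re-run until the script exits cleanly.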

Dimension scores

Conciseness (1 / 3): Extremely verbose at ~500+ lines. Explains concepts Claude already knows (what agents are, basic YAML frontmatter, kebab-case naming rules). Includes extensive tables, redundant examples, and sections like 'What Are Agents?' and 'AI-Assisted Agent Generation' that pad the content significantly. The 'Default Agent Standards' section partially contradicts earlier advice (e.g., description length guidance conflicts with 'keep to ONE sentence').

Actionability (2 / 3): Provides concrete file structure templates, example agent configurations, and a step-by-step creation process with bash commands. However, much of the content is descriptive rather than executable—the validation script is referenced but not provided, the 'Elite Agent Architect Process' is abstract, and the AI-assisted generation prompt is meta-guidance rather than directly actionable code.

Workflow Clarity (2 / 3): The 6-step 'Agent Creation Process' provides a clear sequence, and there's a quality checklist. However, the validation step references a script (validate-agent.sh) that isn't provided, there's no feedback loop for fixing validation failures, and the document presents multiple overlapping workflows (Agent Creation Process, Elite Agent Architect Process, Default Agent Standards) without clear reconciliation.

Progressive Disclosure (1 / 3): Monolithic wall of text with no references to external files despite the content clearly warranting it. The production examples alone take up hundreds of lines that could be in separate files. No bundle files are provided, and the content doesn't reference any supporting documents for detailed topics like validation scripts, triggering patterns, or example agents.

Total: 6 / 12 (Passed)
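The progressive-disclosure critique suggests a bundle layout along these lines. The tree is a sketch; only the file names drawn from the suggestions (examples/code-quality-reviewer.md, patterns/triggering.md, validate-agent.sh) come from this review, and their exact placement is assumed:

```
create-agent/
├── SKILL.md                      # slim core: template, process steps, checklist
├── examples/
│   └── code-quality-reviewer.md  # full production example, loaded on demand
├── patterns/
│   └── triggering.md             # triggering patterns reference
└── scripts/
    └── validate-agent.sh         # validation script referenced by SKILL.md
```

With this split, only the slim SKILL.md loads into context by default, and the detailed references are read on demand.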

Validation

72%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 8 / 11 passed

Validation for skill structure

skill_md_line_count (Warning): SKILL.md is long (670 lines); consider splitting into references/ and linking

allowed_tools_field (Warning): 'allowed-tools' contains unusual tool name(s)

frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing or moving to metadata

Total: 8 / 11 (Passed)

Repository: NeoLabHQ/context-engineering-kit (Reviewed)

