Use when creating new agents, editing existing agents, or defining specialized subagent roles for the Task tool
74

61%
Does it follow best practices?

Impact
97%
1.14x average score across 3 eval scenarios

Advisory
Suggest reviewing before use

Optimize this skill with Tessl:
`npx tessl skill review --optimize ./.claude/skills/writing-agents/SKILL.md`

Quality
Discovery
67%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is structured well as an explicit 'Use when...' clause, which clearly communicates trigger conditions. However, it lacks specificity about what concrete actions the skill performs (e.g., generating agent prompts, configuring tool access, setting agent parameters) and could benefit from more natural trigger terms that users would actually say. The domain is reasonably distinct but could be sharper.
Suggestions

- Add specific concrete actions the skill performs, e.g., 'Generates agent configuration, defines tool permissions, writes system prompts for subagents'
- Include more natural trigger term variations such as 'multi-agent', 'orchestration', 'agent prompt', 'spawn subagent', or 'agent workflow'
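Applied together, these suggestions might yield frontmatter like the following. This is an illustrative sketch only; the exact description wording and any fields beyond `name` and `description` are assumptions, not part of the reviewed skill:

```yaml
---
name: writing-agents
description: Use when creating new agents, editing existing agents, or defining
  specialized subagent roles for the Task tool. Generates agent configuration
  files, writes system prompts and personas, and sets tool permissions. Also
  applies to requests about multi-agent workflows, orchestration, agent prompts,
  or spawning subagents.
---
```

Note how the revised description keeps the original 'Use when...' trigger clause but adds the concrete actions and natural trigger terms the review asks for.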
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names the domain (agents/subagents) and some actions (creating, editing, defining roles), but doesn't list specific concrete capabilities like what aspects of agents can be configured, what formats are involved, or what outputs are produced. | 2 / 3 |
| Completeness | The description explicitly answers both 'what' (creating new agents, editing existing agents, defining specialized subagent roles) and 'when' (the entire description is framed as a 'Use when...' clause with clear trigger conditions). | 3 / 3 |
| Trigger Term Quality | Includes relevant terms like 'agents', 'subagent', and 'Task tool', but misses common variations users might say such as 'multi-agent', 'orchestration', 'spawn agent', 'agent configuration', or 'agent prompt'. 'Task tool' is somewhat technical jargon that users may not naturally use. | 2 / 3 |
| Distinctiveness / Conflict Risk | The mention of 'Task tool' and 'subagent roles' provides some distinctiveness, but 'creating new agents' and 'editing existing agents' could overlap with general coding skills or configuration management skills. The scope of 'agents' is somewhat ambiguous without more context. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
Implementation
55%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is highly actionable with excellent workflow clarity and concrete examples, but suffers significantly from verbosity and lack of progressive disclosure. The content is roughly 3-4x longer than necessary, with redundant sections (anti-patterns explained twice), concepts Claude already knows (TDD basics, what personas are), and extensive inline examples that could be referenced from separate files. Trimming to ~100-120 lines with supporting reference files would dramatically improve token efficiency.
Suggestions

- Cut content by 60-70%: remove the agents-vs-skills comparison table (Claude knows this), collapse redundant anti-patterns sections, remove the model selection table (basic knowledge), and trim example searches to 1-2 domains instead of 4.
- Extract common agent patterns (Specialist, Orchestrator, Reviewer) and detailed examples into a separate AGENT-PATTERNS.md reference file, keeping only a brief mention in the main skill.
- Remove explanatory framing like 'Writing agents IS Test-Driven Development' and 'The persona is the agent's DNA' — these are motivational, not instructional, and waste tokens.
- Consolidate the checklist and workflow sections — they largely duplicate each other. Keep the checklist as the single authoritative workflow reference.
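One way to act on the progressive-disclosure suggestion is to keep the main SKILL.md brief and defer detail to a referenced file. The layout below is an illustrative sketch, not a prescribed structure; the file name AGENT-PATTERNS.md and the pattern summaries are assumptions:

```markdown
## Common agent patterns

See [AGENT-PATTERNS.md](./AGENT-PATTERNS.md) for full Specialist, Orchestrator,
and Reviewer templates with complete examples. In brief:

- **Specialist** — deep expertise in one domain, narrow tool access
- **Orchestrator** — delegates work to subagents, broad read access
- **Reviewer** — read-only analysis; reports findings without editing files
```

This keeps the token cost of the main skill low while letting an agent pull in the detailed templates only when it actually needs them.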
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | At ~350+ lines, this skill is extremely verbose. It explains concepts Claude already knows (what agents are, TDD cycles, a table comparing agents vs skills), includes extensive examples that repeat similar patterns, and has significant redundancy between sections (anti-patterns appear both as a teaching section AND a dedicated anti-patterns-to-avoid section at the end). The comparison table, model selection table, and multiple pattern templates could be dramatically condensed. | 1 / 3 |
| Actionability | The skill provides highly concrete, actionable guidance: specific file paths (.claude/agents/), exact YAML frontmatter format, complete markdown examples for personas/scope/anti-patterns, specific bash commands for exploration, and a detailed checklist. The examples are copy-paste ready and cover multiple agent patterns with real-world specificity. | 3 / 3 |
| Workflow Clarity | The workflow is clearly sequenced (Research → Context → Write → Session Restart) with explicit steps within each phase. The RED-GREEN-REFACTOR testing cycle provides validation checkpoints. The comprehensive checklist at the end serves as a verification step. The session restart requirement is clearly flagged as a critical action. | 3 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with no references to supporting files. All content—agent patterns, examples, anti-patterns, checklists, model selection guidance—is inlined in a single massive document. The comparison table, common agent patterns, and detailed examples could easily be split into referenced files. The references at the bottom are external URLs, not bundle files. | 1 / 3 |
| Total | | 8 / 12 (Passed) |
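For reference, the agent file format the Actionability row credits looks roughly like this. A minimal sketch: the file name, description text, and prompt body are hypothetical, and frontmatter fields beyond `name` and `description` may vary by agent runtime:

```markdown
<!-- .claude/agents/code-reviewer.md -->
---
name: code-reviewer
description: Reviews diffs for correctness bugs, style issues, and missing tests
---

You are a senior code reviewer. Examine the provided diff, flag correctness
issues first, then style concerns. Never edit files; report findings only.
```

An agent definition is just a markdown file with YAML frontmatter placed under `.claude/agents/`, which is why the review scores the skill's copy-paste-ready examples highly.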
Validation
100%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.