Agent skill for agent - invoke with $agent-agent
Does it follow best practices?
Impact: 93%
4.65x average score across 3 eval scenarios
Passed — no known issues
Optimize this skill with Tessl:
`npx tessl skill review --optimize ./.agents/skills/agent-agent/SKILL.md`

Quality

Discovery
0% — Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an extremely poor skill description that provides essentially no useful information. It fails on every dimension: it describes no concrete actions, includes no natural trigger terms, answers neither 'what' nor 'when', and is completely indistinguishable from any other agent-related skill.
Suggestions
Replace the entire description with concrete actions the skill performs (e.g., 'Spawns sub-agents to handle parallel tasks, delegates work across multiple contexts, and coordinates agent responses').
Add an explicit 'Use when...' clause with natural trigger terms that describe the situations where this skill should be selected.
Include domain-specific keywords and file types or task types to make the skill clearly distinguishable from other skills in a large skill library.
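Taken together, the suggestions above amount to a frontmatter rewrite. A hypothetical sketch, reusing the example wording from the first suggestion (the trigger phrases are illustrative, not taken from the skill under review):

```yaml
---
name: agent-agent
description: >
  Spawns sub-agents to handle parallel tasks, delegates work across
  multiple contexts, and coordinates agent responses.
  Use when a request involves multi-agent workflows, sub-agent
  delegation, parallel task fan-out, or coordinating several agents
  on a single goal.
---
```

A description in this shape answers both 'what' and 'when', and the 'Use when...' clause carries the natural trigger terms an agent can match against.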
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description contains no concrete actions whatsoever. 'Agent skill for agent' is entirely vague and abstract, providing no information about what the skill actually does. | 1 / 3 |
| Completeness | Neither 'what does this do' nor 'when should Claude use it' is answered. There is no 'Use when...' clause and no description of functionality. | 1 / 3 |
| Trigger Term Quality | The only keyword, 'agent', is overly generic and not a natural term a user would say when needing a specific capability. The invocation syntax '$agent-agent' is not a natural trigger term. | 1 / 3 |
| Distinctiveness / Conflict Risk | The term 'agent' is extremely generic and would conflict with virtually any agent-related skill. There is nothing distinctive about this description. | 1 / 3 |
| Total | | 4 / 12 — Passed |
Implementation
12% — Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is an extremely verbose, largely non-actionable document that reads more like a marketing whitepaper or architectural vision document than a practical skill file. The extensive code examples are pseudocode masquerading as executable implementations, with numerous undefined functions and inconsistent tool naming. The content would benefit enormously from being reduced to ~50 lines of actual actionable guidance with references to separate files for advanced topics.
Suggestions
Reduce the main skill to under 100 lines: a brief description, the actual MCP tool names with correct syntax, one small executable example, and links to separate files for advanced workflows.
Make code examples truly executable by either providing complete implementations or replacing pseudocode with clear step-by-step instructions using actual MCP tool call syntax.
Fix inconsistent MCP tool naming (mcp__claude_flow__ vs mcp__flow_nexus__, underscores vs hyphens) and verify all tool names match the actual available tools.
Split content into separate files: TOOLS.md for tool reference, EXAMPLES.md for usage examples, ADVANCED.md for multi-agent coordination and gaming AI patterns, keeping SKILL.md as a concise overview with navigation links.
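A minimal sketch of the restructured layout these suggestions describe (the file names follow the suggestion above; the section wording and the placeholder example are illustrative, not the skill's actual content):

```markdown
# agent-agent

Spawns sub-agents to handle parallel tasks, delegates work across
multiple contexts, and coordinates agent responses.

## Quick start

<!-- one small, verified MCP tool call example goes here -->

## Reference

- [TOOLS.md](TOOLS.md) — MCP tool reference with verified names and syntax
- [EXAMPLES.md](EXAMPLES.md) — usage examples
- [ADVANCED.md](ADVANCED.md) — multi-agent coordination and gaming AI patterns
```

Keeping SKILL.md to this overview-plus-navigation shape stays under the 100-line target while the advanced material remains reachable through the links.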
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at 500+ lines. Massive amounts of code that are essentially pseudocode/illustrative rather than executable. Explains concepts Claude already knows (what GOAP is, what A* search is, what behavior trees are). The marketing-style language ('cutting edge of AI-driven objective achievement') wastes tokens. Much of the content is aspirational description rather than actionable instruction. | 1 / 3 |
| Actionability | Despite the volume of code, almost none of it is executable. Functions reference undefined helpers (buildConsensusMatrix, generatePreferenceVector, stateKey, canTransition, etc.), classes extend undefined base classes (GOAPAgent), and MCP tool calls use inconsistent naming (mcp__claude_flow__ vs mcp__flow_nexus__ vs mcp__sublinear_time_solver__, with underscores vs hyphens). The code is elaborate pseudocode dressed up as real implementations. | 1 / 3 |
| Workflow Clarity | The numbered workflow sections (1-5) provide a logical sequence from state modeling through optimization. However, there are no validation checkpoints, no error-recovery feedback loops in the main workflow, and no clear guidance on when to use which approach. The OODA loop in DynamicPlanner is conceptually clear but not practically executable. | 2 / 3 |
| Progressive Disclosure | Monolithic wall of text with no references to external files. Everything is inlined in one massive document: the tool reference, workflow details, multiple usage examples, advanced configuration, gaming AI integration, and best practices. Content that should live in separate reference files (tool API, examples, advanced features) is all crammed into one file with no navigation structure. | 1 / 3 |
| Total | | 5 / 12 — Passed |
Validation
90% — Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (821 lines); consider splitting into references/ and linking | Warning |
| Total | 10 / 11 Passed | |