
agent-automation-smart-agent

Agent skill for automation-smart-agent - invoke with $agent-automation-smart-agent

35

Quality: 0% (Does it follow best practices?)

Impact: 99% (1.07x); average score across 3 eval scenarios

Security by Snyk: Passed. No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./.agents/skills/agent-automation-smart-agent/SKILL.md

Quality

Discovery

0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an extremely weak description that provides essentially no useful information for skill selection. It only contains the skill's internal name and invocation command, with no explanation of capabilities, use cases, or trigger conditions. It would be nearly impossible for Claude to correctly select this skill from a pool of available skills.

Suggestions

Add a clear explanation of what the skill does by listing specific concrete actions (e.g., 'Automates browser interactions, fills web forms, scrapes data from websites' or whatever the actual capabilities are).

Add an explicit 'Use when...' clause with natural trigger terms that users would actually say when they need this skill.

Replace the generic 'automation-smart-agent' label with a meaningful description of the skill's domain and niche to distinguish it from other automation-related skills.
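Taken together, the suggestions above might produce frontmatter like the following SKILL.md sketch. The capabilities listed are hypothetical placeholders, since the skill's actual behavior is not documented in this review:

```markdown
---
# Hypothetical frontmatter; the capabilities below are illustrative
# placeholders, not the skill's confirmed behavior.
name: automation-smart-agent
description: >
  Spawns and coordinates specialized sub-agents for multi-step
  automation tasks: analyzes task requirements, matches them to agent
  capabilities, and monitors execution. Use when a request involves
  orchestrating several agents, delegating subtasks, or running a
  multi-step workflow automatically.
---
```

A description in this shape answers both "what does this do" and "when should it be used", and its trigger terms (orchestrate, delegate, multi-step workflow) are phrases a user might actually say.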

Specificity (1 / 3)

The description contains no concrete actions whatsoever. 'Agent skill for automation-smart-agent' is entirely vague and abstract, providing no information about what the skill actually does.

Completeness (1 / 3)

Neither 'what does this do' nor 'when should Claude use it' is answered. The description only states the invocation command, with no explanation of capabilities or trigger conditions.

Trigger Term Quality (1 / 3)

There are no natural keywords a user would say. 'automation-smart-agent' is technical jargon (an internal identifier), not something a user would naturally mention in a request.

Distinctiveness / Conflict Risk (1 / 3)

The term 'automation' is extremely generic and could overlap with virtually any automation-related skill. There is nothing distinctive about this description to differentiate it from other skills.

Total: 4 / 12

Implementation

0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is entirely conceptual and aspirational, reading like a feature specification or marketing document rather than an actionable skill file. It describes what an intelligent agent coordinator would do in theory but provides zero concrete, executable guidance for Claude to follow. Every section uses abstract descriptions, pseudocode diagrams, and vague bullet points instead of real commands, APIs, or step-by-step instructions.

Suggestions

Replace all pseudocode blocks with actual executable commands or code that Claude can run — e.g., show the exact syntax for spawning an agent, the actual memory_store/memory_retrieve API calls, and real configuration examples.

Define a concrete step-by-step workflow: 1) Analyze task input, 2) Determine required capabilities (with specific matching logic), 3) Spawn agents (with exact commands), 4) Validate spawning succeeded, 5) Monitor and adjust — with explicit validation checkpoints.

Remove all conceptual explanations Claude already knows (ML model descriptions, generic best practices like 'Start Conservative', abstract diagrams) and replace with specific, novel instructions unique to this system.

Either add bundle files for advanced topics (ML integration, scaling strategies) and reference them from a concise overview, or dramatically reduce the content to only what's actionable in a single file.
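As a sketch of what the suggested step-by-step workflow could look like inside SKILL.md. The `npx claude-flow` invocation below is a placeholder, not a verified command from the repository:

```markdown
## Workflow

1. Analyze the task: list the concrete capabilities it requires
   (e.g. web scraping, file transforms, API calls).
2. Spawn one agent per required capability.
   <!-- Placeholder; substitute the project's real spawn syntax: -->
   `npx claude-flow agent spawn --type <capability>`
3. Validate spawning: confirm each agent reports ready before
   delegating work; if one fails, retry once, then surface the error.
4. Monitor execution and collect results, re-delegating any subtask
   that fails a validation checkpoint.
```

Each numbered step pairs an action with an explicit validation or recovery rule, which is what distinguishes an executable workflow from the conceptual diagrams the review criticizes.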

Conciseness (1 / 3)

Extremely verbose and padded with abstract concepts Claude already understands. Sections like 'Intelligence Features', 'Machine Learning Integration', and 'Best Practices' describe high-level concepts without providing any actionable, novel information. The entire document reads like a product brochure rather than skill instructions.

Actionability (1 / 3)

No executable code or concrete commands anywhere. All code blocks are pseudocode or abstract diagrams (e.g., 'Task Requirements → Capability Analysis → Agent Selection'). The 'Usage Examples' section describes what should happen in natural language but never shows how to actually do it. There are no real commands, APIs, or copy-paste-ready instructions.

Workflow Clarity (1 / 3)

No clear step-by-step workflow exists. The document describes conceptual processes (task analysis, capability matching, dynamic creation) but never sequences them into actionable steps with validation checkpoints. There are no feedback loops or error recovery procedures despite describing complex multi-agent coordination.

Progressive Disclosure (1 / 3)

Monolithic wall of text with no references to external files and no bundle files to support it. All content is inline with no clear hierarchy or navigation structure. The document is over 150 lines of abstract description that could be dramatically restructured.

Total: 4 / 12

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 checks passed

Validation for skill structure: no warnings or errors.

Repository: ruvnet/claude-flow (reviewed)
