
agent-automation-smart-agent

Agent skill for automation-smart-agent - invoke with $agent-automation-smart-agent

Quality: 0%. Does it follow best practices?

Impact: 99% (1.07x). Average score across 3 eval scenarios.

Security (by Snyk): Passed. No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./.agents/skills/agent-automation-smart-agent/SKILL.md

Quality

Discovery

0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an extremely weak description that provides essentially no useful information for skill selection. It reads as a placeholder or auto-generated stub, containing only the skill's internal name and invocation command. It fails on every dimension: no concrete actions, no natural trigger terms, no 'what' or 'when' guidance, and no distinctive characteristics.

Suggestions

Replace the entire description with concrete actions the skill performs, e.g., 'Automates [specific tasks] by [specific methods]' instead of the generic 'Agent skill for automation-smart-agent'.

Add an explicit 'Use when...' clause with natural trigger terms that describe the scenarios and user requests that should activate this skill.

Remove the invocation command from the description (it belongs elsewhere) and instead use that space to describe the skill's unique capabilities and domain to distinguish it from other automation-related skills.
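Putting these suggestions together, a rewritten SKILL.md frontmatter might look like the sketch below. The description text is purely illustrative (the reviewers have no visibility into what the skill actually does beyond its name); only the `name` and `description` fields themselves are standard SKILL.md frontmatter.

```yaml
---
name: automation-smart-agent
description: >
  Spawns and coordinates specialized sub-agents for multi-step
  automation tasks. Use when a request involves orchestrating
  several agents, routing subtasks by capability, or scaling an
  agent pool up or down.
---
```

Note how the rewrite leads with concrete actions ("spawns and coordinates"), includes an explicit "Use when" clause with natural trigger terms, and drops the invocation command entirely.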

Dimension / Reasoning / Score

Specificity

The description contains no concrete actions whatsoever. 'Agent skill for automation-smart-agent' is entirely vague and abstract, providing no information about what the skill actually does.

1 / 3

Completeness

The description fails to answer both 'what does this do' and 'when should Claude use it'. It only provides an invocation command ('$agent-automation-smart-agent') with no explanation of capabilities or usage triggers.

1 / 3

Trigger Term Quality

There are no natural keywords a user would say. 'automation-smart-agent' is a technical internal name, not a term users would naturally use in requests. No domain-specific or action-oriented trigger terms are present.

1 / 3

Distinctiveness Conflict Risk

The term 'automation' is extremely generic and could overlap with virtually any automation-related skill. There is nothing distinctive about this description that would help Claude differentiate it from other skills.

1 / 3

Total: 4 / 12


Implementation

0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads as a high-level product specification or marketing document rather than an actionable skill for Claude. It is entirely abstract—describing what an intelligent agent coordinator would do conceptually without providing any concrete commands, APIs, code, or step-by-step instructions. The content is extremely verbose, explaining concepts Claude already understands (like what task classification or regression models are) while failing to provide any executable guidance.

Suggestions

Replace all pseudocode diagrams with actual executable commands or code that Claude can use to spawn, manage, and coordinate agents in the specific system this skill targets.

Define a concrete step-by-step workflow with validation checkpoints: e.g., 1) Analyze task → 2) Select agent type → 3) Spawn with specific command → 4) Verify agent is running → 5) Assign task.

Remove all conceptual explanations (what ML classification is, what predictive spawning means) and replace with specific tool invocations, command syntax, and concrete examples with expected outputs.

Cut the document by at least 60%—eliminate sections like 'Machine Learning Integration', 'Multi-Objective Optimization', and 'Best Practices' that provide no actionable guidance, or replace them with concrete implementation details.
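The suggested five-step workflow can be sketched in code. Every name below (`classify_task`, `spawn_agent`, and so on) is a hypothetical placeholder, not part of any real claude-flow API; a real skill would replace these stubs with the target system's actual commands.

```python
# Illustrative sketch of the suggested workflow with a validation
# checkpoint. All function names here are hypothetical placeholders.

def classify_task(task: str) -> str:
    """Steps 1-2: analyze the task and select an agent type (toy heuristic)."""
    if "test" in task.lower():
        return "tester"
    if "review" in task.lower():
        return "reviewer"
    return "coder"

def spawn_agent(agent_type: str) -> dict:
    """Step 3: spawn the agent; a real skill would invoke a concrete command."""
    return {"type": agent_type, "status": "running"}

def verify_agent(agent: dict) -> bool:
    """Step 4: validation checkpoint, confirming the agent actually started."""
    return agent.get("status") == "running"

def assign_task(agent: dict, task: str) -> dict:
    """Step 5: hand the task to the verified agent."""
    agent["task"] = task
    return agent

def run_workflow(task: str) -> dict:
    agent_type = classify_task(task)     # Steps 1-2
    agent = spawn_agent(agent_type)      # Step 3
    if not verify_agent(agent):          # Step 4: fail fast if spawn failed
        raise RuntimeError(f"agent {agent_type!r} failed to start")
    return assign_task(agent, task)      # Step 5
```

The point of the sketch is the shape, not the stubs: each step is an explicit call, and the checkpoint between spawning and assignment is what the reviewed skill currently lacks.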

Dimension / Reasoning / Score

Conciseness

Extremely verbose and padded with conceptual descriptions Claude already understands. Sections like 'Intelligence Features', 'Machine Learning Integration', and 'Best Practices' describe abstract concepts without providing any executable or actionable content. The entire document reads like a product brochure rather than a skill instruction.

1 / 3

Actionability

No concrete, executable code or commands anywhere. All code blocks are pseudocode or abstract diagrams (e.g., 'Task Requirements → Capability Analysis → Agent Selection'). The 'Usage Examples' section describes what should happen in natural language but never shows how to actually do it. There are no real commands, APIs, or copy-paste-ready instructions.

1 / 3

Workflow Clarity

No clear step-by-step workflow exists. The document describes conceptual processes (task analysis, capability matching, dynamic creation) but never sequences them into actionable steps with validation checkpoints. For an automation skill involving agent spawning and resource management, the complete absence of validation or verification steps is a critical gap.

1 / 3

Progressive Disclosure

Monolithic wall of text with no references to external files. All content is inline despite being over 150 lines, much of which is abstract description that could be separated. No navigation aids, no links to detailed references, and the flat structure makes it hard to find actionable information.

1 / 3

Total: 4 / 12


Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: ruvnet/claude-flow (Reviewed)

