
agent-safla-neural

Agent skill for safla-neural - invoke with $agent-safla-neural


Quality: 0% - Does it follow best practices?

Impact: 100% (3.03x) - Average score across 3 eval scenarios

Security by Snyk: Passed - No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./.agents/skills/agent-safla-neural/SKILL.md

Quality

Discovery: 0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an extremely weak description that fails on every dimension. It provides only a name and invocation command with zero information about what the skill does, when to use it, or what keywords should trigger its selection. It is essentially a placeholder rather than a functional description.

Suggestions

Add concrete actions describing what safla-neural actually does (e.g., 'Performs neural network analysis...', 'Processes embeddings...', etc.)

Add an explicit 'Use when...' clause with natural trigger terms that users would say when they need this skill's capabilities

Replace the generic 'Agent skill for' prefix with a specific capability summary that distinguishes this skill from others
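For illustration, a description following these suggestions might look like the sketch below. The capabilities named are invented placeholders, since the skill's actual behavior is not documented here; only the structure (concrete actions, a 'Use when...' clause, natural trigger terms) is the point.

```yaml
# Hypothetical SKILL.md frontmatter -- the capabilities listed are
# placeholders illustrating the suggested structure, not verified features.
name: agent-safla-neural
description: >-
  Stores interaction context in persistent neural memory, retrieves
  similar past episodes, and reports learning metrics. Use when the
  user asks to "remember this", "recall past sessions", or "analyze
  learning progress".
```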

Specificity (1 / 3): The description contains no concrete actions whatsoever. 'Agent skill for safla-neural' is entirely vague; it doesn't describe what the skill does, only that it exists and how to invoke it.

Completeness (1 / 3): Neither 'what does this do' nor 'when should Claude use it' is answered. The description only provides an invocation command, with no explanation of functionality or usage triggers.

Trigger Term Quality (1 / 3): The only keyword is 'safla-neural', which is a technical/internal name unlikely to be used naturally by users. There are no natural-language trigger terms that a user would say when needing this skill.

Distinctiveness / Conflict Risk (1 / 3): The description is so generic ('Agent skill for...') that it provides no distinguishing information. Without knowing what the skill does, it's impossible to differentiate it from any other agent skill.

Total: 4 / 12


Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads as a marketing-style persona description rather than actionable technical guidance. It makes numerous impressive-sounding claims (172K ops/sec, 60% compression, quantum neural patterns) without any substantiation or executable instructions. The code examples use non-standard syntax with undefined variables, and the entire content describes what the agent supposedly is rather than what it should do or how to do it.

Suggestions

Replace the capability bullet list and four-tier memory description with concrete, executable code examples showing actual MCP tool invocations with proper syntax and realistic parameters.

Define a clear workflow: when this skill is invoked, what specific steps should be taken? Include a sequenced process with validation checkpoints (e.g., verify memory store succeeded before proceeding).

Remove unsubstantiated performance claims (172K ops/sec, 60% compression) and marketing language; focus on specific, actionable instructions Claude can follow.

Add references to external files for complex subtopics (safety frameworks, swarm coordination, memory compression strategies) rather than listing them as bullet points with no detail.
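As a sketch of the kind of sequenced, validated workflow these suggestions call for, the snippet below shows a store-then-verify checkpoint. The `MemoryClient` interface is a hypothetical stand-in for whatever MCP tool wrapper the skill actually exposes; it is not a real claude-flow API.

```typescript
// Hypothetical MemoryClient: stands in for the skill's real MCP memory tools.
interface MemoryClient {
  store(key: string, value: string): Promise<boolean>;
  retrieve(key: string): Promise<string | undefined>;
}

// Step 1: store the value. Step 2: validation checkpoint -- read the value
// back and confirm it round-trips before the workflow proceeds.
async function storeWithValidation(
  client: MemoryClient,
  key: string,
  value: string,
): Promise<void> {
  const ok = await client.store(key, value);
  if (!ok) throw new Error(`memory store failed for key "${key}"`);
  const echoed = await client.retrieve(key);
  if (echoed !== value) throw new Error(`verification failed for key "${key}"`);
}
```

The same shape (act, then verify, then proceed) applies to the other operations the skill describes, such as neural training runs or memory compression.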

Conciseness (1 / 3): Extremely verbose, with extensive explanations of concepts Claude already knows (memory tiers, semantic understanding, episodic memory). The bullet-pointed capability list reads like marketing copy with specific but unsubstantiated performance claims (172,000+ ops/sec, 60% compression). Most of the content describes rather than instructs.

Actionability (1 / 3): The MCP integration examples use non-standard JavaScript-like syntax that isn't executable (no function-call syntax, no variable definitions for timestamp/interaction_context/result_metrics). The four-tier memory model is purely descriptive, with no concrete implementation steps. There's no guidance on what to actually do when invoked.

Workflow Clarity (1 / 3): There is no workflow, sequence, or process defined. The skill describes capabilities and shows two disconnected code snippets with no sequencing, validation, or error handling. For a system involving neural training and memory persistence, the complete absence of validation checkpoints is a critical gap.

Progressive Disclosure (1 / 3): The content is a monolithic block mixing persona definition, capability lists, architecture descriptions, and code examples, with no clear structure for navigation. There are no references to external files for detailed topics like safety frameworks, memory compression, or swarm coordination, which are mentioned but never elaborated.

Total: 4 / 12


Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: ruvnet/claude-flow (Reviewed)
