agent-safla-neural

Agent skill for safla-neural - invoke with $agent-safla-neural

Quality: 0% (Does it follow best practices?)

Impact: 100% (3.03x average score across 3 eval scenarios)

Security (by Snyk): Passed, no known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./.agents/skills/agent-safla-neural/SKILL.md

Quality

Discovery

0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an extremely weak description that provides virtually no useful information for skill selection. It only names the skill and its invocation command without describing any capabilities, use cases, or trigger conditions. It is essentially a label rather than a description.

Suggestions:

- Add concrete actions describing what safla-neural actually does (e.g., 'Performs neural network analysis', 'Processes sensor data').

- Add an explicit 'Use when...' clause with natural trigger terms that describe the situations or user requests that should activate this skill.

- Replace the invocation instruction ('invoke with $agent-safla-neural') with functional content: invocation details belong in usage documentation, not in the selection-oriented description field.
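
Applied together, the first two suggestions imply SKILL.md frontmatter along these lines. This is only a sketch: safla-neural's actual capabilities are undocumented, so every claim in the description below is a hypothetical placeholder illustrating the shape of a discoverable description, not a statement of what the skill does.

```yaml
# Illustrative frontmatter only: the capabilities named here are
# hypothetical placeholders, since the skill documents none.
name: agent-safla-neural
description: >-
  Stores, retrieves, and trains persistent neural memory patterns
  across agent sessions. Use when a task needs cross-session memory,
  feedback-driven learning loops, or recall of patterns from prior
  interactions.
```

A description in this shape gives the selecting agent both concrete actions and natural trigger terms, which is what the Specificity and Trigger Term Quality dimensions below are scoring.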

Dimension scores:

- Specificity: 1 / 3. The description contains no concrete actions whatsoever. 'Agent skill for safla-neural' is entirely vague and abstract, providing no information about what the skill actually does.

- Completeness: 1 / 3. The description fails to answer both 'what does this do' and 'when should Claude use it'. It only states it's an 'agent skill' and how to invoke it, with no functional or contextual information.

- Trigger Term Quality: 1 / 3. The only keyword is 'safla-neural', which is technical jargon that no user would naturally say when requesting a task. There are no natural language trigger terms present.

- Distinctiveness / Conflict Risk: 1 / 3. While 'safla-neural' is a unique name, the description 'Agent skill' is completely generic and provides no distinguishing information about the skill's domain or purpose, making it impossible to differentiate from other agent skills.

Total: 4 / 12


Implementation

0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is essentially a persona description masquerading as an actionable skill. It lists impressive-sounding capabilities and architectural concepts but provides zero concrete, executable guidance on how to accomplish any task. The code examples are non-functional pseudocode with undefined variables, and the entire content reads like a marketing brochure rather than a technical instruction set.

Suggestions:

- Replace the capability bullet list and memory architecture description with concrete, executable MCP tool invocations showing actual workflows (e.g., 'To create a persistent memory pattern: 1. Initialize with [exact command], 2. Store with [exact command], 3. Verify with [exact command]').

- Remove all descriptive and marketing content (performance claims, abstract architecture diagrams) and replace it with copy-paste-ready code examples using proper MCP tool call syntax with realistic parameter values.

- Add a clear multi-step workflow with validation checkpoints for at least one primary use case (e.g., setting up a feedback loop, training a neural pattern, or managing cross-session memory).

- Define what this skill should actually do when invoked: what inputs does it expect, what outputs does it produce, and what are the concrete steps to get from input to output?
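
The first and third suggestions can be combined into a workflow section like the sketch below. Every tool name and parameter is a hypothetical placeholder; the skill does not document its real MCP tool surface, so this only illustrates the expected structure (numbered steps, exact calls, a validation checkpoint):

```markdown
## Workflow: persist a memory pattern (illustrative)

All tool names below are placeholders, not confirmed claude-flow tools.

1. Initialize a namespace:
   `memory_init { "namespace": "session-patterns" }`
2. Store the pattern:
   `memory_store { "namespace": "session-patterns",
                   "key": "feedback-loop-1",
                   "value": "<serialized pattern>" }`
3. Verify the write before relying on it:
   `memory_retrieve { "namespace": "session-patterns",
                      "key": "feedback-loop-1" }`
   If retrieval returns nothing, retry step 2 once, then report the failure.
```

Even with placeholder names, this structure answers the questions the review raises: what to call, in what order, and how to validate each step.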

Dimension scores:

- Conciseness: 1 / 3. Extremely verbose, with extensive explanations of concepts Claude already knows (memory tiers, semantic understanding, episodic memory). The bullet-pointed capability list reads like marketing copy with specific but unsubstantiated performance claims (172,000+ ops/sec, 60% compression). Most of the content describes rather than instructs.

- Actionability: 1 / 3. The MCP integration examples use non-standard JavaScript-like syntax that isn't executable (no proper function call syntax; undefined variables such as `interaction_context` and `result_metrics`). The four-tier memory model is purely descriptive, with no concrete implementation steps. There is no guidance on what to actually do when invoked.

- Workflow Clarity: 1 / 3. There is no workflow, sequence, or process defined. The skill describes what the agent supposedly is but never explains how to accomplish any task. No validation steps, no error handling, no feedback loops, despite claiming 'Feedback Loop Engineering' as a core capability.

- Progressive Disclosure: 1 / 3. A monolithic wall of text with no references to external files, no clear navigation structure, and no separation between overview and detailed content. The content that exists is all inline, with no logical progression from simple to advanced usage.

Total: 4 / 12


Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 checks passed

Validation for skill structure: no warnings or errors.

Repository: ruvnet/claude-flow (reviewed)
