
agent-sona-learning-optimizer

Agent skill for sona-learning-optimizer - invoke with $agent-sona-learning-optimizer


Quality: 3% (Does it follow best practices?)

Impact: No eval scenarios have been run.

Security (by Snyk): Passed. No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./.agents/skills/agent-sona-learning-optimizer/SKILL.md

Quality

Discovery

0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is essentially a placeholder that provides no useful information about the skill's purpose, capabilities, or appropriate usage context. It only states the invocation command, which is insufficient for Claude to make any informed selection decision. This is among the weakest possible descriptions.

Suggestions

Add a clear statement of what the skill does with concrete actions (e.g., 'Optimizes learning schedules, recommends study strategies, tracks knowledge retention').

Add an explicit 'Use when...' clause with natural trigger terms users would say (e.g., 'Use when the user asks about study plans, learning optimization, spaced repetition, or improving retention').

Remove the invocation syntax from the description field—it adds no value for skill selection—and replace it with domain-specific language that distinguishes this skill from others.
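Putting the suggestions above together, an improved SKILL.md frontmatter might read as follows. The wording is illustrative only, built from the example phrases in the suggestions, and assumes the usual name/description frontmatter fields:

```markdown
---
name: agent-sona-learning-optimizer
description: >
  Optimizes learning schedules, recommends study strategies, and tracks
  knowledge retention. Use when the user asks about study plans, learning
  optimization, spaced repetition, or improving retention.
---
```

Note that the invocation syntax is gone entirely; the description now carries the concrete actions and the 'Use when...' trigger terms that selection depends on.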

Dimension scores:

Specificity: 1 / 3
The description contains no concrete actions whatsoever. It only names itself ('sona-learning-optimizer') and provides an invocation command, with no indication of what the skill actually does.

Completeness: 1 / 3
Neither 'what does this do' nor 'when should Claude use it' is answered. The description only provides an invocation syntax, which is not a substitute for either component.

Trigger Term Quality: 1 / 3
There are no natural keywords a user would say. 'sona-learning-optimizer' is an internal tool name, not a term users would naturally use in requests. No domain-relevant trigger terms are present.

Distinctiveness / Conflict Risk: 1 / 3
The description is so vague that it provides no distinguishing information. Without knowing what the skill does, it cannot be reliably differentiated from any other skill.

Total: 4 / 12

Passed

Implementation

7%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads like a marketing document or product README rather than an actionable skill for Claude. It is dominated by benchmark statistics, capability claims, and performance characteristics but provides almost no concrete instructions on how to actually perform tasks. The only executable content is two bash hook commands with minimal context.

Suggestions

Replace the capability descriptions and benchmark tables with a concrete step-by-step workflow showing how Claude should use SONA for a task (e.g., initialize -> execute -> learn -> validate).

Add executable code examples for the core operations (pattern retrieval, LoRA fine-tuning invocation, LLM routing) instead of just describing their existence.

Include validation checkpoints and error handling guidance, especially for the pre-task/post-task hooks (what happens if hooks fail, how to verify learning was recorded).

Remove marketing-style metrics and claims (e.g., '+55% quality improvement', '761 decisions/sec') that don't help Claude execute tasks, or move them to a separate reference file.
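The workflow and error-handling suggestions above can be sketched as a short script. `pre_task` and `post_task` below are hypothetical stand-ins for the skill's actual hook commands (which the skill itself would have to supply); the point is the sequencing and the failure checks, not the specific invocations:

```shell
#!/bin/sh
# Sketch of a hook-driven task flow (initialize -> execute -> learn -> validate).
# `pre_task` and `post_task` are hypothetical stand-ins for the skill's real
# hook commands; replace them with the actual invocations from SKILL.md.

pre_task()  { echo "pre-task: context loaded"; }
post_task() { echo "post-task: learning recorded"; }

run_task() {
    # Initialize: abort early if the pre-task hook cannot load context.
    if ! pre_task; then
        echo "pre-task hook failed; aborting" >&2
        return 1
    fi

    # Execute: the actual work goes here.
    echo "task: done"

    # Learn + validate: a failed post-task hook means nothing was recorded,
    # so surface that instead of continuing silently.
    if ! post_task; then
        echo "post-task hook failed; learning was not recorded" >&2
        return 1
    fi
}

run_task
```

Even this minimal shape answers the questions the review raises: what runs first, what happens when a hook fails, and how to tell whether learning was actually recorded.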

Dimension scores:

Conciseness: 1 / 3
The content is padded with marketing-style claims (+55% quality improvement, 761 decisions/sec, 2211 ops/sec) and benchmark statistics that don't help Claude execute any task. It explains what SONA is and its capabilities at length without providing actionable instructions. Much of the content reads like a product README rather than a skill.

Actionability: 1 / 3
The only concrete commands are the two hook invocations (pre-task and post-task), which are minimal and lack context on when/how to use them. There are no executable code examples, no workflows for performing LoRA fine-tuning, pattern discovery, or LLM routing. The skill describes capabilities rather than instructing Claude on what to do.

Workflow Clarity: 1 / 3
There is no clear multi-step workflow, no sequencing of operations, and no validation checkpoints. The two hook commands are presented in isolation without explaining how they fit into a task execution flow, what to do with outputs, or how to handle errors.

Progressive Disclosure: 2 / 3
There is a reference to 'docs/RUVECTOR_SONA_INTEGRATION.md' and a package reference, which shows some attempt at progressive disclosure. However, no bundle files exist to support these references, and the main content is a monolithic listing of capabilities and benchmarks rather than a well-structured overview pointing to detailed materials.

Total: 5 / 12

Passed

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: ruvnet/claude-flow (Reviewed)

