Agent skill for sona-learning-optimizer - invoke with $agent-sona-learning-optimizer
Quality: 3%. Does it follow best practices?
Impact: Pending. No eval scenarios have been run.
Validation: Passed. No known issues.

Optimize this skill with Tessl:

npx tessl skill review --optimize ./.agents/skills/agent-sona-learning-optimizer/SKILL.md

Quality
Discovery: 0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is essentially non-functional. It provides only the skill's internal name and invocation command, with zero information about capabilities, domain, use cases, or trigger conditions. Claude would have no basis for selecting this skill appropriately from a list of available skills.
Suggestions

- Add a clear statement of what the skill does, with specific concrete actions (e.g., 'Optimizes learning schedules, recommends study strategies, tracks knowledge retention').
- Add an explicit 'Use when...' clause with natural trigger terms users would say (e.g., 'Use when the user asks about study plans, learning optimization, spaced repetition, or improving retention').
- Remove the invocation syntax from the description field — it adds no selection value — and replace it with domain-specific language that distinguishes this skill from other agent skills.
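Taken together, the suggestions point toward a description along these lines. This is a sketch only: the capability claims below are assumptions inferred from the skill's name, not verified against its actual behavior.

```yaml
# SKILL.md frontmatter sketch. The listed capabilities are assumptions
# inferred from the skill name, not confirmed features.
name: agent-sona-learning-optimizer
description: >
  Optimizes agent learning with the SONA continual-learning pipeline:
  retrieves relevant patterns before a task and records outcomes after it.
  Use when the user asks about learning optimization, pattern retrieval,
  continual learning, or improving agent performance across repeated tasks.
```

Note how the rewritten description answers both questions the rubric asks: what the skill does (first sentence) and when to select it (the 'Use when...' clause).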
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description contains no concrete actions whatsoever. It only names itself ('sona-learning-optimizer') and provides an invocation command, with no indication of what the skill actually does. | 1 / 3 |
| Completeness | Neither 'what does this do' nor 'when should Claude use it' is answered. The description only provides an invocation syntax, which is not a substitute for either. | 1 / 3 |
| Trigger Term Quality | There are no natural keywords a user would say. 'sona-learning-optimizer' is an internal tool name, not a term users would naturally use in requests. No domain-relevant trigger terms are present. | 1 / 3 |
| Distinctiveness / Conflict Risk | The description is so vague that it provides no distinguishing information. Without knowing what the skill does, it cannot be reliably differentiated from any other skill. | 1 / 3 |
| Total | | 4 / 12 (Passed) |
Implementation: 7%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads like a marketing brochure or README rather than an actionable skill for Claude. It is dominated by performance claims, benchmark numbers, and capability descriptions without providing concrete instructions on how to actually use the SONA learning optimizer. The only actionable content is two bash commands for hooks, which lack context, expected outputs, and error handling.
Suggestions

- Replace the capability descriptions and benchmark statistics with a concrete step-by-step workflow showing how to initialize SONA, execute a task with learning, and verify the outcome.
- Add executable code examples showing the full lifecycle: pattern retrieval, task execution with hooks, and how to inspect and validate learning outcomes.
- Remove marketing-style claims and performance numbers that Claude cannot verify or act upon, and instead focus on specific commands, expected outputs, and error handling.
- Define a clear validation step after post-task hooks to confirm learning was recorded successfully, including what to do if it fails.
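As a sketch of the workflow these suggestions describe, the skill could wrap each hook call so a failure is surfaced instead of passing silently. The commands below are placeholders (the review does not show the real SONA hook invocations), so treat this as a pattern, not the actual CLI:

```shell
#!/bin/sh
# Pattern sketch: wrap each hook so a failure is reported, not ignored.
# The "echo ..." commands stand in for the real SONA pre/post-task hooks.

run_hook() {
  # Run a hook command; on failure, log it and return nonzero.
  "$@" && return 0
  echo "hook failed: $*" >&2
  return 1
}

# Pre-task: retrieve prior patterns (placeholder command).
run_hook echo "pre-task: pattern retrieval" || exit 1

# ...execute the actual task here...

# Post-task: record the outcome, then validate that it was recorded.
if run_hook echo "post-task: outcome recorded"; then
  echo "learning recorded"
else
  echo "learning NOT recorded; retry the hook or surface the error" >&2
  exit 1
fi
```

The key point is the explicit branch after the post-task hook: the skill's instructions should tell the agent what success looks like and what to do when recording fails, rather than assuming the hook always succeeds.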
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is heavily padded with marketing-style claims ('+55% quality improvement', '761 decisions/sec', '99% parameter reduction') and explanations of concepts Claude already knows (what LoRA is, what EWC does). Much of the content describes capabilities rather than instructing on how to use them. The benchmark statistics and domain improvement percentages add little actionable value. | 1 / 3 |
| Actionability | The skill provides almost no executable guidance. The only concrete commands are two bash hook invocations, but there is no context on when or how to use them, what the output looks like, or how to integrate them into a workflow. The rest is descriptive marketing copy about capabilities and performance numbers rather than instructions Claude can act on. | 1 / 3 |
| Workflow Clarity | There is no clear workflow or sequenced process. The pre-task and post-task hooks hint at a workflow but lack any sequencing, validation, error handling, or explanation of what happens between them. For an agent that claims to do fine-tuning and continual learning, there is no guidance on the actual steps involved. | 1 / 3 |
| Progressive Disclosure | There is a reference to an integration guide (docs/RUVECTOR_SONA_INTEGRATION.md) and a package reference, which is appropriate one-level-deep disclosure. However, the main content is poorly organized, with capability descriptions that do not belong in the skill body, and it is unclear what the referenced documents actually contain. | 2 / 3 |
| Total | | 5 / 12 (Passed) |
Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

11 / 11 validation checks for skill structure passed. No warnings or errors.