
continual-learning

Orchestrate continual learning by delegating transcript mining and AGENTS.md updates to `agents-memory-updater`.


Quality: 33%. Does it follow best practices?

Impact: Pending. No eval scenarios have been run.

Security (by Snyk): Passed. No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./continual-learning/skills/continual-learning/SKILL.md

Quality

Discovery: 17%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is overly technical and internally focused, using jargon that wouldn't match natural user requests. It lacks an explicit 'Use when...' clause, and the concrete actions are described at a meta-orchestration level rather than as tangible outcomes. The mention of specific tools (agents-memory-updater, AGENTS.md) provides some distinctiveness but doesn't compensate for poor trigger coverage and missing usage guidance.

Suggestions

- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user wants to update agent memory, learn from conversation transcripts, or refresh AGENTS.md with new knowledge.'
- Replace abstract orchestration language with concrete actions, e.g., 'Extracts key learnings from conversation transcripts and updates AGENTS.md with new patterns, preferences, and instructions.'
- Include natural user-facing keywords like 'update memory', 'learn from conversations', 'save preferences', 'agent configuration' to improve trigger term coverage.
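Taken together, a revised description might look like the following sketch of SKILL.md frontmatter (the `name`/`description` field names follow the common skill-file convention; the wording is illustrative, not the skill's actual metadata):

```yaml
---
name: continual-learning
description: >
  Extracts key learnings from conversation transcripts and updates AGENTS.md
  with new patterns, preferences, and instructions. Use when the user wants
  to update agent memory, learn from conversations, save preferences, or
  refresh AGENTS.md with new knowledge.
---
```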

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names a domain (continual learning) and some actions (delegating transcript mining, AGENTS.md updates), but the actions are somewhat abstract: 'orchestrate' and 'delegating' are meta-level rather than concrete end-user-facing capabilities. | 2 / 3 |
| Completeness | Describes a rough 'what' (orchestrate learning via delegation) but has no 'Use when...' clause or equivalent explicit trigger guidance, which per the rubric caps completeness at 2, and the 'what' itself is vague enough to warrant a 1. | 1 / 3 |
| Trigger Term Quality | Uses technical/internal jargon like 'orchestrate continual learning', 'transcript mining', 'agents-memory-updater', and 'AGENTS.md', which are not terms a user would naturally say when requesting this functionality. | 1 / 3 |
| Distinctiveness / Conflict Risk | References specific artifacts like 'agents-memory-updater' and 'AGENTS.md', which provide some distinctiveness, but the broader framing of 'continual learning' and 'transcript mining' could overlap with other memory or knowledge management skills. | 2 / 3 |
| Total | | 6 / 12 |

Passed

Implementation: 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is admirably concise and well-structured as a thin orchestration layer, but it sacrifices actionability by providing almost no concrete detail on how to invoke the subagent — no tool call syntax, parameters, or expected response format. The guardrails are useful constraints, but the lack of error handling and missing navigable references to the subagent weaken the overall utility.

Suggestions

- Add a concrete example of how to invoke `agents-memory-updater` (e.g., the exact tool call or dispatch syntax with any required parameters).
- Include basic error handling: what to do if the subagent call fails or returns unexpected results.
- Add an explicit link/path to the `agents-memory-updater` skill definition so Claude can navigate to it (e.g., `See [agents-memory-updater/SKILL.md](../agents-memory-updater/SKILL.md)`).
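These suggestions could be folded into the SKILL.md body as a short workflow section. The dispatch syntax below is a hypothetical sketch (the real agents-memory-updater interface is not shown in this review), included only to illustrate the level of concreteness being asked for:

```markdown
## Workflow

1. Dispatch the subagent (hypothetical syntax; adapt to your agent runtime):
   `Task(subagent="agents-memory-updater", input={"transcript": <path>, "target": "AGENTS.md"})`
2. If the call fails or returns an unexpected result, report the error and
   leave AGENTS.md unchanged; retry at most once.
3. Otherwise, return the subagent's summary of the AGENTS.md updates.

See [agents-memory-updater/SKILL.md](../agents-memory-updater/SKILL.md) for
the subagent's full definition.
```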

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely lean; every line serves a purpose. No unnecessary explanation of what subagents are or how AGENTS.md works. Assumes Claude's competence throughout. | 3 / 3 |
| Actionability | The workflow is essentially 'call agents-memory-updater and return the result', with no concrete details on how to invoke the subagent (tool call syntax, parameters, expected input/output). This is vague direction rather than executable guidance. | 1 / 3 |
| Workflow Clarity | The two-step sequence is clear and the guardrails provide useful constraints, but there is no validation or error handling: what happens if the subagent fails? For an orchestration skill that delegates to a subagent, some feedback loop or success/failure check would be expected. | 2 / 3 |
| Progressive Disclosure | The skill references `agents-memory-updater` as a subagent but provides no link or path to its definition. With no bundle files provided, there's no way to navigate to the referenced subagent's details. A clear reference (e.g., 'See [agents-memory-updater/SKILL.md]') would improve discoverability. | 2 / 3 |
| Total | | 8 / 12 |

Passed

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 |

Passed
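The frontmatter_unknown_keys warning can usually be cleared by nesting any non-standard keys under `metadata`, as the check's description suggests. A sketch, where `category` stands in for a hypothetical offending key:

```yaml
---
name: continual-learning
description: Orchestrate continual learning by delegating transcript mining and AGENTS.md updates.
# Before: a top-level key the validator does not recognize
# category: memory
# After: moved under metadata, where arbitrary keys are tolerated
metadata:
  category: memory
---
```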

Repository: cursor/plugins (Reviewed)

