
common-session-retrospective

Analyze conversation corrections to detect skill gaps and auto-improve the skills library. Use after any session with user corrections, rework, or retrospective requests. After finding correction loops, also load +common/common-learning-log to persist mistake entries to AGENTS_LEARNING.md.

88

Quality: 87% (Does it follow best practices?)

Impact: Pending (no eval scenarios have been run)

Security, by Snyk: Passed (no known issues)


Quality

Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong description that clearly communicates both what the skill does and when to use it, with explicit trigger conditions. The trigger terms are natural and well-chosen for the meta-learning domain. The main weakness is that the specific actions (especially 'auto-improve the skills library') could be more concrete about what improvements look like.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (conversation corrections, skill gaps) and some actions (analyze, detect, auto-improve, persist mistake entries), but the actions are somewhat abstract: 'auto-improve the skills library' is vague about what concrete steps are taken. | 2 / 3 |
| Completeness | Clearly answers both what ('Analyze conversation corrections to detect skill gaps and auto-improve the skills library') and when ('Use after any session with user corrections, rework, or retrospective requests'), with explicit trigger guidance and even a follow-up action instruction. | 3 / 3 |
| Trigger Term Quality | Includes natural trigger terms users or the system would use: 'corrections', 'rework', 'retrospective', 'skill gaps', 'correction loops', 'mistake entries'. These cover the likely vocabulary around post-session improvement workflows. | 3 / 3 |
| Distinctiveness / Conflict Risk | This is a highly specific niche: analyzing corrections to improve a skills library and persisting to AGENTS_LEARNING.md. It is unlikely to conflict with other skills due to its unique focus on meta-learning and self-improvement workflows. | 3 / 3 |
| Total | | 11 / 12 |

Passed

Implementation: 85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-crafted skill that efficiently communicates a multi-step analytical protocol. Its strengths are excellent progressive disclosure, lean writing, and a clear workflow. The main weakness is actionability: the skill would benefit from at least one concrete inline example of correction-signal detection and the resulting proposal, rather than deferring all examples to the reference file.

Suggestions:

- Add one brief inline example showing a concrete correction loop detection and its resulting proposal (e.g., 'User corrected import path twice → Skill Missing → Create new skill for X')
- Include a minimal example of the Report output format (correction count, skills changed, etc.) so Claude knows the expected output shape without needing to load methodology.md
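The first suggestion can be sketched in code. This is a hypothetical illustration only: the names (`CorrectionSignal`, `classify`) and the simplified decision logic are not from the skill itself, whose real signal tables and taxonomy live in references/methodology.md.

```python
# Hypothetical sketch of the Extract → Classify steps; names and logic are
# illustrative, not the skill's actual taxonomy.
from dataclasses import dataclass

@dataclass
class CorrectionSignal:
    topic: str          # what the user corrected, e.g. "import path"
    repeats: int        # how many times the same correction recurred
    skill_exists: bool  # does any relevant skill exist in the library?
    skill_loaded: bool  # was that skill actually loaded during the session?

def classify(sig: CorrectionSignal) -> str:
    """Map a correction loop to a gap type (simplified, assumed taxonomy)."""
    if not sig.skill_exists:
        return "Skill Missing"    # e.g. import path corrected twice, nothing covers it
    if not sig.skill_loaded:
        return "Trigger Miss"     # a relevant skill existed but was never discovered
    return "Skill Incomplete"     # skill was loaded, yet the correction still recurred

# The inline example the reviewer suggests:
sig = CorrectionSignal(topic="import path", repeats=2, skill_exists=False, skill_loaded=False)
print(classify(sig))  # → Skill Missing
```

A real implementation would draw the gap types and detection rules from the signal tables in methodology.md rather than hard-coding them.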

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Every line serves a purpose. No unnecessary explanations of what retrospectives are or how corrections work. The content assumes Claude understands concepts like correction loops, lint rework, and skill gaps, and jumps straight into the protocol. | 3 / 3 |
| Actionability | The protocol steps are clear and specific (Extract, Classify, Propose, Implement, Log, Report), and guidelines like 'Cite specifics' and 'One fix per loop' are concrete. However, there are no executable code examples or copy-paste-ready commands or templates; the actual signal tables, taxonomy, and report template are deferred to methodology.md, so the skill itself lacks fully concrete examples of what a correction signal looks like or what a proposal output should contain. | 2 / 3 |
| Workflow Clarity | The 7-step protocol is clearly sequenced with logical dependencies (Extract → Classify → Trigger Miss Check → Propose → Implement → Log → Report). Step 3 includes an explicit validation question ('Was a relevant skill available but not loaded?'), and step 6 includes a cross-reference to another protocol for logging. The workflow is well-defined for a non-destructive analytical process. | 3 / 3 |
| Progressive Disclosure | Excellent structure: the file tree is shown upfront, the SKILL.md contains the concise protocol and guidelines, and detailed content (signal tables, taxonomy, report template, trigger miss schema) is clearly referenced via one-level-deep links to references/methodology.md with specific anchors. | 3 / 3 |
| Total | | 11 / 12 |

Passed
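The 7-step protocol above can be sketched end to end. Everything here is an assumption for illustration: the function name, the proposal wording, and the report shape are guesses that merely match the reviewer's suggestion (correction count, skills changed), not a documented schema from the skill.

```python
# Hypothetical sketch of the Extract → Classify → Trigger Miss Check → Propose
# → Implement → Log → Report pipeline; all names and the report shape are assumed.
def run_retrospective(classified_loops):
    """classified_loops: list of (topic, gap_type) pairs, one per correction loop."""
    proposals = []
    for topic, gap in classified_loops:
        if gap == "Skill Missing":
            proposals.append(f"Create new skill covering {topic}")
        elif gap == "Trigger Miss":
            proposals.append(f"Sharpen trigger terms for the skill covering {topic}")
        else:
            proposals.append(f"Amend existing skill guidance for {topic}")
    # Report step: one fix per loop, per the skill's guideline.
    return {
        "correction_count": len(classified_loops),
        "skills_changed": len(proposals),
        "proposals": proposals,
    }

report = run_retrospective([
    ("import path", "Skill Missing"),
    ("lint config", "Trigger Miss"),
])
print(report["correction_count"])  # → 2
```

In the actual skill, the Implement and Log steps would edit the skills library and append entries to AGENTS_LEARNING.md via +common/common-learning-log; this sketch only shapes the final report.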

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

9 / 11 checks passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| metadata_version | 'metadata.version' is missing | Warning |
| metadata_field | 'metadata' should map string keys to string values | Warning |
| Total | | 9 / 11 |

Passed
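Both warnings point at the SKILL.md frontmatter. A minimal sketch of a fix, assuming the standard YAML frontmatter layout for skills; every field besides the two the warnings name is illustrative, and the description is abbreviated:

```yaml
# Hypothetical SKILL.md frontmatter sketch; only metadata.version and the
# string-valued metadata entries address the two warnings above.
name: common-session-retrospective
description: Analyze conversation corrections to detect skill gaps and auto-improve the skills library.
metadata:
  version: "1.0.0"        # adds the missing metadata.version
  author: "HoangNguyen0403" # quoted so every metadata value is a string
```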

Repository: HoangNguyen0403/agent-skills-standard (Reviewed)
