Propose additions to project CLAUDE.md based on session learnings
Impact: Pending (no eval scenarios have been run)
Status: Passed (no known issues)
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./plugins/claude-code-dev/skills/propose-project-learning/SKILL.md`

Quality
Discovery — 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a specific target (CLAUDE.md) but is too terse to effectively guide skill selection. It lacks explicit trigger conditions and doesn't explain what types of learnings or additions are relevant, making it difficult for Claude to know when to choose this skill over others.
Suggestions

- Add a 'Use when...' clause with explicit triggers like 'when the user wants to save project preferences', 'update project documentation', or 'remember conventions for future sessions'.
- Specify what kinds of additions are proposed (e.g., 'coding conventions, tool preferences, workflow patterns, project-specific instructions').
- Include natural trigger terms users might say, such as 'save this preference', 'remember for next time', or 'update project config'.
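As a sketch of what these suggestions could look like combined, here is a possible revised frontmatter description. This is illustrative only: the exact SKILL.md frontmatter schema is defined by the skill spec, and only `name` and `description` are assumed here.

```yaml
---
name: propose-project-learning
description: >
  Propose additions to project CLAUDE.md based on session learnings,
  such as coding conventions, tool preferences, workflow patterns, and
  project-specific instructions. Use when the user wants to save a
  project preference, says "remember this for next time", or asks to
  update project documentation or configuration.
---
```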
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names a specific action ('Propose additions to project CLAUDE.md') and context ('session learnings'), but doesn't elaborate on what kinds of additions or what constitutes learnings. | 2 / 3 |
| Completeness | Describes what it does but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. | 1 / 3 |
| Trigger Term Quality | Includes 'CLAUDE.md' and 'session learnings', which are somewhat relevant, but misses natural user phrases like 'update project docs', 'save what we learned', 'remember this', or 'project configuration'. | 2 / 3 |
| Distinctiveness / Conflict Risk | 'CLAUDE.md' is fairly specific, but 'session learnings' is vague and could overlap with documentation, note-taking, or memory-related skills. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation — 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured skill with strong actionability and clear workflow. The execution instructions are concrete and sequenced properly with validation steps (checking existing CLAUDE.md). Minor improvements could be made by trimming redundant examples and potentially extracting the proposal template format to a separate reference file.
Suggestions

- Remove the 'Poor Learnings' example section: Claude already knows what generic advice looks like, and the 'Quality Criteria' table makes it redundant.
- Consider extracting the detailed proposal markdown template to a separate PROPOSAL_TEMPLATE.md file to reduce inline content length.
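A minimal sketch of the second suggestion: the inline proposal template moves to PROPOSAL_TEMPLATE.md, and SKILL.md references it instead. The section heading and wording below are hypothetical; only the file names SKILL.md and PROPOSAL_TEMPLATE.md come from the review.

```markdown
<!-- In SKILL.md, replace the inline proposal template with a pointer: -->

## Proposal format

Read PROPOSAL_TEMPLATE.md in this skill's directory and use it as the
exact markdown structure for the generated proposal.
```

This keeps SKILL.md short while letting the agent load the full template only when it actually generates a proposal, which is the progressive-disclosure behavior the review is asking for.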
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably efficient but includes some redundancy: the examples section repeats concepts already clear from the tables, and the 'Quality Criteria' section restates what's implicit in the good/poor examples. | 2 / 3 |
| Actionability | Provides concrete, executable guidance with specific bash commands, exact file paths, clear markdown output formats, and copy-paste-ready examples. The step-by-step execution instructions are fully actionable. | 3 / 3 |
| Workflow Clarity | Clear five-step sequence with explicit checkpoints: analyze session → check existing files → generate proposal → handle save flag → offer next steps. The numbered options at the end provide clear decision points. | 3 / 3 |
| Progressive Disclosure | Content is well organized with clear sections and tables, but everything is inline in one file. The skill is moderately long (~120 lines) and could benefit from separating the detailed proposal format template or examples into referenced files. | 2 / 3 |
| Total | | 10 / 12 (Passed) |
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 10 / 11 checks passed
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 (Passed) |
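A sketch of how the `frontmatter_unknown_keys` warning is typically resolved, per the check's own suggestion. The key name `author` below is hypothetical; the validator only reports that some unrecognized top-level key exists, not which one.

```yaml
# Before: an unrecognized top-level key triggers frontmatter_unknown_keys
---
name: propose-project-learning
description: Propose additions to project CLAUDE.md based on session learnings
author: example            # hypothetical unknown key
---

# After: remove the key, or nest it under metadata as the warning suggests
---
name: propose-project-learning
description: Propose additions to project CLAUDE.md based on session learnings
metadata:
  author: example
---
```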
Version `0ebe7ae`
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.