Install:

```sh
tessl install github:jdrhyne/agent-skills --skill self-improvement
```

Source: [github.com/jdrhyne/agent-skills](https://github.com/jdrhyne/agent-skills)

> Captures learnings, errors, and corrections to enable continuous improvement. Use when: (1) A command or operation fails unexpectedly, (2) User corrects Claude ('No, that's wrong...', 'Actually...'), (3) User requests a capability that doesn't exist, (4) An external API or tool fails, (5) Claude realizes its knowledge is outdated or incorrect, (6) A better approach is discovered for a recurring task. Also review learnings before major tasks.


# Self-Improvement Skill

Log learnings and errors to markdown files for continuous improvement. Coding agents can later process these into fixes, and important learnings get promoted to project memory.

## Quick Reference

| Situation | Action |
|---|---|
| Command/operation fails | Log to `.learnings/ERRORS.md` |
| User corrects you | Log to `.learnings/LEARNINGS.md` with category `correction` |
| User wants missing feature | Log to `.learnings/FEATURE_REQUESTS.md` |
| API/external tool fails | Log to `.learnings/ERRORS.md` with integration details |
| Knowledge was outdated | Log to `.learnings/LEARNINGS.md` with category `knowledge_gap` |
| Found better approach | Log to `.learnings/LEARNINGS.md` with category `best_practice` |
| Similar to existing entry | Link with **See Also**, consider priority bump |
| Broadly applicable learning | Promote to `CLAUDE.md`, `AGENTS.md`, and/or `.github/copilot-instructions.md` |

## Setup

Create a `.learnings/` directory in the project root if it doesn't exist:

```sh
mkdir -p .learnings
```

Copy the templates from `assets/`, or create the files with headers.
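If the templates aren't available, a minimal bootstrap sketch (the header text here is an assumption, not the canonical templates):

```sh
# Create the three log files with placeholder headers if they don't exist.
# Prefer the templates in assets/ when present.
mkdir -p .learnings
for f in LEARNINGS ERRORS FEATURE_REQUESTS; do
  [ -f ".learnings/$f.md" ] || printf '# %s\n\n' "$f" > ".learnings/$f.md"
done
```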

## Logging Format

### Learning Entry

Append to `.learnings/LEARNINGS.md`:

```markdown
## [LRN-YYYYMMDD-XXX] category

**Logged**: ISO-8601 timestamp
**Priority**: low | medium | high | critical
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config

### Summary
One-line description of what was learned

### Details
Full context: what happened, what was wrong, what's correct

### Suggested Action
Specific fix or improvement to make

### Metadata
- Source: conversation | error | user_feedback
- Related Files: path/to/file.ext
- Tags: tag1, tag2
- See Also: LRN-20250110-001 (if related to existing entry)

---
```
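For instance, a filled-in entry might look like this (the details are hypothetical):

```markdown
## [LRN-20250115-001] correction

**Logged**: 2025-01-15T14:32:00Z
**Priority**: medium
**Status**: pending
**Area**: config

### Summary
Project uses pnpm workspaces; `npm install` fails

### Details
Ran `npm install` and it failed. The lock file is `pnpm-lock.yaml`; the user
confirmed this repo is a pnpm workspace and installs must use pnpm.

### Suggested Action
Always use `pnpm install` here; consider promoting this to CLAUDE.md.

### Metadata
- Source: user_feedback
- Related Files: pnpm-lock.yaml
- Tags: tooling, package-manager

---
```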

### Error Entry

Append to `.learnings/ERRORS.md`:

```markdown
## [ERR-YYYYMMDD-XXX] skill_or_command_name

**Logged**: ISO-8601 timestamp
**Priority**: high
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config

### Summary
Brief description of what failed

### Error
Actual error message or output

### Context
- Command/operation attempted
- Input or parameters used
- Environment details if relevant

### Suggested Fix
If identifiable, what might resolve this

### Metadata
- Reproducible: yes | no | unknown
- Related Files: path/to/file.ext
- See Also: ERR-20250110-001 (if recurring)

---
```

### Feature Request Entry

Append to `.learnings/FEATURE_REQUESTS.md`:

```markdown
## [FEAT-YYYYMMDD-XXX] capability_name

**Logged**: ISO-8601 timestamp
**Priority**: medium
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config

### Requested Capability
What the user wanted to do

### User Context
Why they needed it, what problem they're solving

### Complexity Estimate
simple | medium | complex

### Suggested Implementation
How this could be built, what it might extend

### Metadata
- Frequency: first_time | recurring
- Related Features: existing_feature_name

---
```

## ID Generation

Format: `TYPE-YYYYMMDD-XXX`

- **TYPE**: `LRN` (learning), `ERR` (error), `FEAT` (feature)
- **YYYYMMDD**: Current date
- **XXX**: Sequential number or three random characters (e.g., `001`, `A7B`)

Examples: `LRN-20250115-001`, `ERR-20250115-A3F`, `FEAT-20250115-002`
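A shell sketch for generating the next sequential ID (the counting approach is an assumption; any collision-free scheme works):

```sh
# Next sequential learning ID for today, e.g. LRN-20250115-002.
prefix="LRN-$(date +%Y%m%d)"
count=$(grep -c "^## \[$prefix" .learnings/LEARNINGS.md 2>/dev/null || true)
printf '%s-%03d\n' "$prefix" "$((count + 1))"
```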

## Resolving Entries

When an issue is fixed, update the entry:

1. Change `**Status**: pending` to `**Status**: resolved`
2. Add a resolution block after Metadata:

```markdown
### Resolution
- **Resolved**: 2025-01-16T09:00:00Z
- **Commit/PR**: abc123 or #42
- **Notes**: Brief description of what was done
```

Other status values:

- `in_progress` - Actively being worked on
- `wont_fix` - Decided not to address (add reason in Resolution notes)
- `promoted` - Elevated to CLAUDE.md, AGENTS.md, or .github/copilot-instructions.md

## Promoting to Project Memory

When a learning is broadly applicable (not a one-off fix), promote it to permanent project memory.

### When to Promote

- Learning applies across multiple files/features
- Knowledge any contributor (human or AI) should know
- Prevents recurring mistakes
- Documents project-specific conventions

### Promotion Targets

| Target | What Belongs There |
|---|---|
| `CLAUDE.md` | Project facts, conventions, gotchas for all Claude interactions |
| `AGENTS.md` | Agent-specific workflows, tool usage patterns, automation rules |
| `.github/copilot-instructions.md` | Project context and conventions for GitHub Copilot |

### How to Promote

1. Distill the learning into a concise rule or fact
2. Add it to the appropriate section in the target file (create the file if needed)
3. Update the original entry:
   - Change `**Status**: pending` to `**Status**: promoted`
   - Add a `**Promoted**:` line naming CLAUDE.md, AGENTS.md, or .github/copilot-instructions.md

### Promotion Examples

Learning (verbose):

> Project uses pnpm workspaces. Attempted npm install but failed. Lock file is pnpm-lock.yaml. Must use pnpm install.

In CLAUDE.md (concise):

```markdown
## Build & Dependencies
- Package manager: pnpm (not npm) - use `pnpm install`
```

Learning (verbose):

> When modifying API endpoints, must regenerate TypeScript client. Forgetting this causes type mismatches at runtime.

In AGENTS.md (actionable):

```markdown
## After API Changes
1. Regenerate client: `pnpm run generate:api`
2. Check for type errors: `pnpm tsc --noEmit`
```

## Recurring Pattern Detection

If logging something similar to an existing entry:

1. **Search first**: `grep -r "keyword" .learnings/`
2. **Link entries**: add `**See Also**: ERR-20250110-001` in Metadata
3. **Bump priority** if the issue keeps recurring
4. **Consider a systemic fix** (see the sketch below): recurring issues often indicate
   - missing documentation (→ promote to CLAUDE.md or .github/copilot-instructions.md)
   - missing automation (→ add to AGENTS.md)
   - an architectural problem (→ create a tech debt ticket)
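A quick recurrence check before bumping priority (a sketch; the keyword is a placeholder):

```sh
# Count existing files that mention a keyword before logging a new entry.
keyword="pnpm"   # placeholder: the topic you're about to log
hits=$(grep -ril "$keyword" .learnings/ 2>/dev/null | wc -l)
echo "$hits file(s) already mention '$keyword' - add See Also links if related"
```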

## Periodic Review

Review `.learnings/` at natural breakpoints.

### When to Review

- Before starting a new major task
- After completing a feature
- When working in an area with past learnings
- Weekly during active development

### Quick Status Check

```sh
# Count pending items
grep -h "Status\*\*: pending" .learnings/*.md | wc -l

# List high-priority items (-h drops filename prefixes so the header regex matches)
grep -h -B5 "Priority\*\*: high" .learnings/*.md | grep "^## \["

# Find files with learnings for a specific area
grep -l "Area\*\*: backend" .learnings/*.md
```

### Review Actions

- **Resolve** fixed items
- **Promote** applicable learnings
- **Link** related entries
- **Escalate** recurring issues

## Detection Triggers

Automatically log when you notice:

**Corrections** (→ learning with `correction` category):

- "No, that's not right..."
- "Actually, it should be..."
- "You're wrong about..."
- "That's outdated..."

**Feature Requests** (→ feature request):

- "Can you also..."
- "I wish you could..."
- "Is there a way to..."
- "Why can't you..."

**Knowledge Gaps** (→ learning with `knowledge_gap` category):

- User provides information you didn't know
- Documentation you referenced is outdated
- API behavior differs from your understanding

**Errors** (→ error entry):

- Command returns non-zero exit code
- Exception or stack trace
- Unexpected output or behavior
- Timeout or connection failure

## Priority Guidelines

| Priority | When to Use |
|---|---|
| `critical` | Blocks core functionality, data loss risk, security issue |
| `high` | Significant impact, affects common workflows, recurring issue |
| `medium` | Moderate impact, workaround exists |
| `low` | Minor inconvenience, edge case, nice-to-have |

## Area Tags

Use these to filter learnings by codebase region:

| Area | Scope |
|---|---|
| `frontend` | UI, components, client-side code |
| `backend` | API, services, server-side code |
| `infra` | CI/CD, deployment, Docker, cloud |
| `tests` | Test files, testing utilities, coverage |
| `docs` | Documentation, comments, READMEs |
| `config` | Configuration files, environment, settings |

## Best Practices

1. **Log immediately** - context is freshest right after the issue
2. **Be specific** - future agents need to understand quickly
3. **Include reproduction steps** - especially for errors
4. **Link related files** - makes fixes easier
5. **Suggest concrete fixes** - not just "investigate"
6. **Use consistent categories** - enables filtering
7. **Promote aggressively** - if in doubt, add to CLAUDE.md or .github/copilot-instructions.md
8. **Review regularly** - stale learnings lose value

## Gitignore Options

**Keep learnings local** (per-developer):

```gitignore
.learnings/
```

**Track learnings in repo** (team-wide): don't add anything to `.gitignore`; learnings become shared knowledge.

**Hybrid** (track templates, ignore entries):

```gitignore
.learnings/*.md
!.learnings/.gitkeep
```

## Hook Integration

Enable automatic reminders through agent hooks. This is opt-in: you must explicitly configure hooks.

### Quick Setup (Claude Code / Codex)

Create `.claude/settings.json` in your project:

```json
{
  "hooks": {
    "UserPromptSubmit": [{
      "matcher": "",
      "hooks": [{
        "type": "command",
        "command": "./skills/self-improvement/scripts/activator.sh"
      }]
    }]
  }
}
```

This injects a learning-evaluation reminder after each prompt (~50-100 tokens of overhead).
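The repository ships the actual script; purely as an illustration, a UserPromptSubmit hook of this shape might look like the sketch below (in Claude Code, a hook's stdout is added to the agent's context):

```sh
#!/usr/bin/env bash
# Illustrative activator sketch - not the shipped script.
# Whatever this prints is injected as context after the user's prompt.
if [ -d .learnings ]; then
  pending=$(grep -h 'Status\*\*: pending' .learnings/*.md 2>/dev/null | wc -l)
  echo "Self-improvement: $pending pending item(s) in .learnings/; log any new learnings from this task."
fi
```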

### Full Setup (With Error Detection)

```json
{
  "hooks": {
    "UserPromptSubmit": [{
      "matcher": "",
      "hooks": [{
        "type": "command",
        "command": "./skills/self-improvement/scripts/activator.sh"
      }]
    }],
    "PostToolUse": [{
      "matcher": "Bash",
      "hooks": [{
        "type": "command",
        "command": "./skills/self-improvement/scripts/error-detector.sh"
      }]
    }]
  }
}
```

### Available Hook Scripts

| Script | Hook Type | Purpose |
|---|---|---|
| `scripts/activator.sh` | UserPromptSubmit | Reminds to evaluate learnings after tasks |
| `scripts/error-detector.sh` | PostToolUse (Bash) | Triggers on command errors |

See `references/hooks-setup.md` for detailed configuration and troubleshooting.
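For intuition, the error detector might look roughly like this sketch (assuming, as in Claude Code, that a PostToolUse hook receives the tool result as JSON on stdin; the matching heuristic is an assumption, not the shipped logic):

```sh
#!/usr/bin/env bash
# Illustrative error-detector sketch - not the shipped script.
# Reads the tool-result JSON from stdin and looks for failure markers.
payload=$(cat)
if printf '%s' "$payload" | grep -qiE 'error|exception|non-zero|command failed'; then
  echo "A command appears to have failed - consider logging it to .learnings/ERRORS.md."
fi
```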

## Automatic Skill Extraction

When a learning is valuable enough to become a reusable skill, extract it using the provided helper.

### Skill Extraction Criteria

A learning qualifies for skill extraction when ANY of these apply:

| Criterion | Description |
|---|---|
| Recurring | Has See Also links to 2+ similar issues |
| Verified | Status is resolved with a working fix |
| Non-obvious | Required actual debugging/investigation to discover |
| Broadly applicable | Not project-specific; useful across codebases |
| User-flagged | User says "save this as a skill" or similar |

### Extraction Workflow

1. **Identify a candidate**: the learning meets the extraction criteria
2. **Run the helper** (or create manually):

   ```sh
   ./skills/self-improvement/scripts/extract-skill.sh skill-name --dry-run
   ./skills/self-improvement/scripts/extract-skill.sh skill-name
   ```

3. **Customize SKILL.md**: fill in the template with the learning content
4. **Update the learning**: set status to `promoted_to_skill` and add `Skill-Path`
5. **Verify**: read the skill in a fresh session to ensure it's self-contained

### Manual Extraction

If you prefer manual creation:

1. Create `skills/<skill-name>/SKILL.md`
2. Use the template from `assets/SKILL-TEMPLATE.md`
3. Follow the Agent Skills spec (see the example frontmatter below):
   - YAML frontmatter with `name` and `description`
   - Name must match the folder name
   - No README.md inside the skill folder
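For example, the frontmatter for a skill extracted from the pnpm learning above might look like this (the name and wording are illustrative):

```yaml
---
name: pnpm-workspace-installs
description: Use pnpm (not npm) in pnpm-workspace repos. Use when installs fail or a pnpm-lock.yaml file is present.
---
```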

### Extraction Detection Triggers

Watch for these signals that a learning should become a skill.

In conversation:

- "Save this as a skill"
- "I keep running into this"
- "This would be useful for other projects"
- "Remember this pattern"

In learning entries:

- Multiple See Also links (recurring issue)
- High priority + resolved status
- Category `best_practice` with broad applicability
- User feedback praising the solution

### Skill Quality Gates

Before extraction, verify:

- Solution is tested and working
- Description is clear without the original context
- Code examples are self-contained
- No project-specific hardcoded values
- Follows skill naming conventions (lowercase, hyphens)

## Multi-Agent Support

This skill works across different AI coding agents, with agent-specific activation.

### Claude Code

- **Activation**: Hooks (UserPromptSubmit, PostToolUse)
- **Setup**: `.claude/settings.json` with hook configuration
- **Detection**: Automatic via hook scripts

### Codex CLI

- **Activation**: Hooks (same pattern as Claude Code)
- **Setup**: `.codex/settings.json` with hook configuration
- **Detection**: Automatic via hook scripts

### GitHub Copilot

- **Activation**: Manual (no hook support)
- **Setup**: Add to `.github/copilot-instructions.md`:

  ```markdown
  ## Self-Improvement

  After solving non-obvious issues, consider logging to `.learnings/`:
  1. Use format from self-improvement skill
  2. Link related entries with See Also
  3. Promote high-value learnings to skills

  Ask in chat: "Should I log this as a learning?"
  ```

- **Detection**: Manual review at session end

### Agent-Agnostic Guidance

Regardless of agent, apply self-improvement when you:

1. **Discover something non-obvious** - the solution wasn't immediate
2. **Correct yourself** - the initial approach was wrong
3. **Learn project conventions** - discovered undocumented patterns
4. **Hit unexpected errors** - especially if diagnosis was difficult
5. **Find better approaches** - improved on your original solution

### Copilot Chat Integration

For Copilot users, add this to your prompts when relevant:

> After completing this task, evaluate whether any learnings should be logged to `.learnings/` using the self-improvement skill format.

Or use quick prompts:

- "Log this to learnings"
- "Create a skill from this solution"
- "Check .learnings/ for related issues"