
self-documentation

Explain Claude Code features, capabilities, and tools. Use for questions like "how do skills work?", "what can you do?", "what's the difference between X and Y?", "should I use a skill or slash command?". Also handles recording observations about undocumented behaviors. Invoke with /self-documentation or ask naturally.

Install with Tessl CLI

```shell
npx tessl i github:ddehart/claude-code-plugins --skill self-documentation
```
Self-Documentation Skill

Enable Claude Code to explain its own features, guide capability decisions, and track undocumented observations discovered through usage.

When to Activate

This skill activates for:

  • Direct questions: "How do skills work?", "What can you do?", "What are your capabilities?"
  • Decision questions: "Should I use a skill or slash command?", "Is this a good use case for X?"
  • Comparison questions: "What's the difference between X and Y?", "When should I use X vs Y?"
  • Best practices: "What's the right way to do X?", "How should I structure Y?"
  • Observation recording: User reports discovering undocumented behavior

Reference Files

The skill uses thematic reference files for token-efficient progressive disclosure:

| File | Content |
| --- | --- |
| `topic-index.md` | Keyword-to-file mapping (load first) |
| `decision-guide.md` | Feature comparisons and decision trees |
| `core-features.md` | Skills, agents, MCP, plugins, hooks, slash commands |
| `configuration.md` | Settings, permissions, CLAUDE.md, sandboxing |
| `integrations.md` | VS Code, IDE extensions, plugin marketplace |
| `workflows.md` | Keyboard shortcuts, checkpointing, git automation |
| `undocumented.md` | Features from release notes without official docs |
| `sdk-behavioral-bridges.md` | Behavioral constraints from Agent SDK docs |
| `best-practices.md` | Meta-guidance: effective usage, failure patterns, prompt crafting |

Workflow: Answering Questions

Step 1: Load Topic Index

Read references/topic-index.md to identify which reference file(s) contain relevant information.

Step 2: Identify Relevant Files

Match the user's question keywords to the topic index. For cross-theme questions (e.g., "How do skills work with MCP servers?"), identify multiple relevant files.

Step 3: Load Reference Files

Read the identified reference file(s). Load only what is needed; avoid loading all files.

Step 4: Fetch Official Docs (If Needed)

If the reference file includes a documentation URL and more detail is needed:

  1. Use WebFetch to retrieve the official documentation
  2. Synthesize with reference file content

For behavioral constraints: Check sdk-behavioral-bridges.md for authoritative information from the Agent SDK docs (e.g., tool limitations, permission flow, subagent constraints).

Step 5: Respond

Provide a conversational response that:

  • Directly answers the user's question
  • Highlights key concepts
  • Makes opinionated recommendations for decision questions (don't present all options as equal)
  • Does NOT proactively suggest related topics

Step 6: Update Reference (If New Info Learned)

If WebFetch revealed new information, update the reference file's "Key concepts" section with today's date.

Workflow: Decision Questions

For questions about choosing between features ("Should I use X or Y?"):

  1. Load references/decision-guide.md
  2. Find the relevant comparison section
  3. Make a specific recommendation based on the user's context
  4. Explain why this choice is best
  5. Provide actionable next steps

Response pattern:

"For your use case, I recommend [specific choice]. Here's why:

[Reasoning based on user's context]

Next step: [Concrete action to take]"

Workflow: Recording Observations

When a user discovers undocumented behavior (e.g., "I found that AskUserQuestion can't be delegated to subagents"):

Step 1: Confirm Value

Verify this is a new observation worth recording:

  • Is it about Claude Code behavior?
  • Is it not already documented?
  • Is it reproducible or specific enough to be useful?

Step 2: Check for Duplicates

Before creating an issue, check if this observation already exists:

  1. List existing observations:

     ```shell
     python3 "${CLAUDE_PLUGIN_ROOT}/skills/self-documentation/scripts/observations.py" list
     ```

  2. Search the output for similar observations by description keywords or `feature_area`

If the output is empty, this is the first observation; proceed to Step 3.

If a similar observation exists:

  • Inform the user: "This observation appears similar to an existing one: [description]"
  • Ask if they want to add anyway or skip

If no duplicate found: Proceed to Step 3.
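
The duplicate check above amounts to a keyword match over the stored observations. The helper below is a hypothetical sketch shown only to illustrate the matching; the skill itself shells out to `observations.py list` and searches its output rather than reading the JSON directly:

```python
import json
from pathlib import Path

def find_similar(store_path, keywords, feature_area=None):
    """Return stored observations whose description contains any keyword.

    Hypothetical helper illustrating the duplicate check; `store_path`
    points at the observations.json file described in Storage Schema.
    """
    if not store_path.exists():
        return []  # first observation: nothing to compare against
    data = json.loads(store_path.read_text())
    hits = []
    for obs in data.get("observations", []):
        desc = obs["description"].lower()
        area_ok = feature_area is None or obs.get("feature_area") == feature_area
        if area_ok and any(kw.lower() in desc for kw in keywords):
            hits.append(obs)
    return hits
```

An empty result means no duplicate was found and the workflow proceeds to Step 3.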

Step 3: Propose Issue Creation

Ask the user:

"This seems like a valuable observation. Would you like me to create a GitHub issue in the plugin repo to track it?"

Wait for user confirmation.

Step 4: Create GitHub Issue

Use gh CLI to create an issue in ddehart/claude-code-plugins:

```shell
gh issue create --repo ddehart/claude-code-plugins \
  --title "[Observation] <brief description>" \
  --body "<formatted body>" \
  --label "observation"
```

Issue format:

```markdown
## Observation
<description of the behavior>

## Reproduction Context
- Discovered: <date>
- Context: <what the user was trying to do>
- Version: <Claude Code version if known>

## Related
- Feature area: <e.g., tools, subagents, skills>
```

Step 5: Handle gh CLI Failure

If gh is unavailable or authentication fails:

  1. Format the issue content
  2. Display to user:

    "I couldn't create the issue automatically. Here's the formatted content you can paste into a new issue at: https://github.com/ddehart/claude-code-plugins/issues/new"

  3. Continue to Step 6

Step 6: Save Observation

After issue creation (or if user declined), save the observation locally using the helper script:

```shell
python3 "${CLAUDE_PLUGIN_ROOT}/skills/self-documentation/scripts/observations.py" add \
  --description "The observed behavior" \
  --feature-area "tools" \
  --context "How it was discovered" \
  --issue-url "https://github.com/..."  # if issue was created
```

The script handles:

  • Creating the storage directory if needed (~/.local/share/claude-plugins/self-documentation/)
  • Generating a unique observation ID
  • Setting the discovered date
  • Atomic file write
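
The behaviors listed above can be sketched as follows. This is an assumed illustration of what the helper script does internally, not its actual implementation:

```python
import datetime
import json
import os
import tempfile
from pathlib import Path

def save_observation(store_dir, record):
    """Append a record to observations.json using an atomic replace.

    Sketch of the helper script's assumed internals: create the storage
    directory, assign a unique ID and discovered date, then write via a
    temp file so readers never see a partially written store.
    """
    store_dir = Path(store_dir)
    store_dir.mkdir(parents=True, exist_ok=True)
    store = store_dir / "observations.json"
    data = json.loads(store.read_text()) if store.exists() else {"version": 1, "observations": []}
    today = datetime.date.today().isoformat()
    record = {
        "id": f"obs-{today.replace('-', '')}-{len(data['observations']) + 1:03d}",
        "discovered": today,
        "status": "new",
        **record,
    }
    data["observations"].append(record)
    data["last_updated"] = today
    # Write to a temp file in the same directory, then rename over the
    # original; os.replace is atomic on the same filesystem.
    fd, tmp = tempfile.mkstemp(dir=store_dir, suffix=".tmp")
    with os.fdopen(fd, "w") as f:
        json.dump(data, f, indent=2)
    os.replace(tmp, store)
    return record
```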

Storage Schema

Observations are stored in $XDG_DATA_HOME/claude-plugins/self-documentation/observations.json (defaults to ~/.local/share/...):

```json
{
  "version": 1,
  "observations": [
    {
      "id": "obs-20260111-001",
      "description": "...",
      "context": "...",
      "discovered": "YYYY-MM-DD",
      "feature_area": "tools|skills|agents|mcp|config|other",
      "issue_url": "https://github.com/...",
      "status": "new|submitted"
    }
  ],
  "last_updated": "YYYY-MM-DD"
}
```
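
The path lookup described above can be sketched as a small resolver. This is an assumed illustration of the `$XDG_DATA_HOME` fallback, not the helper script's actual code:

```python
import os
from pathlib import Path

def observations_path():
    """Resolve the observation store path, honoring $XDG_DATA_HOME.

    Falls back to ~/.local/share when the variable is unset, matching
    the default stated in the Storage Schema section.
    """
    base = os.environ.get("XDG_DATA_HOME")
    root = Path(base) if base else Path.home() / ".local" / "share"
    return root / "claude-plugins" / "self-documentation" / "observations.json"
```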

Workflow: Observation Lifecycle

When referencing stored observations (e.g., to help answer a question):

  1. List observations:

     ```shell
     python3 "${CLAUDE_PLUGIN_ROOT}/skills/self-documentation/scripts/observations.py" list
     ```

  2. For each relevant observation, check if it's now documented:

     • Search reference files for the described behavior
     • Check if it appears in official docs

  3. If now documented, remove it:

     ```shell
     python3 "${CLAUDE_PLUGIN_ROOT}/skills/self-documentation/scripts/observations.py" remove <obs-id>
     ```

     Inform the user: "Observation [id] is now documented. Removing from local cache."
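
The pruning step can be sketched as below. `prune_documented` is a hypothetical helper shown only to illustrate the lifecycle; the real workflow calls `observations.py remove <obs-id>` once per ID:

```python
import json
from pathlib import Path

def prune_documented(store_path, documented_ids):
    """Drop observations whose behavior is now officially documented.

    Hypothetical sketch: removes matching IDs from observations.json
    and returns how many entries were pruned.
    """
    store_path = Path(store_path)
    data = json.loads(store_path.read_text())
    keep = [o for o in data["observations"] if o["id"] not in documented_ids]
    removed = len(data["observations"]) - len(keep)
    data["observations"] = keep
    store_path.write_text(json.dumps(data, indent=2))
    return removed
```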

Error Handling

| Scenario | Action |
| --- | --- |
| Reference file not found | Fall back to topic-index, report which file is missing |
| WebFetch fails | Use cached key concepts from reference file |
| gh CLI unavailable | Provide formatted issue for manual creation |
| Cross-theme question unclear | Load topic-index, ask user to clarify if needed |
| Similar observation exists | Use `observations.py list` to check, inform user, ask if they want to add anyway |

Plugin Repo

Issues and PRs go to: ddehart/claude-code-plugins

This skill is part of the meta-claude plugin in that marketplace.
