Look up prior Claude Code sessions when context is lost or forgotten. Use when asked "what did we do before?", "what happened in the last session?", "I forgot what we were working on", "find what I told you about X", or any request to recall past conversation history, prior decisions, experiments, or outcomes. Searches raw JSONL transcripts from ~/.claude/projects/ via DuckDB index. Returns verbatim user messages and summarizes AI actions and sub-agent outcomes. Summaries cached at ~/.claude/kaizen/session-summaries/.
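As a rough sketch of the recall path this description implies (the `role` and `content` fields are assumptions about the JSONL record shape, not a documented schema; the real skill queries these files through its DuckDB index rather than scanning them in Python):

```python
import json

# Hypothetical transcript in the assumed JSONL shape: one JSON object
# per line, with "role" and "content" fields (illustrative only).
transcript = "\n".join([
    '{"role": "user", "content": "lets add a retry to the uploader"}',
    '{"role": "assistant", "content": "Added backoff to the uploader."}',
    '{"role": "user", "content": "what did we do before?"}',
])

def verbatim_user_messages(jsonl_text):
    """Return user messages word-for-word, skipping blank lines."""
    messages = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        if record.get("role") == "user":
            messages.append(record["content"])
    return messages

print(verbatim_user_messages(transcript))
# → ['lets add a retry to the uploader', 'what did we do before?']
```

The "verbatim" rule matters here: user messages are returned untouched, while assistant turns are the part the skill summarizes.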
Score: 90

- Quality: 88% (does it follow best practices?)
- Impact: Pending (no eval scenarios have been run)
- Risk: Risky; do not use without reviewing
Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that hits all the marks. It provides specific concrete actions, includes abundant natural trigger phrases users would actually say, explicitly addresses both what the skill does and when to use it, and occupies a clearly distinct niche. The technical details (DuckDB, JSONL, file paths) add precision without sacrificing readability.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: look up prior sessions, search raw JSONL transcripts via the DuckDB index, return verbatim user messages, summarize AI actions and sub-agent outcomes, cache summaries. Very concrete and detailed. | 3 / 3 |
| Completeness | Clearly answers both 'what' (searches JSONL transcripts via DuckDB, returns verbatim messages, summarizes AI actions) and 'when' (an explicit 'Use when...' clause with multiple natural trigger phrases and scenario descriptions). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger phrases users would actually say: 'what did we do before?', 'what happened in the last session?', 'I forgot what we were working on', 'find what I told you about X', plus general terms like 'recall past conversation history', 'prior decisions', 'experiments', and 'outcomes'. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche: specifically about recalling prior Claude Code sessions from JSONL transcripts in ~/.claude/projects/. The specific tooling (DuckDB index), file paths, and focus on session history make it very unlikely to conflict with other skills. | 3 / 3 |
| Total | | 12 / 12 (Passed) |
Implementation: 77%

Reviews the quality of the instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-crafted skill with excellent actionability — every command is concrete, executable, and clearly documented with arguments, options, and exit codes. The workflow section is clear and well-sequenced. The main weakness is that the command reference and summary template sections are quite lengthy inline, making the file heavier than necessary when a progressive disclosure approach with separate reference files would be more token-efficient.
Suggestions
Move the detailed command reference (errors, tools, irritation, current-path) to a separate COMMANDS.md file and link to it from the main skill, keeping only the core commands (list, messages, search, show, index) inline.
Consider moving the summary template to a separate file (e.g., SUMMARY_TEMPLATE.md) and referencing it, since it's a large block that's only needed during summary generation.
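Under those two suggestions, the bundle might be laid out like this (a sketch only; the split file names come from the suggestions above, and `SKILL.md` is a hypothetical name for the main skill file):

```
skill/
├── SKILL.md              # workflow + core commands: list, messages, search, show, index
├── COMMANDS.md           # detailed reference: errors, tools, irritation, current-path
└── SUMMARY_TEMPLATE.md   # loaded only when generating a summary
```

Keeping only the core commands inline means the large reference and template blocks cost tokens only when the agent actually follows the link to them.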
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is mostly efficient and avoids explaining concepts Claude already knows, but the command reference section is quite lengthy, with detailed descriptions of each subcommand (errors, tools, irritation, current-path) that could be terser or offloaded to a separate reference file. The summary template is also quite large inline. | 2 / 3 |
| Actionability | Every command is fully executable with exact CLI invocations, concrete examples, and specific flags. The workflow steps are copy-paste ready, the summary template is complete, and the JSONL schema reference provides the exact JSON structure needed to parse transcripts. | 3 / 3 |
| Workflow Clarity | The 'I forgot what happened' workflow is clearly sequenced, with numbered steps progressing from listing sessions → reading messages → searching → generating summaries → marking as summarized. The fidelity rules serve as validation checkpoints, and the index rebuild note addresses staleness. The workflow is well-structured for the task's complexity. | 3 / 3 |
| Progressive Disclosure | The skill references a script file path and index locations appropriately, but the command reference section (errors, tools, irritation, current-path) is extensive inline content that would benefit from being in a separate COMMANDS.md or REFERENCE.md file. The summary template is also large and could be externalized. Without bundle files provided, the single-file approach makes the skill longer than ideal. | 2 / 3 |
| Total | | 10 / 12 (Passed) |
Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure: 11 / 11 passed.
No warnings or errors.