Extract readable transcripts from Claude Code and Codex CLI session JSONL files
Overall: 78

- Quality: 67% — does it follow best practices?
- Impact: 95% — 11.87x average score across 3 eval scenarios
- Risk: Risky — do not use without reviewing
Optimize this skill with Tessl:

```
npx tessl skill review --optimize ./data/skills-md/0xbigboss/claude-code/extract-transcripts/SKILL.md
```

## Quality
### Discovery — 54%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear, distinctive niche with good natural trigger terms specific to Claude Code and Codex CLI workflows. However, it lacks a 'Use when...' clause and only describes a single action, limiting its completeness and specificity. Adding explicit trigger guidance and expanding the list of concrete capabilities would significantly improve it.
Suggestions
- Add a 'Use when...' clause, e.g., 'Use when the user wants to read, review, or share Claude Code or Codex CLI session logs, or mentions JSONL session files.'
- List additional concrete actions beyond 'extract transcripts', such as 'parse conversation turns, format tool use blocks, summarize session activity, convert JSONL to readable markdown.'
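Applied to the skill's frontmatter, the two suggestions above might combine into something like the following sketch. The wording and field layout are illustrative only — this is not the skill's actual metadata, and only the `extract-transcripts` name is taken from the source path:

```yaml
---
name: extract-transcripts
description: >
  Extract readable transcripts from Claude Code and Codex CLI session
  JSONL files: parse conversation turns, format tool use blocks, and
  convert sessions to readable markdown. Use when the user wants to
  read, review, or share session logs, or mentions JSONL session files.
---
```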
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (transcript extraction) and the specific input formats (JSONL files from Claude Code and Codex CLI), but only describes one action ('extract readable transcripts') rather than listing multiple concrete capabilities. | 2 / 3 |
| Completeness | Answers 'what' (extract readable transcripts from JSONL files) but lacks a 'Use when...' clause or any explicit trigger guidance, which per the rubric caps completeness at 2; since the 'when' is entirely missing, it falls to 1. | 1 / 3 |
| Trigger Term Quality | Includes highly specific natural keywords users would say: 'transcripts', 'Claude Code', 'Codex CLI', 'session', 'JSONL'. These are distinctive terms a user would naturally use when needing this skill. | 3 / 3 |
| Distinctiveness / Conflict Risk | Very clear niche targeting specifically Claude Code and Codex CLI session JSONL files. This is unlikely to conflict with other skills due to the highly specific tool names and file format mentioned. | 3 / 3 |
| Total | | 9 / 12 — Passed |
### Implementation — 79%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-written, actionable skill that provides concrete commands for transcript extraction across multiple tools. Its main strengths are excellent conciseness and fully executable examples. The weaknesses are minor: the DuckDB indexing workflow could benefit from explicit sequencing/validation steps, and the content could be better structured with progressive disclosure for the advanced indexing features.
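The core extraction task the review describes can be sketched in a few lines of Python. This assumes a simplified line schema — a `message` object with `role` and `content` per JSONL line — which is an assumption for illustration, not the documented Claude Code or Codex CLI session format (the real files nest tool use blocks and metadata):

```python
import json


def jsonl_to_markdown(lines):
    """Render session JSONL lines as a readable markdown transcript.

    Assumes each line is a JSON object with a "message" containing
    "role" and "content"; real session files carry more structure,
    so treat this as a starting point rather than a format spec.
    """
    parts = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        entry = json.loads(line)
        msg = entry.get("message") or {}
        role = msg.get("role", entry.get("type", "unknown"))
        content = msg.get("content", "")
        if isinstance(content, list):
            # Content blocks: keep only the text pieces.
            content = "\n".join(
                block.get("text", "")
                for block in content
                if isinstance(block, dict)
            )
        parts.append(f"**{role}**: {content}")
    return "\n\n".join(parts)


sample = [
    '{"type": "user", "message": {"role": "user", "content": "hello"}}',
    '{"type": "assistant", "message": {"role": "assistant",'
    ' "content": [{"type": "text", "text": "hi there"}]}}',
]
print(jsonl_to_markdown(sample))
```

Running this on the two sample lines prints a two-turn transcript with bolded role labels.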
Suggestions
- Add a brief workflow sequence for the DuckDB indexer (e.g., '1. Index sessions → 2. Verify with `recent` → 3. Search or show') with a note about checking index success before querying.
- Consider splitting the DuckDB-based transcript index section into a separate INDEXING.md file referenced from the main skill, keeping SKILL.md focused on basic extraction.
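The suggested sequencing might read like the sketch below. The command and subcommand names here are hypothetical placeholders derived from the review's own wording (`index`, `recent`, `search`, `show`) — they have not been verified against the skill's actual CLI:

```
# 1. Build the DuckDB index of sessions (hypothetical command names).
transcript-index index ~/.claude/projects/

# 2. Verify the index built successfully before querying:
#    `recent` should list the sessions just indexed.
transcript-index recent

# 3. Only then search or show a specific session.
transcript-index search "error handling"
transcript-index show <session-id>
```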
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient. It doesn't explain what JSONL files are, what Claude Code is, or how transcripts work conceptually. Every section provides direct, actionable command examples without padding. | 3 / 3 |
| Actionability | Every section provides concrete, copy-paste-ready commands with full paths. Options are clearly listed, file locations are specified, and the output format is described concisely. No pseudocode or vague instructions. | 3 / 3 |
| Workflow Clarity | The skill presents clear individual commands but lacks explicit workflow sequencing for multi-step processes like indexing then searching. The DuckDB section implies a workflow (index → search/recent/show) but doesn't explicitly sequence it or include validation checkpoints (e.g., verifying the index was built successfully before querying). | 2 / 3 |
| Progressive Disclosure | The content is well-organized with clear sections, but it's somewhat long for a single file with no references to supporting documentation. The DuckDB section could be split into a separate reference file. No bundle files are provided to evaluate external references against, but the skill would benefit from separating out the advanced indexing features. | 2 / 3 |
| Total | | 10 / 12 — Passed |
## Validation — 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 11 / 11 checks passed, with no warnings or errors.