Pattern for progressively refining context retrieval to solve the subagent context problem
Overall score: 47 (35%)

- Does it follow best practices? Passed, no known issues
- Impact: Pending, no eval scenarios have been run
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./skills/iterative-retrieval/SKILL.md`

Quality
Discovery: 7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is too abstract and jargon-heavy to be effective for skill selection. It reads more like an academic concept label than an actionable skill description, lacking concrete actions, natural trigger terms, and any explicit guidance on when to use it.
Suggestions
Replace abstract language with concrete actions, e.g., 'Implements iterative context narrowing for subagent tasks by first retrieving broad context, then filtering to relevant sections, then extracting precise details.'
Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when a subagent needs to retrieve and refine context from a large codebase, knowledge base, or document set to answer a specific question.'
Include natural keywords users might say, such as 'context window', 'subagent', 'retrieval', 'search refinement', 'narrowing results', or 'multi-step lookup'.
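Taken together, the three suggestions could yield frontmatter like the following. This is one hypothetical rewrite for illustration, not text from the skill itself:

```yaml
name: iterative-retrieval
description: >
  Implements iterative context narrowing for subagent tasks: first retrieve
  broad context, then filter to relevant sections, then extract precise
  details. Use when a subagent needs to retrieve and refine context from a
  large codebase, knowledge base, or document set to answer a specific
  question. Keywords: context window, subagent, retrieval, search refinement,
  narrowing results, multi-step lookup.
```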
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses abstract language like 'pattern for progressively refining context retrieval' without listing any concrete actions. It describes a concept rather than specific capabilities. | 1 / 3 |
| Completeness | The description weakly addresses 'what' (a pattern for refining context retrieval) and completely omits 'when' — there is no 'Use when...' clause or any explicit trigger guidance. | 1 / 3 |
| Trigger Term Quality | The terms 'progressively refining context retrieval' and 'subagent context problem' are technical jargon that users would not naturally say. There are no natural trigger keywords a user would use. | 1 / 3 |
| Distinctiveness / Conflict Risk | The mention of 'subagent context problem' is somewhat niche and specific to a particular architectural concern, which provides some distinctiveness, but the overall vagueness could still cause confusion with other context-management or retrieval-related skills. | 2 / 3 |
| Total | | 5 / 12 (Passed) |
Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill clearly communicates a useful conceptual pattern for iterative context retrieval with good workflow structure and practical examples. However, it leans more toward describing a methodology than providing executable, tool-specific guidance—the examples are illustrative walkthroughs rather than actionable code. Some sections explain things Claude would already understand, and the content could be more concise while maintaining its clarity.
Suggestions
Replace pseudocode examples with executable implementations—e.g., actual grep/ripgrep commands, file reading tool calls, or a concrete Python/bash script that implements the dispatch-evaluate-refine loop.
Remove the 'The Problem' section's 'Standard approaches fail' bullets, which state obvious tradeoffs Claude already understands, and fold any essential framing into a single sentence.
Add concrete tool usage guidance—specify which tools (grep, find, Read, etc.) to use in each phase, with exact command patterns rather than abstract 'search for keywords' instructions.
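As a sketch of the first suggestion, a dispatch-evaluate-refine loop could look like the following. The corpus shape, scoring heuristic, and thresholds are illustrative assumptions for this review, not part of the skill being reviewed:

```python
def score(text: str, keywords: list[str]) -> float:
    """Fraction of keywords present in the text (a crude relevance proxy)."""
    hits = sum(1 for kw in keywords if kw.lower() in text.lower())
    return hits / len(keywords) if keywords else 0.0

def retrieve(corpus: dict[str, str], keywords: list[str],
             high: float = 0.5, needed: int = 3, max_cycles: int = 3):
    """Dispatch a search, evaluate relevance, refine keywords, and repeat."""
    relevant: list[str] = []
    for cycle in range(1, max_cycles + 1):
        # Dispatch: score every file against the current keyword set.
        scored = {path: score(text, keywords) for path, text in corpus.items()}
        relevant = [p for p, s in scored.items() if s >= high]
        # Evaluate stop conditions: enough high-relevance files, or out of cycles.
        if len(relevant) >= needed or cycle == max_cycles:
            return relevant, cycle
        # Refine: narrow keywords to terms that occur in the best match so far.
        best = max(scored, key=scored.get)
        narrowed = [kw for kw in keywords if kw.lower() in corpus[best].lower()]
        keywords = narrowed or keywords
    return relevant, max_cycles
```

In a real agent the in-memory `corpus` lookup would be replaced by actual tool calls (ripgrep, file reads), but the loop structure and stop conditions would stay the same.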
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill includes some unnecessary explanation of the problem space (the 'Standard approaches fail' section states obvious things Claude would know). The ASCII diagram adds visual clarity but the overall content could be tightened—the pattern itself is relatively simple but takes ~120 lines to convey. | 2 / 3 |
| Actionability | The skill describes a conceptual pattern with pseudocode-like examples rather than executable code. The 'Integration with Agents' section provides a markdown prompt template but no actual implementation—no real search commands, no API calls, no tool usage. The examples illustrate the concept well but aren't copy-paste executable. | 2 / 3 |
| Workflow Clarity | The 4-phase loop is clearly sequenced with explicit stop conditions (3+ high-relevance files, no critical gaps, max cycles reached). The two practical examples walk through the cycle-by-cycle progression with evaluation scores and refinement decisions, providing clear feedback loops. | 3 / 3 |
| Progressive Disclosure | The content is reasonably well-structured with clear sections, but it's somewhat monolithic—all content is inline in a single file. The 'Related' section references external resources but the links are to external URLs and vague skill references rather than well-signaled companion files with clear purposes. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
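The stop conditions praised under Workflow Clarity (3+ high-relevance files, no critical gaps, max cycles reached) could also be made executable. A minimal sketch, assuming hypothetical state fields and treating the first two conditions as one combined check:

```python
from dataclasses import dataclass, field

@dataclass
class RetrievalState:
    """Illustrative state an agent might carry between retrieval cycles."""
    high_relevance_files: list[str] = field(default_factory=list)
    critical_gaps: list[str] = field(default_factory=list)
    cycle: int = 0
    max_cycles: int = 3

def should_stop(state: RetrievalState) -> tuple[bool, str]:
    """Return (stop?, reason) following the skill's stated stop conditions."""
    if len(state.high_relevance_files) >= 3 and not state.critical_gaps:
        return True, "3+ high-relevance files and no critical gaps"
    if state.cycle >= state.max_cycles:
        return True, "max cycles reached"
    return False, "continue refining"
```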
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 10 / 11 Passed | |
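A hypothetical fix for the single warning, following the validator's own hint to move unknown keys under `metadata` (the `author` key here is illustrative, and Tessl's actual frontmatter schema should be checked):

```yaml
# Before: a nonstandard top-level key triggers frontmatter_unknown_keys
# author: jane
# After: nonstandard keys nested under metadata
name: iterative-retrieval
metadata:
  author: jane
```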
79cc4e3