Pattern for progressively refining context retrieval to solve the subagent context problem
Overall score: 48

- Quality: 35% (Does it follow best practices?)
- Impact: Pending (No eval scenarios have been run)
- Validation: Passed (No known issues)
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./docs/zh-TW/skills/iterative-retrieval/SKILL.md`

Quality
Discovery: 7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is too abstract and jargon-heavy to be effective for skill selection. It reads more like an academic concept label than an actionable skill description, lacking concrete actions, natural trigger terms, and any explicit guidance on when to use it.
Suggestions
- Replace abstract language with concrete actions, e.g., 'Implements iterative context narrowing for subagent tasks by first retrieving broad context, then filtering to relevant sections, then extracting precise details.'
- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when a subagent needs to retrieve and refine context from a large codebase, knowledge base, or document set to answer a specific question.'
- Include natural keywords users might say, such as 'context window', 'subagent', 'retrieval', 'search refinement', 'narrowing results', or 'multi-step lookup'.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses abstract language like 'pattern for progressively refining context retrieval' without listing any concrete actions. It describes a concept rather than specific capabilities. | 1 / 3 |
| Completeness | The description weakly addresses 'what' (a pattern for refining context retrieval) and completely omits 'when': there is no 'Use when...' clause or any explicit trigger guidance. | 1 / 3 |
| Trigger Term Quality | The terms 'progressively refining context retrieval' and 'subagent context problem' are technical jargon that users would not naturally say. There are no natural trigger keywords a user would use. | 1 / 3 |
| Distinctiveness / Conflict Risk | The mention of 'subagent context problem' is somewhat niche and specific to a particular architectural concern, which provides some distinctiveness, but the overall vagueness could still cause confusion with other context-management or retrieval-related skills. | 2 / 3 |
| Total | | 5 / 12 (Passed) |
Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill clearly articulates a useful pattern for iterative context retrieval in multi-agent workflows, with strong workflow clarity through its 4-stage cycle and practical examples. However, the code examples are illustrative pseudocode rather than executable implementations, and the content could be more concise by trimming the problem statement and assuming Claude's familiarity with subagent limitations. The pattern is well-structured but would benefit from more actionable, copy-paste ready guidance.
Suggestions
- Make code examples more actionable by either providing real executable implementations or explicitly framing them as prompt templates/pseudocode patterns that Claude should adapt, rather than presenting them as JavaScript functions with undefined helper methods.
- Trim the 'Problem' section significantly: Claude already understands subagent context limitations. A single sentence framing the problem would suffice.
- Consider splitting the detailed phase-by-phase code examples into a separate reference file, keeping SKILL.md focused on the quick-start workflow and the practical examples.
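To make the first suggestion concrete: the skill's undefined helpers could be replaced with minimal working stand-ins. The sketch below shows one way an executable `scoreRelevance()` might look; the function name comes from the review above, but the keyword-overlap heuristic is an assumption for illustration, not the skill's actual logic.

```javascript
// Hypothetical sketch: a minimal, executable scoreRelevance().
// The name comes from the skill under review; the keyword-overlap
// heuristic is an assumption, not the skill's actual implementation.
function scoreRelevance(query, fileContent) {
  const terms = query.toLowerCase().split(/\W+/).filter(Boolean);
  const text = fileContent.toLowerCase();
  const hits = terms.filter((t) => text.includes(t)).length;
  return terms.length ? hits / terms.length : 0; // fraction of query terms found, 0..1
}

// Example: score a candidate file against a bug-fix query.
const score = scoreRelevance(
  "token refresh race condition",
  "Handles token refresh; a race condition can drop the refresh lock."
);
console.log(score); // 1 (all four query terms appear)
```

Even a toy implementation like this would let a reader run the example end to end, which is the gap the Actionability dimension flags.
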
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill includes some unnecessary verbosity: the problem section explains things Claude already understands about subagent context limitations. The code examples are illustrative but somewhat lengthy for what is essentially a conceptual pattern rather than executable code. The ASCII diagram adds visual clarity, but the overall content could be tightened. | 2 / 3 |
| Actionability | The code examples are pseudocode/illustrative rather than truly executable: functions like scoreRelevance(), explainRelevance(), and retrieveFiles() are undefined. The practical examples (bug fix, feature implementation) are helpful walkthroughs but are descriptive narratives rather than copy-paste ready implementations. The 'integration with agents' section provides a concrete prompt template, which is useful. | 2 / 3 |
| Workflow Clarity | The 4-stage cycle (DISPATCH → EVALUATE → REFINE → LOOP) is clearly sequenced with an explicit termination condition (max 3 cycles), a validation checkpoint (checking for sufficient high-relevance files and no critical gaps), and a feedback loop (refine and retry). The two practical examples demonstrate the workflow concretely with cycle-by-cycle progression. | 3 / 3 |
| Progressive Disclosure | The content is structured with clear sections and headers, but it's somewhat monolithic: the detailed code examples for each phase could potentially be split into separate reference files. The 'Related' section at the end references external resources, but one link appears to be a Twitter/X URL of questionable permanence. The skill is borderline long enough that some content could benefit from being externalized. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
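
The 4-stage cycle credited under Workflow Clarity can be sketched as a small control loop. This is a hedged illustration of the DISPATCH → EVALUATE → REFINE → LOOP shape with its max-3-cycles termination; the helper functions and data here are hypothetical stand-ins, not the skill's actual code.

```javascript
// Hypothetical sketch of the DISPATCH → EVALUATE → REFINE → LOOP cycle,
// with the skill's explicit termination condition (max 3 cycles).
// retrieveFiles() is a naive substring stand-in, not the skill's code.
const MAX_CYCLES = 3;

function retrieveFiles(query, corpus) {
  // DISPATCH: send the current query against the corpus.
  return corpus.filter((f) => f.text.toLowerCase().includes(query.toLowerCase()));
}

function iterativeRetrieval(queries, corpus, isSufficient) {
  let query = queries[0];
  let results = [];
  for (let cycle = 1; cycle <= MAX_CYCLES; cycle++) {
    results = retrieveFiles(query, corpus);                // DISPATCH
    if (isSufficient(results)) return { results, cycle };  // EVALUATE: checkpoint
    query = queries[cycle] ?? query;                       // REFINE: narrow the query
  }                                                        // LOOP until max cycles
  return { results, cycle: MAX_CYCLES };
}

// Example: a broad query fails the checkpoint, a refined query passes.
const corpus = [
  { path: "auth/session.js", text: "session token refresh logic" },
  { path: "auth/cookies.js", text: "cookie token storage" },
  { path: "auth/login.js", text: "login form handling" },
];
const { results, cycle } = iterativeRetrieval(
  ["token", "token refresh"],
  corpus,
  (r) => r.length === 1 // sufficient once narrowed to a single file
);
console.log(cycle, results[0].path); // 2 auth/session.js
```

The fixed cycle cap and the explicit sufficiency predicate are what make the loop terminate reliably, which is the property the review singles out.
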
Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 11 / 11 checks passed. No warnings or errors.
ae2cadd