Pattern for progressively refining context retrieval to solve the subagent context problem
- Does it follow best practices? Passed (no known issues)
- Impact: Pending (no eval scenarios have been run)
Optimize this skill with Tessl
npx tessl skill review --optimize ./docs/zh-TW/skills/iterative-retrieval/SKILL.md

Quality
Discovery
Score: 7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is too abstract and jargon-heavy to be effective for skill selection. It reads more like an academic concept label than an actionable skill description, lacking concrete actions, natural trigger terms, and any explicit guidance on when to use it.
Suggestions:

- Replace abstract language with concrete actions, e.g., 'Implements iterative context narrowing for subagent tasks by first retrieving broad context, then filtering to relevant sections, then extracting precise details.'
- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when a subagent needs to retrieve relevant context from a large codebase, or when context windows are insufficient for multi-step agent workflows.'
- Include natural keywords users might say, such as 'context window', 'agent context', 'retrieval', 'RAG', 'subagent', 'context management', or 'multi-step agent tasks'.
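Applied together, these suggestions might produce a description like the following. This is a sketch only: the `name` and `description` keys follow the common SKILL.md frontmatter convention, and the wording is illustrative, not the skill's actual metadata.

```yaml
name: iterative-retrieval
description: >
  Implements iterative context narrowing for subagent tasks: first retrieve
  broad context, then filter to relevant sections, then extract precise
  details. Use when a subagent needs to retrieve relevant context from a
  large codebase, or when context windows are insufficient for multi-step
  agent workflows. Keywords: context window, agent context, retrieval, RAG,
  subagent, context management.
```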
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses abstract language like 'pattern for progressively refining context retrieval' without listing any concrete actions. It describes a concept rather than specific capabilities. | 1 / 3 |
| Completeness | The description weakly addresses 'what' (a pattern for refining context retrieval) and completely omits 'when': there is no 'Use when...' clause or any explicit trigger guidance. | 1 / 3 |
| Trigger Term Quality | The terms 'progressively refining context retrieval' and 'subagent context problem' are technical jargon that users would not naturally say. There are no natural trigger keywords a user would use. | 1 / 3 |
| Distinctiveness / Conflict Risk | The mention of 'subagent context problem' is somewhat niche and specific to a particular architectural concern, which provides some distinctiveness, but the overall vagueness could still cause confusion with other retrieval or agent-related skills. | 2 / 3 |
| Total | | 5 / 12 (Passed) |
Implementation
Score: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill clearly explains a useful iterative retrieval pattern with good workflow structure and practical examples. Its main weaknesses are that the code examples are illustrative pseudocode rather than executable implementations, and the content includes some unnecessary explanation of the problem space that Claude would already understand. The workflow clarity is strong with explicit cycles, termination conditions, and two detailed walkthrough examples.
Suggestions:

- Replace illustrative pseudocode with truly executable code, or remove the code wrapper and present the steps as structured instructions: the current functions reference undefined helpers (scoreRelevance, retrieveFiles), which reduces actionability.
- Trim the 'Problem' section significantly: Claude already understands subagent context limitations, so a single sentence would suffice instead of the current bullet-point explanation.
- Move the detailed phase-by-phase code examples into a separate reference file, keeping SKILL.md focused on the concise workflow steps and the practical examples.
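As a concrete illustration of the first suggestion, here is a minimal, self-contained sketch of what an executable version of the retrieval loop could look like. The `scoreRelevance` helper is implemented as a naive keyword-overlap heuristic purely so the example runs; the function names, threshold, and cycle cap are assumptions for illustration, not the skill's actual API.

```javascript
// Hypothetical stand-in for the skill's undefined scoreRelevance helper:
// fraction of query terms that appear in the text (0..1). A real skill
// would delegate scoring to the agent or an embedding model.
function scoreRelevance(query, text) {
  const terms = query.toLowerCase().split(/\s+/);
  const body = text.toLowerCase();
  const hits = terms.filter((t) => body.includes(t)).length;
  return hits / terms.length;
}

// EVALUATE -> REFINE -> LOOP, with an explicit termination condition,
// mirroring the cycle described in the skill.
function iterativeRetrieve(query, files, { maxCycles = 3, threshold = 0.5 } = {}) {
  let candidates = files;
  for (let cycle = 1; cycle <= maxCycles; cycle++) {
    // EVALUATE: score every candidate against the current query
    const scored = candidates.map((f) => ({ ...f, score: scoreRelevance(query, f.text) }));
    const highRelevance = scored.filter((f) => f.score >= threshold);
    // Terminate once enough strong matches are found
    if (highRelevance.length >= 3) return highRelevance;
    // REFINE: keep the top-scoring half (at least 3) and loop again
    scored.sort((a, b) => b.score - a.score);
    candidates = scored.slice(0, Math.max(3, Math.ceil(scored.length / 2)));
  }
  return candidates;
}
```

Even a toy implementation like this makes the dispatch/evaluate/refine loop copy-paste runnable, which is the gap the suggestion identifies.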
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill has some unnecessary verbosity: the problem section explains things Claude already understands about subagent context limitations. The code examples are illustrative but somewhat lengthy for what is essentially a conceptual pattern rather than executable code. The ASCII diagram adds visual clarity but takes tokens. | 2 / 3 |
| Actionability | The code examples are pseudocode/illustrative JavaScript rather than truly executable code; functions like scoreRelevance(), explainRelevance(), and retrieveFiles() are undefined. The practical examples (bug fix, feature implementation) are helpful walkthroughs but are descriptive narratives rather than copy-paste ready implementations. The 'integration with agents' section provides a concrete prompt template, which is useful. | 2 / 3 |
| Workflow Clarity | The 4-stage cycle (DISPATCH → EVALUATE → REFINE → LOOP) is clearly sequenced with an explicit termination condition (max 3 cycles) and a validation checkpoint (checking if highRelevance.length >= 3 && !hasCriticalGaps). The two practical examples demonstrate the feedback loop clearly, with cycle-by-cycle progression showing refinement and stopping criteria. | 3 / 3 |
| Progressive Disclosure | The content is structured with clear sections and headers, but it is somewhat monolithic: the detailed code examples for each phase could be in a separate reference file. The 'Related' section references external resources, but one link appears to be a Twitter/X URL of questionable permanence. The skill is moderately long (~150 lines of content) and could benefit from splitting implementation details into a separate file. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
Validation
Score: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
Version: 5df943e
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.