
tdg-personal/iterative-retrieval

Pattern for progressively refining context retrieval to solve the subagent context problem

Quality: 47% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security, by Snyk: Passed (No known issues)


Quality

Discovery: 7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is too abstract and jargon-heavy to be effective for skill selection. It reads more like an academic concept label than an actionable skill description, lacking concrete actions, natural trigger terms, and any explicit guidance on when to use it.

Suggestions:

- Replace abstract language with concrete actions, e.g., 'Implements iterative context narrowing for subagent tasks by first retrieving broad context, then filtering to relevant sections, then extracting precise details.'
- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when a subagent needs to retrieve and refine context from large codebases, documentation, or knowledge bases to answer specific questions.'
- Include natural keywords users might say, such as 'context window', 'RAG', 'retrieval', 'subagent', 'knowledge lookup', 'search refinement'.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description uses abstract language like 'pattern for progressively refining context retrieval' without listing any concrete actions. It describes a concept rather than specific capabilities. | 1 / 3 |
| Completeness | The description weakly addresses 'what' (a pattern for refining context retrieval) and completely omits 'when': there is no 'Use when...' clause or any explicit trigger guidance. | 1 / 3 |
| Trigger Term Quality | The terms 'progressively refining context retrieval' and 'subagent context problem' are technical jargon that users would not naturally say. There are no natural trigger keywords a user would use. | 1 / 3 |
| Distinctiveness / Conflict Risk | The mention of the 'subagent context problem' is somewhat niche and specific to a particular architectural concern, which provides some distinctiveness, but the overall vagueness could still cause confusion with other retrieval or agent-related skills. | 2 / 3 |
| Total | | 5 / 12 |

Passed

Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill clearly communicates a useful multi-agent retrieval pattern with strong workflow clarity and good practical examples. Its main weaknesses are that the code examples are illustrative pseudocode rather than executable implementations, and the content is somewhat verbose for what could be a more concise pattern description. The document would benefit from either making the code truly executable or trimming it to focus on the conceptual pattern with concrete agent prompt templates.

Suggestions:

- Make code examples executable, or replace them with concrete agent prompt templates that Claude can directly use, rather than pseudocode with undefined helper functions like `scoreRelevance` and `retrieveFiles`.
- Trim the 'Problem' section: Claude already understands context window limitations and subagent constraints; a single sentence would suffice.
- Consider splitting the detailed phase-by-phase code examples into a separate reference file, keeping SKILL.md focused on the pattern overview, practical examples, and integration instructions.
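To illustrate the first suggestion, a runnable stand-in for one of the skill's undefined helpers could be as simple as a keyword-overlap score. This is a hypothetical sketch, not the skill's actual implementation; the name `scoreRelevance` is taken from the pseudocode quoted above, and a real skill would more likely use embeddings:

```typescript
// Hypothetical stand-in for the skill's undefined `scoreRelevance` helper.
// Scores a document against a query by the fraction of query terms that
// appear in the document (a crude lexical-overlap proxy for relevance).
function tokenize(text: string): Set<string> {
  return new Set(text.toLowerCase().match(/[a-z0-9]+/g) ?? []);
}

function scoreRelevance(query: string, document: string): number {
  const queryTerms = tokenize(query);
  const docTerms = tokenize(document);
  if (queryTerms.size === 0) return 0;
  let overlap = 0;
  for (const term of queryTerms) {
    if (docTerms.has(term)) overlap++;
  }
  return overlap / queryTerms.size; // 0..1, where 1 = all query terms found
}
```

Even a trivial body like this would let the skill's examples execute end to end, which is the gap the Actionability score below calls out.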

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is moderately efficient but includes some unnecessary verbosity. The 'Problem' section explaining what subagents don't know is somewhat obvious to Claude. The ASCII diagram adds visual clarity, but the overall content could be tightened: the code examples are illustrative pseudocode that could be more compact while conveying the same pattern. | 2 / 3 |
| Actionability | The code examples are pseudocode rather than executable: functions like `scoreRelevance`, `explainRelevance`, and `retrieveFiles` are undefined placeholders. The practical examples (bug fix, feature implementation) are helpful walkthroughs, but the agent integration section is a brief markdown snippet rather than a concrete, copy-paste-ready implementation. The pattern is clearly described but not directly executable. | 2 / 3 |
| Workflow Clarity | The 4-phase loop (DISPATCH → EVALUATE → REFINE → LOOP) is clearly sequenced with explicit stopping conditions (max 3 cycles, relevance >= 0.7, `hasCriticalGaps` check). The two practical examples demonstrate the feedback loop in action, showing how evaluation drives refinement and when to stop. The workflow includes clear validation checkpoints at each cycle. | 3 / 3 |
| Progressive Disclosure | The content is well structured with clear sections and headers, but it is a fairly long monolithic document (~150 lines of substantive content) that could benefit from splitting the detailed examples or code patterns into separate reference files. The 'Related' section references external resources, but the main content is all inline rather than appropriately split. | 2 / 3 |
| Total | | 9 / 12 |

Passed

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 |

Passed

Reviewed
