
# jbvc/iterative-retrieval

Pattern for progressively refining context retrieval to solve the subagent context problem

| Check | Result | Notes |
| --- | --- | --- |
| Quality | 48% | Does it follow best practices? |
| Impact | Pending | No eval scenarios have been run |
| Security (by Snyk) | Passed | No known issues |


## Quality

### Discovery: 7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is too abstract and jargon-heavy to be effective for skill selection. It reads more like an academic concept title than an actionable skill description, lacking concrete actions, natural trigger terms, and any explicit guidance on when Claude should select it.

#### Suggestions

- Replace abstract language with concrete actions, e.g., 'Implements iterative context narrowing for subagent workflows by querying broad context first, then refining based on relevance signals.'
- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when building multi-agent systems that need to retrieve and pass relevant context between agents, or when subagents lack sufficient context to complete tasks.'
- Include natural keywords users might say, such as 'agent context', 'RAG', 'context window', 'multi-agent', 'context passing', or 'retrieval refinement'.
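Taken together, the suggestions above could yield frontmatter along these lines in the skill's SKILL.md. The wording below is illustrative, not the skill's actual metadata:

```yaml
name: iterative-retrieval
description: >
  Implements iterative context narrowing for subagent workflows: query broad
  context first, score the results, then refine the query based on relevance
  signals. Use when building multi-agent systems that need to retrieve and
  pass relevant context between agents, or when subagents lack sufficient
  context to complete tasks. Keywords: agent context, RAG, context window,
  multi-agent, context passing, retrieval refinement.
```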

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description uses abstract language like 'progressively refining context retrieval' and 'subagent context problem' without listing any concrete actions. It describes a pattern concept rather than specific capabilities. | 1 / 3 |
| Completeness | The description vaguely addresses 'what' (a pattern for refining context retrieval) but provides no 'when' clause or explicit trigger guidance. Both aspects are very weak. | 1 / 3 |
| Trigger Term Quality | The terms 'subagent context problem' and 'progressively refining context retrieval' are technical jargon that users would not naturally say. There are no natural keywords a user would use when needing this skill. | 1 / 3 |
| Distinctiveness Conflict Risk | The mention of 'subagent context problem' is somewhat niche and specific to a particular domain, which reduces conflict risk slightly, but the vagueness of 'context retrieval' could overlap with many retrieval-related skills. | 2 / 3 |
| **Total** | | 5 / 12 |

Passed

### Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill clearly articulates a useful pattern for iterative context retrieval in multi-agent workflows, with excellent workflow clarity through its 4-phase loop structure and practical examples. However, the code examples are illustrative pseudocode rather than executable implementations, reducing actionability. The content could be more concise by trimming explanatory sections that describe what Claude would already understand about subagent context limitations.

#### Suggestions

- Replace placeholder functions (`scoreRelevance`, `retrieveFiles`, etc.) with concrete implementations, or clearly specify the actual tools/APIs Claude should use to implement this pattern.
- Trim the 'Problem' section significantly; Claude understands context window limitations, and a single sentence would suffice instead of the current bullet-point enumeration.
- Move the detailed code examples for each phase into a separate IMPLEMENTATION.md file, keeping SKILL.md as a concise overview with the diagram, the agent prompt template, and best practices.
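As a sketch of what 'concrete implementations' could look like, here is one hypothetical way to realize the `retrieveFiles` and `scoreRelevance` placeholders using plain filesystem term matching. The behavior is assumed, since the skill only names these functions without defining them:

```python
from pathlib import Path


def retrieve_files(query: str, root: str = ".", limit: int = 10) -> list[Path]:
    """Hypothetical stand-in for the skill's retrieveFiles placeholder:
    return files whose contents match the query terms, best matches first."""
    terms = [t.lower() for t in query.split()]
    hits = []
    for path in Path(root).rglob("*.py"):
        try:
            text = path.read_text(errors="ignore").lower()
        except OSError:
            continue  # skip unreadable files
        score = sum(text.count(t) for t in terms)
        if score:
            hits.append((score, path))
    hits.sort(key=lambda pair: pair[0], reverse=True)
    return [path for _, path in hits[:limit]]


def score_relevance(path: Path, query: str) -> float:
    """Hypothetical stand-in for scoreRelevance: crude term-frequency
    score normalized to the range 0..1."""
    terms = [t.lower() for t in query.split()]
    text = path.read_text(errors="ignore").lower()
    matches = sum(text.count(t) for t in terms)
    return min(1.0, matches / 10)
```

In a real agent harness these would more likely delegate to the agent's search tools (grep, code search, embeddings) rather than scan the filesystem directly.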

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is moderately efficient but includes some unnecessary verbosity. The 'Problem' section explaining what subagents don't know is somewhat obvious to Claude. The ASCII diagram adds visual clarity but the overall content could be tightened: the code examples are illustrative but not executable in any real context, and some explanatory text restates what the code already shows. | 2 / 3 |
| Actionability | The code examples are illustrative pseudocode rather than executable implementations; functions like `scoreRelevance`, `explainRelevance`, `retrieveFiles` are undefined placeholders. The practical examples (bug fix, feature implementation) are helpful walkthroughs but remain abstract. The 'Integration with Agents' section provides a usable prompt template, but overall the skill describes a pattern rather than providing copy-paste ready implementation. | 2 / 3 |
| Workflow Clarity | The 4-phase loop is clearly sequenced with an explicit termination condition (max 3 cycles), a clear feedback loop (evaluate → refine → loop), and stopping criteria (3 high-relevance files with no critical gaps). The two practical examples demonstrate the workflow concretely with cycle-by-cycle breakdowns showing how refinement works in practice. | 3 / 3 |
| Progressive Disclosure | The content is well-structured with clear sections and headers, but it's a fairly long monolithic document (~150 lines of substantive content) that could benefit from splitting detailed examples or the code implementation into separate files. The 'Related' section references external resources but the links appear to be non-functional or speculative (an X/Twitter link with a suspicious URL, undefined skill references). | 2 / 3 |
| **Total** | | 9 / 12 |
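For reference, the 4-phase loop the reasoning above describes (retrieve, score, evaluate against stopping criteria, refine) can be sketched as follows. The max-3-cycles termination and 3-high-relevance-files criterion come from the skill; all helper callables and other parameter values are assumptions:

```python
def iterative_retrieval(task: str, retrieve, score, refine,
                        max_cycles: int = 3,
                        needed: int = 3,
                        threshold: float = 0.7) -> list:
    """Sketch of the 4-phase loop: retrieve -> score -> evaluate against
    stopping criteria -> refine the query and loop. The retrieve, score,
    and refine callables are hypothetical hooks supplied by the caller."""
    query = task
    kept: dict = {}
    for _cycle in range(max_cycles):       # explicit termination: max 3 cycles
        candidates = retrieve(query)       # Phase 1: broad retrieval
        for item in candidates:            # Phase 2: score relevance
            kept[item] = max(kept.get(item, 0.0), score(item, query))
        high = [f for f, s in kept.items() if s >= threshold]
        if len(high) >= needed:            # Phase 3: stopping criteria met
            return high[:needed]
        query = refine(query, kept)        # Phase 4: refine query and loop
    # criteria never met: fall back to the best-scored items found so far
    return sorted(kept, key=kept.get, reverse=True)[:needed]
```

The feedback loop lives in `refine`, which narrows the query using the relevance signals accumulated in `kept`; an agent-based variant would have a subagent propose the refined query instead.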

Passed

### Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

11 / 11 checks passed.

Validation for skill structure

No warnings or errors.
