
iterative-retrieval

Pattern for progressively refining context retrieval to solve the subagent context problem


Quality: 35% (Does it follow best practices?)

Impact: No eval scenarios have been run

Security (by Snyk): Passed, no known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./docs/zh-TW/skills/iterative-retrieval/SKILL.md

Quality

Discovery

7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is too abstract and jargon-heavy to be effective for skill selection. It reads more like an academic concept label than an actionable skill description, lacking concrete actions, natural trigger terms, and any explicit guidance on when to use it.

Suggestions

Replace abstract language with concrete actions, e.g., 'Implements iterative context narrowing for subagent workflows by starting with broad searches and progressively filtering results to find relevant information.'

Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when a subagent needs to retrieve specific context from a large codebase, when search results are too broad, or when multi-step retrieval refinement is needed.'

Include natural keywords users might say, such as 'search refinement', 'finding relevant code', 'narrowing search results', 'subagent search', or 'context window management'.

Dimension scores

Specificity: 1 / 3

The description uses abstract language like 'pattern for progressively refining context retrieval' without listing any concrete actions. It describes a concept rather than specific capabilities.

Completeness: 1 / 3

The description weakly addresses 'what' (a pattern for refining context retrieval) and completely lacks any 'when' clause or explicit trigger guidance for when Claude should use this skill.

Trigger Term Quality: 1 / 3

The terms 'progressively refining context retrieval' and 'subagent context problem' are technical jargon that users would not naturally say. There are no natural trigger keywords a user would use.

Distinctiveness / Conflict Risk: 2 / 3

The mention of 'subagent context problem' is somewhat niche and specific to a particular domain, which reduces conflict risk slightly, but the overall vagueness of 'context retrieval' could overlap with many retrieval-related skills.

Total: 5 / 12 (Passed)

Implementation

62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill clearly communicates a useful iterative retrieval pattern with good workflow structure and practical examples. Its main weaknesses are that the code examples are conceptual rather than executable (relying on undefined helper functions), and the problem statement section over-explains concepts Claude already understands. The content would benefit from being more concrete about actual tool usage and trimming explanatory padding.

Suggestions

Replace conceptual pseudocode with executable examples or explicitly state these are pattern templates to be adapted, showing at least one concrete implementation using actual tools (e.g., grep, find, or a specific retrieval API).

Trim the 'Problem' section significantly—Claude already understands subagent context limitations; a single sentence framing the problem is sufficient.

Make the 'Integration with Agents' section more actionable by providing a complete, copy-paste-ready prompt template rather than a simplified markdown snippet.

Dimension scores

Conciseness: 2 / 3

The skill includes some unnecessary verbosity: the problem section explains things Claude already understands about subagent context limitations, and the ASCII diagram adds tokens without much clarity. However, the code examples and practical examples are reasonably efficient.

Actionability: 2 / 3

The code examples are illustrative pseudocode/conceptual JavaScript rather than truly executable, copy-paste-ready code. Functions like `scoreRelevance`, `explainRelevance`, and `retrieveFiles` are undefined abstractions. The practical examples (bug fix, feature implementation) are helpful walkthroughs but are descriptive narratives rather than executable guidance.
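As one illustration of closing that gap, a helper like `scoreRelevance` could be given a small executable definition. The signature and scoring rule below are assumptions for the sketch, not the skill's actual code: score a file by the fraction of task keywords (words of three or more letters) its content mentions.

```javascript
// Hypothetical executable stand-in for the skill's undefined `scoreRelevance`:
// the fraction of task keywords that appear in the file's content.
function scoreRelevance(taskDescription, fileContent) {
  const keywords = taskDescription.toLowerCase().match(/[a-z]{3,}/g) || [];
  if (keywords.length === 0) return 0;
  const text = fileContent.toLowerCase();
  const matched = keywords.filter((k) => text.includes(k));
  return matched.length / keywords.length; // 0 = no keywords found, 1 = all found
}

// "login" and "token" match; "fix" and "bug" do not.
console.log(scoreRelevance("fix login token bug", "const token = login(user);")); // 0.5
```

Even a crude definition like this lets the skill's loop run end to end instead of hand-waving at an abstraction.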

Workflow Clarity: 3 / 3

The 4-stage cycle (DISPATCH → EVALUATE → REFINE → LOOP) is clearly sequenced with explicit stopping conditions (3 high-relevance files, no critical gaps) and a max-cycles cap. The two practical examples demonstrate the feedback loop clearly, showing how evaluation results drive refinement.
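The cycle described above can be sketched as a plain loop. Here `search` and `refineQuery` are hypothetical stand-ins for the skill's retrieval and refinement steps, and the 0.7 relevance threshold is an assumption; only the "3 high-relevance files" target and max-cycles cap come from the review's summary.

```javascript
// Sketch of the DISPATCH -> EVALUATE -> REFINE -> LOOP cycle with explicit
// stopping conditions. `search` and `refineQuery` are injected stand-ins.
function iterativeRetrieve(query, search, refineQuery, maxCycles = 3) {
  let best = [];
  let current = query;
  for (let cycle = 0; cycle < maxCycles; cycle++) {
    const results = search(current);                        // DISPATCH a retrieval pass
    const relevant = results.filter((r) => r.score >= 0.7); // EVALUATE relevance
    if (relevant.length > best.length) best = relevant;
    if (best.length >= 3) break;                            // stop: enough high-relevance files
    current = refineQuery(current, results);                // REFINE the query, then LOOP
  }
  return best;
}

// Usage with stub search/refine functions: the first pass finds one relevant
// file, the refined query finds three, and the loop stops.
const search = (q) =>
  q.includes("narrowed")
    ? [{ path: "a.js", score: 0.9 }, { path: "b.js", score: 0.8 }, { path: "c.js", score: 0.7 }]
    : [{ path: "a.js", score: 0.9 }];
const refineQuery = (q) => q + " narrowed";
console.log(iterativeRetrieve("auth bug", search, refineQuery).length); // 3
```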

Progressive Disclosure: 2 / 3

The content is well-structured with clear sections, but it's a monolithic document (~150 lines) that could benefit from splitting the detailed examples or the integration guide into separate files. The references at the bottom point to external resources, but one is a Twitter/X link of questionable stability, and the other references are vague ('continuous-learning skill', agent definitions directory).

Total: 9 / 12 (Passed)

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: affaan-m/everything-claude-code (Reviewed)

