
agent-researcher

Agent skill for researcher - invoke with $agent-researcher

40

Quality: 7% (Does it follow best practices?)

Impact: 98%, 1.88x (Average score across 3 eval scenarios)

Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./.agents/skills/agent-researcher/SKILL.md

Quality

Discovery: 0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is critically deficient across all dimensions. It provides no information about what the skill does, when it should be used, or what distinguishes it from other skills. It reads more like a label than a description and would be essentially useless for Claude when selecting among multiple available skills.

Suggestions

Add specific concrete actions the skill performs, e.g., 'Searches the web for information, summarizes academic papers, compiles research findings into structured reports.'

Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks to research a topic, find sources, look up information, or compile a literature review.'

Replace the generic term 'researcher' with a more specific domain or niche to reduce conflict risk with other skills, e.g., 'academic research assistant' or 'web research and fact-checking agent.'
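Taken together, these suggestions point to a frontmatter description like the following. This is an illustrative sketch, not the maintainer's actual text; the wording is hypothetical and assumes the standard SKILL.md frontmatter fields (`name`, `description`):

```yaml
---
name: agent-researcher
# Hypothetical rewrite combining the three suggestions above:
# concrete actions, an explicit "Use when..." clause, and a
# narrower niche than the generic "researcher".
description: >-
  Researches codebases and external sources: searches the web,
  summarizes findings, and compiles research into structured
  reports. Use when the user asks to research a topic, find
  sources, look up information, or compile a literature review.
---
```

A description in this shape answers both "what does this do" and "when should Claude use it", and carries natural trigger terms a user would actually say.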

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description provides no concrete actions whatsoever. 'Agent skill for researcher' is entirely vague and does not describe what the skill actually does. | 1 / 3 |
| Completeness | Neither 'what does this do' nor 'when should Claude use it' is answered. There is no explanation of capabilities and no 'Use when...' clause or equivalent trigger guidance. | 1 / 3 |
| Trigger Term Quality | The only potentially useful keyword is 'researcher,' which is overly generic. There are no natural terms a user would say when needing this skill, aside from the invocation command '$agent-researcher.' | 1 / 3 |
| Distinctiveness Conflict Risk | 'Researcher' is extremely broad and could overlap with any skill involving research, analysis, information gathering, or data lookup. There is nothing to distinguish it from other skills. | 1 / 3 |
| Total | | 4 / 12 |

Passed

Implementation: 14%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is significantly over-engineered and verbose for what it delivers. It explains many concepts Claude already understands (what code analysis is, how grep works, what dependencies are) while failing to provide a clear, sequenced workflow with validation steps. The MCP tool examples provide some value but use questionable syntax, and the overall structure would benefit greatly from aggressive trimming and splitting into referenced files.

Suggestions

Cut the content by at least 60% — remove 'Core Responsibilities', 'Best Practices', and 'Collaboration Guidelines' sections which describe things Claude already knows, and focus only on the specific MCP tool calls and output format that are unique to this skill.

Add a clear sequenced workflow with validation: e.g., '1. Gather → 2. Analyze → 3. Validate findings against code → 4. Store to memory → 5. Verify memory write succeeded' with explicit checkpoints.

Fix MCP tool call examples to use actual valid invocation syntax rather than JavaScript object literals, and verify these are real tool signatures.

Extract the research output YAML template and MCP tool reference into separate bundle files, keeping SKILL.md as a concise overview with pointers.
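A SKILL.md trimmed along these lines might be structured as follows. This is a sketch under assumptions, not the skill's actual content; the section names, workflow steps, and reference file paths (`references/output-format.yaml`, `references/mcp-tools.md`) are all hypothetical:

```markdown
---
name: agent-researcher
description: Researches codebases and compiles structured reports. Use when the user asks to research a topic or compile findings.
---

# Researcher

## Workflow
1. Gather: search the codebase and web for relevant material.
2. Analyze: extract patterns and key findings.
3. Validate: check each finding against the actual code.
4. Store: write findings to memory via the MCP memory tool.
5. Verify: confirm the memory write succeeded before reporting.

## References
- [Output format](references/output-format.yaml): the research report template
- [MCP tools](references/mcp-tools.md): valid tool invocation signatures
```

Keeping SKILL.md to a concise overview with pointers lets the agent load the heavy templates and tool references only when a step actually needs them.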

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose with extensive explanation of concepts Claude already knows (what code analysis is, what pattern recognition means, basic grep/glob usage). The YAML output format template, collaboration guidelines, and best practices sections are largely filler that don't add actionable value. Much of this could be cut by 70%+. | 1 / 3 |
| Actionability | Contains some concrete code examples (grep patterns, MCP tool calls, search strategies) but many are pseudocode-like or illustrative rather than truly executable. The MCP tool integration examples use JavaScript object syntax that isn't valid function call syntax. The research output format is a template but not tied to specific execution steps. | 2 / 3 |
| Workflow Clarity | Despite being a multi-step research process, there are no clear validation checkpoints or feedback loops. The methodology sections (Information Gathering, Pattern Analysis, etc.) are described as abstract lists rather than sequenced workflows. There's no guidance on when research is 'complete enough' or how to verify findings before sharing them. | 1 / 3 |
| Progressive Disclosure | Monolithic wall of text with no references to external files and no bundle files to support it. All content is inline regardless of complexity. The MCP tool examples, output format templates, and search strategies could all be split into separate reference files. The document is well over 100 lines with no navigation structure beyond headers. | 1 / 3 |
| Total | | 5 / 12 |

Passed

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: ruvnet/claude-flow (Reviewed)

