
agent-researcher

Agent skill for researcher - invoke with $agent-researcher

Quality: 7%
Does it follow best practices?

Impact: 98% (1.88x)
Average score across 3 eval scenarios

Security by Snyk: Passed
No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./.agents/skills/agent-researcher/SKILL.md

Quality

Discovery

0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is critically underspecified. It provides no information about what the skill does, when it should be used, or what domain it operates in. The only content is a generic label ('researcher') and an invocation command, which is insufficient for Claude to make informed skill selection decisions.

Suggestions

Add specific concrete actions the skill performs, e.g., 'Searches the web for information, summarizes academic papers, compiles research findings into structured reports.'

Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks to research a topic, find sources, look up information, or compile findings.'

Replace the generic 'researcher' label with domain-specific language that distinguishes this skill from other potentially overlapping skills (e.g., web search, data analysis, literature review).
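
Applying these suggestions, a sharpened SKILL.md frontmatter might look like the sketch below. This is illustrative only: the capability list is inferred from the Implementation review's mentions of grep searches, MCP tool calls, and YAML output, not from the skill's actual behavior.

```yaml
---
name: agent-researcher
description: >
  Searches a codebase with grep and glob, analyzes code patterns and
  import relationships via MCP tools, and compiles findings into a
  structured YAML research report. Use when the user asks to research
  a topic in the repository, trace dependencies, map code patterns, or
  gather and summarize findings across files.
---
```

A description like this answers both "what does it do" and "when should Claude use it", and its trigger terms (research, trace dependencies, map patterns) are phrases a user would actually say.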

Dimension / Reasoning / Score

Specificity

The description provides no concrete actions whatsoever. 'Agent skill for researcher' is entirely vague and does not describe what the skill actually does.

1 / 3

Completeness

Neither 'what does this do' nor 'when should Claude use it' is answered. There is no description of capabilities and no explicit trigger guidance or 'Use when...' clause.

1 / 3

Trigger Term Quality

The only potentially relevant keyword is 'researcher', which is overly generic. There are no natural terms a user would say when needing this skill, aside from the invocation command '$agent-researcher'.

1 / 3

Distinctiveness Conflict Risk

'Researcher' is extremely generic and could overlap with any skill involving research, data gathering, analysis, or information retrieval. There are no distinct triggers to differentiate it.

1 / 3

Total: 4 / 12


Implementation

14%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is heavily padded with generic advice and conceptual explanations that Claude already understands. It describes what a researcher should do rather than providing precise, executable instructions for how to do it. The content would be significantly improved by cutting 60-70% of the text and focusing on the specific MCP tool invocations and output format that are unique to this system.

Suggestions

Remove the 'Core Responsibilities', 'Best Practices', and 'Collaboration Guidelines' sections entirely — these describe obvious behaviors Claude already knows.

Convert the 'Research Methodology' section into a concrete numbered workflow with explicit validation steps (e.g., 'After gathering files, verify coverage by checking file count against glob results').

Move the YAML output format template and MCP tool integration examples into separate referenced files to reduce the main skill's token footprint.

Make code examples fully executable — replace the mixed bash/description blocks with actual runnable commands, and clarify whether MCP tool calls use a specific syntax or are pseudocode.
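
The last two suggestions can be combined: a workflow step that is both runnable and self-validating. The sketch below shows what replacing a mixed bash/description block might look like; the directory layout and file patterns are examples, not taken from the skill itself.

```shell
# Hypothetical rewrite of a "gather files" step as runnable commands
# with an explicit validation checkpoint (example data, example globs).
set -eu
workdir=$(mktemp -d)
printf 'alpha\n' > "$workdir/a.md"
printf 'beta\n'  > "$workdir/b.md"

# Step 1: glob pass — how many files should be covered.
expected=$(find "$workdir" -name '*.md' | wc -l | tr -d ' ')

# Step 2: validation checkpoint — confirm every globbed file was read.
gathered=0
for f in "$workdir"/*.md; do
  [ -r "$f" ] && gathered=$((gathered + 1))
done

echo "coverage: $gathered / $expected"
rm -rf "$workdir"
```

Every line here is executable as written, and the checkpoint gives the agent a concrete "complete enough" signal instead of abstract guidance like "be thorough".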

Dimension / Reasoning / Score

Conciseness

Extremely verbose with extensive explanation of concepts Claude already knows (what code analysis is, what pattern recognition means, generic research methodology). The 'Core Responsibilities' section is pure description that adds no actionable value. The output format template, search strategies, and best practices sections are padded with obvious advice like 'Be Thorough' and 'Think Critically'.

1 / 3

Actionability

Contains some concrete examples like grep commands and MCP tool invocations with JSON payloads, but much of the content is abstract guidance ('Use multiple search strategies', 'Track import statements'). The bash code blocks mix actual commands with non-executable descriptions. The MCP tool examples use JavaScript-like syntax that isn't clearly executable.

2 / 3

Workflow Clarity

Despite being a multi-step research process, there are no clear validation checkpoints or feedback loops. The numbered sections (Information Gathering, Pattern Analysis, etc.) describe categories of activity rather than a sequenced workflow. There's no guidance on when research is 'complete enough' or how to handle dead ends.

1 / 3

Progressive Disclosure

Monolithic wall of text with no references to external files. All content is inline including detailed YAML output templates, multiple code blocks, and extensive MCP integration examples. The content would benefit greatly from splitting the output format, MCP integration, and search strategies into separate reference files.

1 / 3

Total: 5 / 12
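
Concretely, the progressive-disclosure fix amounts to splitting the monolithic SKILL.md into referenced files, for example (file names hypothetical):

```
.agents/skills/agent-researcher/
├── SKILL.md                        # description + numbered workflow only
└── references/
    ├── output-format.md            # YAML report template
    ├── mcp-tools.md                # MCP tool invocation examples
    └── search-strategies.md        # grep/glob search strategies
```

The main SKILL.md then stays small and points to each reference file only when that step of the workflow needs it, reducing the token footprint on every invocation.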


Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: ruvnet/ruflo
Reviewed


Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.