# Skill review: `agent-researcher`

**Description under review:** "Agent skill for researcher - invoke with $agent-researcher"
**Quality: 7%** — Does it follow best practices?
**Impact: 98%** — 1.88x average score across 3 eval scenarios
Passed · No known issues

Optimize this skill with Tessl: `npx tessl skill review --optimize ./.agents/skills/agent-researcher/SKILL.md`

## Quality
## Discovery — 0%

*Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.*
This description is critically deficient across all dimensions. It provides no information about what the skill does, when it should be used, or what distinguishes it from other skills. It reads more like a label than a functional description that Claude could use for skill selection.
**Suggestions**

- Describe specific concrete actions the researcher skill performs (e.g., 'Searches academic databases, summarizes papers, compiles literature reviews, extracts citations').
- Add an explicit 'Use when...' clause with natural trigger terms (e.g., 'Use when the user asks to research a topic, find sources, review literature, or gather information from multiple sources').
- Remove the invocation syntax ('invoke with $agent-researcher') from the description and replace it with capability and trigger information that helps Claude select the right skill.
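Applied together, these suggestions amount to a description rewrite. A sketch of what that might look like — the exact frontmatter fields depend on the skill spec in use, and the listed capabilities are inferred from the skill's researcher role and its memory/MCP features rather than quoted from its body:

```yaml
---
name: agent-researcher
description: >
  Investigates the codebase and project documentation: searches for
  relevant code, traces dependencies, summarizes findings, and stores
  structured research results in shared memory. Use when the user asks
  to research a topic, investigate how a feature works, gather context
  from multiple files, or review prior findings before making changes.
---
```

Note how the rewrite leads with concrete actions, ends in an explicit 'Use when...' clause, and drops the `$agent-researcher` invocation syntax entirely.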
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description provides no concrete actions whatsoever. 'Agent skill for researcher' is entirely vague with no indication of what the skill actually does. | 1 / 3 |
| Completeness | Neither 'what does this do' nor 'when should Claude use it' is answered. There is no explanation of capabilities and no 'Use when...' clause or equivalent trigger guidance. | 1 / 3 |
| Trigger Term Quality | The only potentially relevant keyword is 'researcher', which is overly generic. The description includes '$agent-researcher', which is a command invocation, not a natural user trigger term. | 1 / 3 |
| Distinctiveness / Conflict Risk | 'Researcher' is extremely broad and could conflict with any skill involving research, information gathering, analysis, or investigation. There are no distinct triggers to differentiate it. | 1 / 3 |
| **Total** | | **4 / 12** — Passed |
## Implementation — 14%

*Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.*
This skill is heavily padded with generic advice and concepts Claude already understands (how to search code, what dependencies are, 'think critically'). While the MCP tool integration examples provide some concrete value, the majority of the content describes obvious research practices without adding novel, project-specific guidance. The lack of a clear workflow with validation steps and the monolithic structure significantly reduce its effectiveness.
**Suggestions**

- Cut the content by 60-70%: remove generic research advice ('Be Thorough', 'Think Critically', etc.) and obvious methodology steps, and focus only on the specific MCP tool syntax and memory coordination patterns that are unique to this system.
- Add a clear sequential workflow with decision points: e.g., 1. Check memory for prior research → 2. If found, build on it → 3. Gather via specific tools → 4. Validate findings against code → 5. Store structured results in memory.
- Extract the YAML output format template and MCP tool reference into separate files (e.g., RESEARCH_OUTPUT_FORMAT.md, MCP_TOOLS.md) and reference them from a concise overview.
- Replace the abstract 'Search Strategies' section with 2-3 concrete, copy-paste-ready examples tied to specific common research tasks in this project's context.
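The workflow suggestion above can be sketched as a skill-body section. The file names (MCP_TOOLS.md, RESEARCH_OUTPUT_FORMAT.md) follow the extraction suggestion and are illustrative, not confirmed parts of the current skill:

```markdown
## Research workflow

1. Check memory for prior research on this topic; if found, build on it
   instead of repeating completed searches.
2. Gather information using the MCP tools documented in MCP_TOOLS.md.
3. Validate each finding against the actual code before recording it.
4. On a dead end, note the failed avenue and move to the next source;
   stop when sources are exhausted or the question is answered.
5. Store structured results in memory per RESEARCH_OUTPUT_FORMAT.md.
```

A structure like this supplies the sequential decision points, validation checkpoint, and stopping condition that the Workflow Clarity dimension flags as missing.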
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose, with extensive content Claude already knows: how to grep, how to read files, what 'pattern recognition' means, and generic best practices like 'Be Thorough' and 'Think Critically'. The research methodology section describes basic investigation steps any competent agent would follow. The YAML output format template and search strategies are generic padding. | 1 / 3 |
| Actionability | Contains some concrete examples like grep commands and MCP tool invocations with JSON payloads, but much of the content is abstract guidance ('Track import statements', 'Identify external package dependencies', 'Review commit messages for context'). The MCP tool examples show specific syntax, which is useful, but the bash examples are illustrative rather than executable in context. | 2 / 3 |
| Workflow Clarity | The numbered sections (Information Gathering, Pattern Analysis, etc.) describe categories of activity rather than a clear sequential workflow. There are no validation checkpoints, no feedback loops for when research hits dead ends, and no clear decision points for when to stop researching and report findings. For a multi-step research process, this lacks meaningful sequencing. | 1 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with no references to external files. All content is inline, including lengthy YAML templates, multiple code blocks, and detailed MCP examples. The content would benefit greatly from splitting the output format template, MCP integration details, and search strategies into separate reference files. | 1 / 3 |
| **Total** | | **5 / 12** — Passed |
## Validation — 100%

*Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.*

**11 / 11 checks passed** — validation for skill structure. No warnings or errors.
Revision `f547cec`