Skill description under review: "Agent skill for researcher - invoke with $agent-researcher"
Score: 44
Quality: 13% (Does it follow best practices?)
Impact: 98% (1.88x average score across 3 eval scenarios)
Status: Passed, no known issues
Optimize this skill with Tessl:
`npx tessl skill review --optimize ./.agents/skills/agent-researcher/SKILL.md`

Quality
Discovery
Score: 0%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is critically deficient across all dimensions. It provides no information about what the skill does, when it should be used, or what distinguishes it from other skills. It reads more like a label than a functional description that Claude could use for skill selection.
Suggestions:
- Add specific, concrete actions the skill performs, e.g. 'Searches the web for information, summarizes academic papers, compiles research findings into structured reports.'
- Add an explicit 'Use when...' clause with natural trigger terms, e.g. 'Use when the user asks to research a topic, find sources, look up information, or compile findings.'
- Narrow the scope to a specific research domain or method to reduce conflict risk with other skills, e.g. 'academic research', 'market research', or 'web research'.
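Applying the suggestions above, the description might look like this. This is a hypothetical rewrite, not the skill's actual frontmatter, and it assumes the common `name`/`description` SKILL.md convention:

```yaml
---
name: agent-researcher
description: >-
  Searches the web and the local codebase, summarizes sources, and compiles
  findings into structured research reports. Use when the user asks to
  research a topic, find sources, look up information, or compile findings.
---
```

A description in this shape answers both "what does this do" and "when should Claude use it" in a single sentence each.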
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description contains no concrete actions whatsoever. 'Agent skill for researcher' is entirely vague and does not describe what the skill actually does. | 1 / 3 |
| Completeness | Neither 'what does this do' nor 'when should Claude use it' is answered. There is no 'Use when...' clause and no description of capabilities. | 1 / 3 |
| Trigger Term Quality | The only potentially relevant keyword is 'researcher', which is overly generic. There are no natural terms a user would say when needing this skill, aside from the invocation command '$agent-researcher'. | 1 / 3 |
| Distinctiveness / Conflict Risk | 'Researcher' is extremely broad and could conflict with any skill involving research, data gathering, analysis, or information retrieval. There are no distinct triggers to differentiate it. | 1 / 3 |
| Total | | 4 / 12 (Passed) |
Implementation
Score: 27%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is significantly over-engineered and verbose for what it accomplishes. It explains many concepts Claude already understands (grep usage, what dependency analysis means, generic best practices like 'be thorough' and 'think critically'), consuming substantial token budget without proportional value. The MCP tool integration examples provide some concrete guidance, but the overall structure lacks validation checkpoints and would benefit greatly from being condensed and split into focused reference files.
Suggestions:
- Cut the content by at least 60%: remove generic advice ('Be Thorough', 'Think Critically'), obvious search patterns, and conceptual explanations Claude already knows. Focus only on project-specific conventions and MCP tool syntax.
- Add validation/verification steps to the research workflow, e.g. cross-check findings, verify dependency versions actually match package.json, confirm patterns exist in multiple locations before reporting.
- Split MCP tool integration and output format templates into separate reference files, keeping SKILL.md as a concise overview with links.
- Replace the descriptive bullet lists ('Track import statements', 'Identify external package dependencies') with concrete, executable command sequences or tool invocations.
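For instance, the 'Track import statements' bullet could become an executable sequence. This is a sketch only; the `src/` path, the `*.ts` glob, and the import pattern are illustrative assumptions about the project layout:

```shell
# Hypothetical replacement for "Track import statements":
# list every external (non-relative) package imported under src/, deduplicated.
grep -rhoE "from ['\"][^./][^'\"]*['\"]" src/ --include='*.ts' \
  | sed -E "s/from ['\"]([^'\"]+)['\"]/\1/" \
  | sort -u
```

A command the agent can paste and run is both shorter and more reliable than a prose bullet describing the same intent.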
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with extensive content Claude already knows (how to grep, what pattern analysis is, what dependencies are). The research methodology, best practices, and collaboration guidelines are largely generic advice that doesn't add actionable value. The YAML output format template and multiple bash/JS examples are padded with obvious patterns. | 1 / 3 |
| Actionability | Contains some concrete code examples (grep commands, MCP tool calls with JSON), but much of the content is descriptive rather than instructive ('Track import statements', 'Identify external package dependencies'). The MCP tool integration examples are somewhat concrete but use pseudocode-like JavaScript syntax rather than clearly executable commands. | 2 / 3 |
| Workflow Clarity | There is a numbered research methodology (Information Gathering → Pattern Analysis → Dependency Analysis → Documentation Mining) and a 'Broad to Narrow' search strategy, but there are no validation checkpoints, no error recovery steps, and no clear feedback loops. For a research agent that could produce incorrect findings, verification steps are notably absent. | 2 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with no references to external files. All content is inline, including detailed YAML output templates, multiple code blocks, MCP integration examples, and collaboration guidelines. Much of this could be split into separate reference files (e.g. an MCP integration guide and an output format reference). | 1 / 3 |
| Total | | 6 / 12 (Passed) |
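The verification checkpoint flagged as missing in the Workflow Clarity row can be very small. A sketch, assuming a Node-style project; the package name is illustrative:

```shell
# Hypothetical verification checkpoint: before reporting that the project
# depends on lodash, confirm the claim against package.json itself.
claimed="lodash"
grep -q "\"$claimed\"" package.json \
  && echo "verified: $claimed is declared in package.json" \
  || echo "NOT FOUND: drop or re-check the $claimed finding"
```

One such check per reported finding turns the workflow from "gather and report" into "gather, verify, report", which is what the review is asking for.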
Validation
Score: 100%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure: 11 / 11 checks passed. No warnings or errors.
Revision: ccb062f
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.