
research

Deep research into technical solutions by searching the web, examining GitHub repos, and gathering evidence. Use when the user explicitly says "use the research skill", "use a research agent", or asks for deep/thorough research into implementation options or technologies.

Quality: 70% — Does it follow best practices?

Impact: 62% (2.00x) — Average score across 3 eval scenarios

Security by Snyk — Advisory: Suggest reviewing before use

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/toolkit/skills/research/SKILL.md

Quality

Discovery — 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-structured skill description that clearly communicates both purpose and activation triggers. The explicit invocation requirements ('use the research skill', 'research agent') reduce conflict risk significantly. The main weakness is that the capability description could be more specific about concrete outputs or deliverables beyond 'gathering evidence'.

Suggestions

Add more specific output descriptions, e.g., 'produces comparison tables, architecture recommendations, and evidence-backed summaries' to strengthen specificity.
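As a sketch of that suggestion, the skill's frontmatter description could name its deliverables explicitly. The wording below is illustrative, not taken from the skill itself, and the exact frontmatter field names are assumed:

```yaml
# Hypothetical SKILL.md frontmatter sketch — field names and phrasing are
# assumptions, not the skill's actual content.
name: research
description: >
  Deep research into technical solutions by searching the web, examining
  GitHub repos, and gathering evidence. Produces comparison tables,
  architecture recommendations, and evidence-backed summaries with links
  to sources. Use when the user explicitly says "use the research skill",
  "use a research agent", or asks for deep/thorough research into
  implementation options or technologies.
```

The added middle sentence is what closes the gap the reviewer flags: concrete nouns ("comparison tables", "evidence-backed summaries") give the agent and the user a shared picture of the output.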

Dimension — Reasoning — Score

Specificity (2 / 3): Names the domain (technical research) and some actions (searching the web, examining GitHub repos, gathering evidence), but the actions are somewhat general and could be more concrete about specific outputs or deliverables.

Completeness (3 / 3): Clearly answers both 'what' (deep research into technical solutions by searching the web, examining GitHub repos, gathering evidence) and 'when' (explicit 'Use when...' clause with specific trigger phrases and scenarios).

Trigger Term Quality (3 / 3): Includes strong natural trigger terms: 'research skill', 'research agent', 'deep research', 'thorough research', 'implementation options', 'technologies'. These are terms users would naturally say when requesting this kind of work.

Distinctiveness / Conflict Risk (3 / 3): The description carves out a clear niche by requiring explicit invocation ('use the research skill', 'use a research agent') and focusing specifically on deep technical research with web/GitHub sources, making it unlikely to conflict with general coding or documentation skills.

Total: 11 / 12 — Passed

Implementation — 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a moderately well-structured research skill that provides a clear process outline and useful output template. However, it leans toward describing what Claude should already know (how to examine repos, what to look for in documentation) rather than adding novel, specific constraints. The workflow lacks validation checkpoints and the actionability could be improved with more concrete, executable examples rather than procedural descriptions.

Suggestions

Remove guidance Claude already knows (e.g., 'Look for: README documentation, Code examples, Architecture patterns') and focus on project-specific constraints like minimum evidence thresholds and output format requirements.

Add an explicit validation checkpoint before presenting findings, e.g., a checklist to verify minimum source count, recency of sources, and evidence quality before generating the final output.

Make the example usage more concrete by showing actual expected output rather than just listing the steps abstractly (e.g., show a filled-in research comparison for the terminal recording example).
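The validation-checkpoint suggestion above could be phrased inside the skill as a short gating checklist. The thresholds and wording below are hypothetical, offered only as one way to make the checkpoint concrete:

```markdown
## Before presenting findings (hypothetical checkpoint)

- [ ] At least 3 independent sources consulted (web search + GitHub repos)
- [ ] Every claim in the summary links to a specific source
- [ ] Source recency checked (prefer activity within the last 12 months)
- [ ] If any check fails, deepen the research or flag the gap explicitly
      rather than presenting incomplete findings
```

Placing this between the evidence-gathering steps and the output template gives the workflow the explicit verify-before-present gate the reviewer notes is missing.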

Dimension — Reasoning — Score

Conciseness (2 / 3): Mostly efficient but includes some unnecessary explanation. Steps like 'Handle Blocked Content' with example prompts and the general description of what to look for in GitHub repos (README documentation, code examples) are things Claude already knows. The output format template is useful but slightly verbose.

Actionability (2 / 3): Provides some concrete commands (git clone, mkdir) and a clear output template, but much of the guidance is procedural description rather than executable. The 'Examine GitHub Repositories' and 'Evidence Requirements' sections describe what to do abstractly rather than giving specific, copy-paste-ready workflows.

Workflow Clarity (2 / 3): Steps are numbered and sequenced clearly (search → examine → handle blocks → store → verify evidence), but there are no validation checkpoints or feedback loops. There is no explicit step to verify research quality before presenting findings, and no mechanism to retry or deepen research if initial results are insufficient beyond a vague 'ask for guidance' prompt.

Progressive Disclosure (2 / 3): Content is reasonably structured with clear sections and headers, but everything is inline in a single file. The output format template and example usage could potentially be split out. For a skill of this length (~80 lines of content), the organization is adequate but not exemplary; the example at the end feels tacked on rather than integrated.

Total: 8 / 12 — Passed

Validation — 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed (validation for skill structure). No warnings or errors.

Repository: dwmkerr/claude-toolkit (Reviewed)
