
spec-tech-research

Advanced technical research command with extended thinking modes and MCP integration for comprehensive analysis


Quality: 11%. Does it follow best practices?

Impact: Pending. No eval scenarios have been run.

Security by Snyk: Passed. No known issues.

Optimize this skill with Tessl:

npx tessl skill review --optimize ./.claude/skills/spec-tech-research/SKILL.md

Quality

Discovery

0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is almost entirely composed of vague buzzwords and technical jargon without specifying concrete actions, use cases, or trigger conditions. It fails to communicate what the skill actually does or when Claude should select it, making it indistinguishable from many other potential skills.

Suggestions

- Replace abstract terms like 'comprehensive analysis' and 'MCP integration' with specific, concrete actions the skill performs (e.g., 'Searches academic papers, synthesizes findings across sources, generates literature reviews').

- Add an explicit 'Use when...' clause with natural trigger terms a user would actually say (e.g., 'Use when the user asks for in-depth research, literature review, technical deep-dive, or multi-source analysis').

- Define the specific domain or niche this skill covers to distinguish it from general research or analysis skills (e.g., specify what kind of technical research is performed and which MCP tools are leveraged).
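Putting these suggestions together, a sketch of rewritten frontmatter might look like the following (the wording and trigger phrases are illustrative, not drawn from the actual skill):

```yaml
---
name: spec-tech-research
description: >
  Researches a technical topic in depth: queries documentation and
  web sources via configured MCP tools, cross-checks findings across
  sources, and produces a cited, report-style summary. Use when the
  user asks for in-depth research, a literature review, a technical
  deep-dive, or multi-source analysis of a technology or specification.
---
```

A description in this shape gives the agent both the 'what' (concrete actions) and the 'when' (explicit trigger phrases) that the scoring below penalizes the current version for lacking.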

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description uses vague, buzzword-heavy language like 'advanced technical research', 'extended thinking modes', 'MCP integration', and 'comprehensive analysis' without listing any concrete actions the skill performs. | 1 / 3 |
| Completeness | The description vaguely hints at 'what' (technical research and analysis) but provides no explicit 'when' clause or trigger guidance. Both dimensions are very weak. | 1 / 3 |
| Trigger Term Quality | The terms used ('extended thinking modes', 'MCP integration', 'comprehensive analysis') are technical jargon that users would not naturally say. 'Research' is somewhat natural but too generic to be a useful trigger. | 1 / 3 |
| Distinctiveness / Conflict Risk | Terms like 'research', 'analysis', and 'comprehensive' are extremely generic and would overlap with virtually any analytical or research-oriented skill. Nothing carves out a clear niche. | 1 / 3 |
| Total | | 4 / 12 |


Implementation

22%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads more like a CLI reference card for a hypothetical command than actionable instructions for Claude. It lacks implementation details showing how the research process actually works, has no workflow sequencing or validation steps, and the 'thinking modes' with token budgets are presented without explaining the mechanism. The extensive option tables and examples give an appearance of completeness but don't translate into executable guidance.

Suggestions

- Add a concrete step-by-step workflow describing what Claude should actually do when this command is invoked (e.g., 1. Parse topic, 2. Select MCP tools based on options, 3. Execute research queries, 4. Synthesize findings, 5. Validate confidence scores, 6. Format output).

- Provide executable examples of how MCP tools are actually called during research; show the tool invocation patterns rather than just listing tool names.

- Remove or drastically shorten the output structure template, since Claude already knows how to format markdown reports; replace it with formatting rules unique to this skill.

- Add validation checkpoints (e.g., verify source reliability before including citations, cross-check findings across multiple MCP sources) to make the research process robust.
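As a minimal sketch, a workflow section added to the SKILL.md along these lines would address the sequencing and validation gaps (step wording and tool names are illustrative placeholders, not the skill's actual MCP tools):

```markdown
## Workflow

1. Parse the research topic and any flags from the invocation.
2. Select MCP tools for the requested depth (e.g., a docs-search
   tool for API questions, a web-search tool for broader context).
3. Execute research queries and collect candidate sources.
4. Validate: cross-check each finding against at least two
   independent sources before citing it; drop unverifiable claims.
5. Synthesize findings and format the report, citing every source.
```

Even a short sequence like this turns the option tables into executable guidance, because each step tells the agent what to do next and when a result is trustworthy enough to include.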

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The content is reasonably organized with tables, but includes some unnecessary elements, such as explaining what each thinking mode's 'depth' label means (Standard/Enhanced/Structured/Maximum are vague labels that don't add value). The output structure template is generic boilerplate Claude already knows how to produce. Some tables could be tighter. | 2 / 3 |
| Actionability | Despite having many CLI examples, this skill describes a custom slash command (/spec:tech-research) without providing any implementation details: no code showing how the command is parsed, how thinking modes are activated, how MCP tools are invoked, or how the research workflow actually executes. The examples are invocations of a command that doesn't exist natively, with no executable backing. | 1 / 3 |
| Workflow Clarity | There is no clear multi-step workflow describing what happens when the command is invoked. The skill lists options and templates but never sequences the research process (e.g., gather sources → analyze → validate findings → format output). There are no validation checkpoints or feedback loops for what is described as 'comprehensive analysis.' | 1 / 3 |
| Progressive Disclosure | The content is structured with clear sections and tables, which aids scanning. However, everything is in one file with no references to deeper documentation, and some sections (like the full output structure template and extensive option tables) could be split out. The integration section at the end hints at related commands but doesn't link to them. | 2 / 3 |
| Total | | 6 / 12 |


Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: sc30gsw/claude-code-customes (reviewed)
