Advanced technical research command with extended thinking modes and MCP integration for comprehensive analysis
Install with Tessl CLI
npx tessl i github:sc30gsw/claude-code-customes --skill spec-tech-research45
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery
0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is severely lacking across all dimensions. It relies entirely on vague buzzwords ('advanced', 'comprehensive', 'extended thinking modes') without specifying what the skill actually does or when it should be used. The technical jargon would not match natural user queries, making skill selection unreliable.
Suggestions
Replace abstract terms with concrete actions (e.g., 'Searches academic papers, synthesizes findings, generates citations' instead of 'comprehensive analysis')
Add an explicit 'Use when...' clause with natural trigger terms users would say (e.g., 'Use when the user asks for deep research, literature review, or technical investigation')
Specify what 'MCP integration' actually enables in user-facing terms, or remove the jargon entirely
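Putting these suggestions together, a rewritten description might look like the following sketch (the frontmatter field names and all wording are illustrative, not the skill's actual metadata):

```markdown
---
name: spec-tech-research
description: >
  Searches technical documentation and codebases, synthesizes findings into a
  cited research report, and attaches a confidence score to each finding.
  Use when the user asks for deep research, a literature review, a technology
  comparison, or a technical investigation of a library or codebase.
---
```

Note how every capability is a concrete verb phrase and the "Use when..." clause lists terms a user would actually type.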
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague, abstract language like 'advanced technical research' and 'comprehensive analysis' without listing any concrete actions. No specific capabilities are enumerated. | 1 / 3 |
| Completeness | Missing both clear 'what' (no specific actions listed) and 'when' (no 'Use when...' clause or explicit trigger guidance). The description is entirely abstract. | 1 / 3 |
| Trigger Term Quality | Contains technical jargon ('extended thinking modes', 'MCP integration') that users would not naturally say. Missing natural trigger terms like 'research', 'analyze', or specific domains. | 1 / 3 |
| Distinctiveness / Conflict Risk | 'Technical research' and 'comprehensive analysis' are extremely generic and could conflict with virtually any research, analysis, or documentation skill. | 1 / 3 |
| **Total** | | 4 / 12 Passed |
Implementation
52%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill excels at concise documentation of options and modes using efficient table formats. However, it functions more as a reference card than an actionable skill: it tells Claude what options exist but not how to actually perform technical research, in what order to use tools, or how to validate findings. The gap between 'here are your options' and 'here's how to conduct research' is significant.
Suggestions
Add a workflow section explaining the research process: e.g., 1) Clarify scope, 2) Use Serena for codebase context, 3) Use Context7 for library docs, 4) Synthesize findings, 5) Validate confidence scores
Define what Claude should actually do when the command is invoked - the current content describes inputs/outputs but not the research methodology
Add validation checkpoints: how to verify research quality, when to increase depth, how to handle low-confidence findings
Include a concrete example showing the full research output for one topic, not just the invocation syntax
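A workflow section addressing these gaps could be sketched as follows; the step names are illustrative, and the tool roles are assumed from the Serena and Context7 integrations the skill already mentions:

```markdown
## Workflow

1. **Clarify scope**: restate the research question and confirm the requested
   depth with the user before invoking any tools.
2. **Gather codebase context**: use Serena to locate the relevant symbols,
   files, and usage sites.
3. **Gather library context**: use Context7 to pull current documentation for
   each dependency involved.
4. **Synthesize**: merge the findings into a report, attaching a confidence
   score to each claim.
5. **Validate**: for any finding below the confidence threshold, increase the
   research depth and re-check before including it in the final report.
```

A section like this turns the existing option tables into an executable procedure and gives Claude explicit checkpoints for the low-confidence case.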
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is lean and efficient, using tables for options and modes rather than verbose explanations. It assumes Claude understands CLI conventions, MCP tools, and technical concepts without over-explaining. | 3 / 3 |
| Actionability | Provides concrete CLI examples and option tables, but the actual implementation of what happens when the command runs is unclear. The examples show invocation syntax but not what Claude should actually do to perform the research or generate outputs. | 2 / 3 |
| Workflow Clarity | No workflow is defined for how to actually conduct the research. The skill lists tools and options but doesn't explain the sequence of steps, when to use which tools, or how to validate research quality. For a complex multi-tool research task, this is a significant gap. | 1 / 3 |
| Progressive Disclosure | Content is well-organized with clear sections and tables, but everything is in one file. The 'Integration' section hints at related commands but doesn't link to documentation. For a comprehensive research skill, separating templates or detailed MCP usage into referenced files would improve navigation. | 2 / 3 |
| **Total** | | 8 / 12 Passed |
Validation
100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.