
deep-research

Generate format-controlled research reports with evidence tracking, citations, source governance, and multi-pass synthesis. This skill should be used when users request a research report, literature review, market or industry analysis, competitive landscape, or policy or technical brief. Triggers: "帮我调研一下", "深度研究", "综述报告", "深入分析", "research this topic", "write a report on", "survey the literature on", "competitive analysis of", "技术选型分析", "竞品研究", "政策分析", "行业报告". V6 adds: source-type governance, AS_OF freshness checks, mandatory counter-review, and a citation registry. V6.1 adds: source accessibility (circular verification forbidden, exclusive advantage encouraged).

Overall: 81

Quality: 77% (Does it follow best practices?)

Impact: Pending (no eval scenarios have been run)

Security (by Snyk): Advisory. Suggest reviewing before use.

Optimize this skill with Tessl

npx tessl skill review --optimize ./deep-research/SKILL.md

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that excels across all dimensions. It provides specific capabilities, comprehensive bilingual trigger terms, explicit 'when to use' guidance covering multiple use cases, and a distinctive niche around structured research report generation with advanced features like source governance and citation tracking. The versioning notes (V6, V6.1) add useful specificity about capabilities without being overly verbose.
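As an illustration of the qualities scored here, a discoverable skill description might look like the following sketch. This is a hypothetical example, not the skill's actual metadata; the frontmatter field names follow the common SKILL.md convention, and the wording is condensed for illustration.

```markdown
---
name: deep-research
description: >
  Generate format-controlled research reports with evidence tracking and
  citations. Use when the user asks for a research report, literature
  review, market analysis, or competitive landscape. Triggers: "research
  this topic", "write a report on", "competitive analysis of".
---
```

Note how the sketch packs specific capabilities, an explicit "use when" clause, and natural trigger phrases into a few lines, which is exactly what the specificity, completeness, and trigger-term dimensions below reward.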

Dimension scores:

Specificity: 3 / 3. Lists multiple specific concrete actions: 'format-controlled research reports', 'evidence tracking', 'citations', 'source governance', 'multi-pass synthesis', 'source-type governance', 'AS_OF freshness checks', 'mandatory counter-review', 'citation registry', and 'circular verification forbidden'.

Completeness: 3 / 3. Clearly answers both 'what' (generate format-controlled research reports with evidence tracking, citations, source governance, multi-pass synthesis) and 'when' (explicit 'use when' clause listing specific request types like research report, literature review, and market analysis, plus explicit trigger phrases).

Trigger Term Quality: 3 / 3. Excellent coverage of natural trigger terms in both English and Chinese, including phrases users would actually say, like 'research this topic', 'write a report on', 'competitive analysis of', '帮我调研一下', '深度研究', '综述报告', plus domain-specific terms like '技术选型分析', '竞品研究', '行业报告'.

Distinctiveness / Conflict Risk: 3 / 3. Highly distinctive, with a clear niche around structured research reports with specific features like source governance, citation registry, and counter-review. The bilingual trigger terms and specific report types (literature review, competitive landscape, policy brief) make it unlikely to conflict with general writing or analysis skills.

Total: 12 / 12. Passed.

Implementation: 54%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a comprehensive research orchestration skill with excellent workflow clarity and progressive disclosure—the phased pipeline with validation checkpoints and well-organized reference files is a strength. However, it suffers significantly from verbosity and redundancy: the enterprise workflow is described twice, source governance rules are over-explained, and the 'Information Black Box' section is unnecessarily detailed. Actionability is moderate—structured but lacking concrete executable examples of tool usage.

Suggestions

Eliminate redundancy by removing the duplicate enterprise workflow description (it appears both in P0 and in the standalone 'Enterprise Research Mode' section). Consolidate into one location and reference it.

Cut the 'Information Black Box' section to 5-6 lines max—the DO/DON'T lists repeat rules already stated in Source Governance. Move detailed examples to a reference file if needed.

Add concrete, executable examples of actual tool invocations (e.g., real web_search calls, subagent dispatch syntax for the target platform) rather than pseudocode like 'SendMessage to: claim-validator'.

Remove explanatory text that Claude can infer (e.g., 'PDF (Portable Document Format)'-style explanations of what source types mean) and trust Claude to apply the classification tables directly.
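To make the third suggestion concrete, here is one hedged sketch of how the pseudocode dispatch could be rewritten as an instruction an agent can execute directly. The tool name, `subagent_type` value, and prompt wording are assumptions about the target platform, not syntax taken from the skill itself.

```markdown
<!-- Before (pseudocode; an agent cannot execute this verbatim): -->
SendMessage to: claim-validator

<!-- After (explicit, copyable instruction; the Task tool and
     subagent_type value here are hypothetical examples): -->
Dispatch a subagent with the Task tool:
  subagent_type: general-purpose
  prompt: "Validate each claim in claims.md against its cited source.
           Return every claim whose citation does not support it."
```

The point of the rewrite is that every parameter the agent needs is spelled out inline, rather than implied by a made-up message-passing verb.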

Dimension scores:

Conciseness: 1 / 3. The skill is extremely verbose at ~500+ lines, with significant redundancy. The enterprise workflow is described in P0, then repeated in a separate 'Enterprise Research Mode' section. E3-E5 are detailed individually, then summarized again as 'E3-E7'. Source governance tables, quality gates, and anti-patterns contain information Claude could infer. The 'Information Black Box' example is overly detailed for what is essentially 'report when no data is found'.

Actionability: 2 / 3. The skill provides structured phases (P0-P7) with specific output formats, registry templates, and status report strings, which is good. However, most guidance is procedural description rather than executable code or commands. The subagent dispatch examples use pseudocode-like 'SendMessage to:' syntax without a concrete implementation. Search query examples and actual tool invocations are absent.

Workflow Clarity: 3 / 3. The multi-step workflow is clearly sequenced (P0→P7), with explicit validation checkpoints at each phase (status reports), quality gates with numeric thresholds, a mandatory counter-review with a minimum issue count, P7 verification with spot-checks, and error-recovery guidance (e.g., 're-examine if 0 found'). The enterprise pipeline also has L1/L2/L3 quality checks at stage transitions.

Progressive Disclosure: 3 / 3. The skill has a clear overview structure with well-organized reference tables pointing to one-level-deep files (subagent_prompt.md, report_template_v6.md, enterprise_analysis_frameworks.md, etc.). References are grouped by context (Core V6, General, Enterprise) with 'When to Load' guidance. The main file serves as a coordinator overview without inlining the detailed reference content.

Total: 9 / 12. Passed.
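The reference-table pattern praised under Progressive Disclosure could look something like this sketch. The file names are taken from the review above; the table layout and the 'When to Load' wording are assumptions about how such a table might be organized, not the skill's actual contents.

```markdown
| Reference | When to Load |
|---|---|
| subagent_prompt.md | When dispatching research subagents |
| report_template_v6.md | When assembling the final report |
| enterprise_analysis_frameworks.md | Enterprise research mode only |
```

Keeping the main SKILL.md as a coordinator that links out like this is what lets an agent load detail one level deep, only when a phase actually needs it.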

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 10 / 11 passed

Validation for skill structure

skill_md_line_count: Warning. SKILL.md is long (545 lines); consider splitting into references/ and linking.

Total: 10 / 11. Passed.
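The line-count warning above can be reproduced locally with a one-liner. This is a minimal sketch, assuming a POSIX shell; the 500-line warning threshold is an assumption (the validator's actual cutoff is not published in this report), and the script simulates a 545-line SKILL.md in a temp file rather than assuming one exists on disk.

```shell
# Simulate the 545-line SKILL.md reported by the validator.
skill=$(mktemp)
seq 1 545 | sed 's/.*/line/' > "$skill"

# Re-run the length check; 500 is an assumed threshold.
lines=$(wc -l < "$skill")
if [ "$lines" -gt 500 ]; then
  echo "Warning: SKILL.md has $lines lines; consider splitting into references/"
fi
```

Swap the temp file for the real `./deep-research/SKILL.md` path to check a local checkout before re-running the `npx tessl skill review` command shown above.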

Repository: daymade/claude-code-skills (reviewed)

