
market-research-reports

Generate comprehensive market research reports (50+ pages) in the style of top consulting firms (McKinsey, BCG, Gartner). Features professional LaTeX formatting, extensive visual generation with scientific-schematics and generate-image, deep integration with research-lookup for data gathering, and multi-framework strategic analysis including Porter Five Forces, PESTLE, SWOT, TAM/SAM/SOM, and BCG Matrix.

Overall score: 71

Quality: 58%
Does it follow best practices?

Impact: 96% (2.00x)
Average score across 3 eval scenarios

Security (by Snyk): Advisory. Review suggested before use.

Optimize this skill with Tessl:

npx tessl skill review --optimize ./scientific-skills/market-research-reports/SKILL.md

Quality

Discovery

82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong, highly specific description that clearly communicates what the skill does with concrete actions, named frameworks, and tool integrations. Its main weakness is the absence of an explicit 'Use when...' clause, which would help Claude know precisely when to select this skill over others. The trigger terms are naturally rich due to the domain-specific vocabulary included.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user requests market research, industry analysis, competitive landscape reports, or consulting-style strategy documents.'
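One possible shape for that clause, sketched as hypothetical SKILL.md frontmatter (the field names follow the common skill-metadata convention; the exact schema for this registry may differ):

```yaml
---
name: market-research-reports
description: >
  Generate comprehensive market research reports (50+ pages) in the style of
  top consulting firms, with professional LaTeX formatting and multi-framework
  strategic analysis. Use when the user requests market research, industry
  analysis, competitive landscape reports, or consulting-style strategy
  documents.
---
```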

Specificity: 3 / 3
Lists multiple specific concrete actions and outputs: generating 50+ page market research reports, LaTeX formatting, visual generation, data gathering via research-lookup, and names five specific strategic frameworks (Porter Five Forces, PESTLE, SWOT, TAM/SAM/SOM, BCG Matrix).

Completeness: 2 / 3
The 'what' is thoroughly covered with specific capabilities and frameworks, but there is no explicit 'Use when...' clause or equivalent trigger guidance telling Claude when to select this skill. Per the rubric, a missing 'Use when...' clause caps completeness at 2.

Trigger Term Quality: 3 / 3
Includes strong natural keywords users would say: 'market research', 'consulting', 'McKinsey', 'BCG', 'Gartner', 'Porter Five Forces', 'PESTLE', 'SWOT', 'TAM/SAM/SOM', 'BCG Matrix', 'strategic analysis', and 'LaTeX'. These cover a wide range of terms a user requesting market research or strategy reports would naturally use.

Distinctiveness / Conflict Risk: 3 / 3
The description carves out a very clear niche: comprehensive consulting-style market research reports with specific frameworks, LaTeX formatting, and named tool integrations. This is highly unlikely to conflict with other skills due to its specificity around market research reports and consulting firm style.

Total: 11 / 12 (Passed)

Implementation

35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is comprehensive in scope but severely over-engineered for a SKILL.md file. The core problem is that it tries to be both a reference manual and an instruction set, resulting in massive verbosity that buries the actionable workflow. The detailed per-chapter breakdowns with 'Content Requirements,' 'Key Data Points,' and 'Required Visuals' tables should be in referenced files, not inline. The actual executable guidance (bash commands, LaTeX snippets, compilation steps) is solid but diluted by extensive explanatory content Claude doesn't need.

Suggestions

Move the detailed per-chapter content requirements (Chapters 1-11) into a referenced file like `references/report_structure_guide.md` and keep only a brief chapter listing in SKILL.md

Remove explanations of well-known frameworks (Porter's Five Forces, PESTLE, SWOT, TAM/SAM/SOM) — Claude knows these; just specify how to apply them in this context

Cut the 'When to Use This Skill' section entirely and trim the Overview to 3-4 lines — the skill name and description already convey this

Add explicit validation/feedback loops within the workflow (e.g., 'After research phase, verify data coverage for all 11 chapters before proceeding to visual generation')
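A validation checkpoint of that kind could be sketched as a small shell step in the workflow. The file name and chapter topics below are illustrative, not part of the skill; the first line fabricates a sample notes file so the check has something to run against:

```shell
# Demo setup: a research-notes file that covers only two of three topics.
printf 'market overview\ncompetitive landscape\n' > research_notes.md

# Checkpoint: flag any chapter topic with no coverage in the research notes
# before proceeding to visual generation.
missing=""
for topic in "executive summary" "market overview" "competitive landscape"; do
  grep -qi "$topic" research_notes.md || missing="$missing [$topic]"
done

if [ -n "$missing" ]; then
  echo "data gaps:$missing"
else
  echo "all topics covered"
fi
```

Run before the visual-generation phase, a non-empty "data gaps" result would send the agent back to the research phase rather than letting gaps surface in the compiled report.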

Conciseness: 1 / 3
Extremely verbose at 500+ lines. Extensively explains concepts Claude already knows (what market research is, what PESTLE stands for, what TAM/SAM/SOM means, what Porter's Five Forces is). The 'When to Use This Skill' section, framework explanations, and writing guidelines are largely unnecessary padding. The report structure section reads like a textbook rather than actionable instructions.

Actionability: 2 / 3
Provides concrete bash commands for visual generation and LaTeX compilation, plus executable LaTeX code examples for formatting. However, much of the content is descriptive rather than instructive (e.g., listing 'Content Requirements' and 'Key Data Points' per section without showing how to actually write them). The research-lookup examples use placeholder [MARKET] without showing complete executable workflows.

Workflow Clarity: 2 / 3
The 5-phase workflow (Research → Analysis → Visuals → Writing → Compilation) is clearly sequenced with numbered steps, and includes a quality review checklist. However, validation checkpoints are weak: the compilation step lacks error handling guidance beyond the troubleshooting section, and there's no feedback loop for research quality or data gap resolution. The 'verify the report meets quality standards' checklist is post-hoc rather than integrated into the workflow.

Progressive Disclosure: 2 / 3
References external files (references/report_structure_guide.md, assets/market_research.sty, scripts/generate_market_visuals.py, etc.), which is good structure, but no bundle files are provided to verify these exist. The SKILL.md itself is monolithic: the detailed per-chapter content requirements (Chapters 1-11 with subsections) should be in a referenced file rather than inline, as they account for roughly half the document's length.

Total: 7 / 12 (Passed)

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 checks passed

Validation for skill structure:

skill_md_line_count (Warning): SKILL.md is long (906 lines); consider splitting into references/ and linking

metadata_version (Warning): 'metadata.version' is missing

Total: 9 / 11 (Passed)
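The line-count warning can be reproduced locally with a quick check. The 500-line threshold and file path are assumptions for illustration; the first line fabricates a stand-in SKILL.md of the reported length so the check has input:

```shell
# Demo setup: a stand-in SKILL.md with the reported 906 lines.
seq 1 906 > SKILL.md

# grep -c '' counts lines and emits a clean number on every platform.
lines=$(grep -c '' SKILL.md)
if [ "$lines" -gt 500 ]; then
  echo "warn: SKILL.md has $lines lines; move detail into references/"
else
  echo "ok: SKILL.md has $lines lines"
fi
```

In the real repository the setup line would be dropped and the check run against the actual SKILL.md before publishing.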

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)
