
review-mining

When the user wants to research customer pain points, complaints, or sentiment using review platforms like Trustpilot, G2, Capterra, or app stores. Also use when the user mentions "what are users saying", "competitor reviews", "pain points", or "voice of customer research".


Quality: 67% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Advisory. Suggest reviewing before use.

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/review-mining/SKILL.md

Quality

Discovery

72%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description excels at trigger term coverage and distinctiveness, providing excellent guidance on when Claude should select this skill. However, it is notably weak on specifying what concrete actions the skill performs — it reads more like a trigger list than a capability description. Adding explicit actions (e.g., 'Scrapes and analyzes customer reviews, identifies recurring themes, summarizes sentiment') would significantly improve it.

Suggestions

Add concrete action verbs describing what the skill does, e.g., 'Scrapes customer reviews, analyzes sentiment patterns, identifies recurring pain points, and summarizes findings from review platforms.'

Restructure to lead with a clear 'what it does' statement before the 'Use when...' clause, following the pattern: capabilities first, then trigger guidance.
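Combining both suggestions, a hedged sketch of what the restructured description might look like in the skill's frontmatter (the exact wording is illustrative, not the author's):

```yaml
# Hypothetical SKILL.md frontmatter: capabilities first, then trigger guidance
name: review-mining
description: >
  Scrapes and analyzes customer reviews from platforms like Trustpilot, G2,
  Capterra, and app stores; identifies recurring pain points and themes; and
  summarizes sentiment. Use when the user mentions "what are users saying",
  "competitor reviews", "pain points", or "voice of customer research".
```

This keeps the strong trigger terms intact while leading with concrete action verbs, which addresses the Specificity and Completeness findings below.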

Dimension / Reasoning / Score

Specificity

The description names the domain (customer review research) and mentions platforms (Trustpilot, G2, Capterra, app stores), but doesn't list specific concrete actions like 'scrape reviews', 'analyze sentiment', 'generate summary reports', or 'identify recurring themes'. The 'what it does' is implied rather than explicitly stated with concrete actions.

2 / 3

Completeness

The 'when' is very well covered with explicit trigger guidance and natural phrases. However, the 'what does this do' part is weak — it says 'research customer pain points, complaints, or sentiment' but doesn't describe the concrete actions or outputs the skill produces. The description is essentially all 'when' with minimal 'what'.

2 / 3

Trigger Term Quality

Excellent coverage of natural trigger terms: 'customer pain points', 'complaints', 'sentiment', 'Trustpilot', 'G2', 'Capterra', 'app stores', 'what are users saying', 'competitor reviews', 'pain points', 'voice of customer research'. These are terms users would naturally use when requesting this type of research.

3 / 3

Distinctiveness & Conflict Risk

The description carves out a clear niche around customer review research on specific platforms. The combination of named review platforms and specific use cases like 'voice of customer research' and 'competitor reviews' makes it highly distinctive and unlikely to conflict with other skills.

3 / 3

Total: 10 / 12 (Passed)

Implementation

62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured instruction-only skill with a clear workflow and excellent output template. Its main weaknesses are the lack of concrete guidance on how to actually collect reviews (no tools, APIs, or specific methods mentioned) and the somewhat verbose inline presentation that could benefit from splitting reference material into separate files. The frameworks and best practices sections add genuine value but contribute to overall length.

Suggestions

Add specific guidance on how to actually collect reviews — mention whether Claude should use web search, specific URLs/API patterns, or ask the user to paste reviews, since 'gather 50-100 reviews per competitor' is vague without tooling context.

Consider splitting the output format template and the frameworks/best practices tables into separate reference files (e.g., REVIEW_TEMPLATE.md, PLATFORMS.md) and linking to them from the main skill.

Trim the 'Common mistakes' section — items like 'Mining once and never returning' are advice for the founder, not actionable instructions for Claude performing the task.
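As a sketch of the suggested split (the file names REVIEW_TEMPLATE.md and PLATFORMS.md come from the suggestion above; the link syntax assumes an ordinary Markdown skill file, not a confirmed Tessl convention):

```markdown
<!-- skills/review-mining/SKILL.md: main workflow stays inline -->
## Output format
See [REVIEW_TEMPLATE.md](REVIEW_TEMPLATE.md) for the full report template.

## Platform notes
See [PLATFORMS.md](PLATFORMS.md) for per-platform collection tips
(Trustpilot, G2, Capterra, app stores).
```

Splitting this way lets an agent load the heavy reference material only when it reaches the relevant step, which is the progressive-disclosure improvement scored below.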

Dimension / Reasoning / Score

Conciseness

The skill is reasonably well-structured but includes some unnecessary elaboration. The 'Common mistakes' section and some of the framework tables add useful value, but sections like 'Context Required' and the detailed examples at the end could be tightened. The 'Related Skills' section is fine but the overall document is longer than it needs to be for an instruction-only skill.

2 / 3

Actionability

The workflow steps are clear and the output format template is concrete and copy-paste ready. However, this is a research/analysis skill with no executable code or specific tool commands — the guidance on how to actually 'collect reviews' (step 2) is vague. There's no mention of specific tools, APIs, or scraping methods to gather the 50-100 reviews per competitor. The actionability relies heavily on Claude already knowing how to access these platforms.

2 / 3

Workflow Clarity

The 7-step workflow is clearly sequenced and logically ordered from scoping through collection, analysis, and output generation. Each step has specific sub-criteria (e.g., severity levels, frequency tracking). While there aren't explicit validation checkpoints, this is a research/synthesis task rather than a destructive operation, so the sequential flow with clear deliverables at each stage is appropriate.

3 / 3

Progressive Disclosure

The content is well-organized with clear sections and headers, and references related skills at the end. However, the document is quite long (~120 lines of content) and could benefit from splitting the frameworks/best practices and the detailed output template into separate reference files. Everything is inline in a single document when some content could be progressively disclosed.

2 / 3

Total: 9 / 12 (Passed)

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning
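Per the warning's own advice, unrecognized keys can be removed or nested under metadata. A hedged before/after sketch (the author key is hypothetical, used only to illustrate an unknown key):

```yaml
# Before: an unrecognized top-level key triggers frontmatter_unknown_keys
name: review-mining
description: ...
author: shawnpang   # unknown key

# After: the unrecognized key moved under metadata
name: review-mining
description: ...
metadata:
  author: shawnpang
```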

Total: 10 / 11 (Passed)

Repository: shawnpang/startup-founder-skills (Reviewed)

