
craft-discovery-synthesis

Take raw user interview notes or feedback and extract themes and insights. Use when synthesizing qualitative data from interviews, surveys, support tickets, or feedback.


Quality: 62% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./product-skills/skills/craft-discovery-synthesis/SKILL.md

Quality

Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a solid skill description that clearly communicates both what the skill does and when to use it, with good natural trigger terms covering multiple input types. Its main weakness is that the 'what' portion stops at 'extract themes and insights' without naming concrete actions or outputs. Overall it performs well for skill selection purposes.

Suggestions

Expand the capability description with more specific actions, e.g., 'extract themes, identify patterns, code responses, and generate insight summaries' to improve specificity.
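Concretely, the expanded description might look like this in the skill's frontmatter (a hypothetical sketch; the `name`/`description` fields follow the common SKILL.md convention, and the added action verbs are taken from the suggestion above):

```yaml
---
name: craft-discovery-synthesis
description: >
  Take raw user interview notes or feedback and extract themes, identify
  patterns, code responses, and generate insight summaries. Use when
  synthesizing qualitative data from interviews, surveys, support tickets,
  or feedback.
---
```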

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (user research/feedback analysis) and some actions ('extract themes and insights'), but doesn't list multiple specific concrete actions. Could be more detailed about what specific outputs or processes are involved (e.g., affinity mapping, coding responses, generating summary reports). | 2 / 3 |
| Completeness | Clearly answers both 'what' (take raw user interview notes or feedback and extract themes and insights) and 'when' (use when synthesizing qualitative data from interviews, surveys, support tickets, or feedback) with explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Good coverage of natural terms users would say: 'interview notes', 'feedback', 'qualitative data', 'interviews', 'surveys', 'support tickets'. These are terms a user would naturally use when needing this skill. | 3 / 3 |
| Distinctiveness / Conflict Risk | Has a clear niche focused on qualitative user research data synthesis. The specific mention of interviews, surveys, support tickets, and feedback extraction creates distinct triggers unlikely to conflict with general data analysis or other skills. | 3 / 3 |
| Total | | 11 / 12 |

Passed

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is essentially a prompt template wrapper with explanatory padding. It over-explains the purpose and context (which Claude already understands), lacks concrete examples of input/output to anchor quality, and provides no validation mechanism for verifying synthesis accuracy. The structured output format (themes, pain points, etc.) is the strongest element but would benefit from a concrete example showing what good output looks like.
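As an illustration of such an anchor, a minimal input/output pair (entirely hypothetical; the section names mirror the template's own structure, and the quotes and IDs are invented) might look like:

```markdown
Input (raw notes):
- "I couldn't find the export button, gave up after 10 minutes" (interview 3)
- "Export to CSV is buried three menus deep" (ticket 812)

Output:
## Key Themes
1. Export discoverability: 2 sources (interview 3, ticket 812)

## Pain Points
- Users abandon export tasks because the control is hard to find (high severity)
```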

Suggestions

Remove the introductory paragraph and 'Tips' section — Claude doesn't need to be told what synthesis is or that messy input is acceptable. This would significantly improve conciseness.

Add a concrete example showing sample raw interview notes as input and the expected synthesized output, so Claude has a quality anchor to work from.

Add a validation step: after generating the synthesis, verify each theme is supported by at least 2 distinct data points, and flag any single-source themes explicitly.

Remove the 'You are an experienced product manager' framing from the prompt template — Claude doesn't need role-play instructions in a skill file.
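The validation step from the third suggestion above could be sketched as a simple check over the synthesized themes (a hypothetical data model; the skill itself defines no schema, so the mapping of theme names to supporting data points is an assumption):

```python
def flag_thin_themes(themes, min_support=2):
    """Partition themes into well-supported and single-source ones.

    `themes` maps a theme name to the list of distinct data points
    (interview quotes, ticket IDs, survey responses) that support it.
    """
    supported, flagged = {}, {}
    for name, data_points in themes.items():
        # Deduplicate so the same quote cited twice counts once.
        distinct = set(data_points)
        if len(distinct) >= min_support:
            supported[name] = sorted(distinct)
        else:
            flagged[name] = sorted(distinct)
    return supported, flagged

themes = {
    "Onboarding friction": ["interview-3", "ticket-812", "survey-41"],
    "Pricing confusion": ["interview-7"],
}
ok, thin = flag_thin_themes(themes)
# "Pricing confusion" ends up in `thin`: only one distinct data point.
```

Flagged themes would then be surfaced explicitly in the synthesis output rather than silently dropped, which keeps single-source observations visible without overstating their weight.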

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The opening paragraph explains what synthesis is and why you'd do it — Claude already knows this. The 'Tips' section explains obvious things like 'messy is fine.' The prompt template itself contains unnecessary framing ('You are an experienced product manager') and verbose instructions that Claude inherently understands. Significant token waste throughout. | 1 / 3 |
| Actionability | The prompt template provides a concrete output structure (Key Themes, Pain Points, etc.) which gives some actionable guidance. However, it's essentially a prompt template rather than executable steps — there's no example input/output showing what a good synthesis looks like, no concrete criteria for ranking severity, and no sample output format to anchor quality. | 2 / 3 |
| Workflow Clarity | The numbered output sections (Key Themes, Pain Points, etc.) provide a clear structure for the synthesis output. However, there's no validation step — no guidance on how to verify the synthesis is accurate, no feedback loop for checking themes against raw data, and no process for handling edge cases like contradictory data or very thin datasets. | 2 / 3 |
| Progressive Disclosure | The content is organized into sections (Prompt Template, Tips) which provides some structure. However, for a skill with no bundle files, the content is somewhat monolithic — the prompt template is long and inline. An example input/output pair could be separated, or the tips could be integrated more efficiently. Not terrible, but not optimally organized. | 2 / 3 |
| Total | | 7 / 12 |

Passed

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 |

Passed

Repository: amplitude/builder-skills (Reviewed)

