
feedback-synthesizer

Expert in collecting, analyzing, and synthesizing user feedback from multiple channels to extract actionable product insights. Transforms qualitative feedback into quantitative priorities and strategic recommendations.


Quality: 16%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run.

Security (by Snyk): Advisory
Suggest reviewing before use.

Optimize this skill with Tessl

npx tessl skill review --optimize ./product-feedback-synthesizer/skills/SKILL.md

Quality

Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description establishes a clear domain (user feedback analysis for product insights) but relies on abstract, consultant-style language rather than concrete actions. It lacks explicit trigger guidance ('Use when...') which is critical for skill selection, and misses common natural language variations users would employ when requesting this type of work.

Suggestions

Add an explicit 'Use when...' clause with trigger scenarios, e.g., 'Use when the user asks to analyze customer feedback, survey responses, NPS scores, feature requests, support tickets, or app reviews.'

Replace abstract phrases like 'synthesizing user feedback' with concrete actions such as 'categorize feedback by theme, score sentiment, rank feature requests by frequency, generate priority matrices.'

Include common natural language variations users might use: 'customer feedback', 'survey results', 'feature requests', 'reviews', 'NPS', 'CSAT', 'support tickets', 'Voice of Customer'.
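Taken together, the suggestions above point toward a description like the following sketch. This assumes a standard SKILL.md YAML frontmatter; the exact wording is illustrative, not taken from the skill:

```markdown
---
name: feedback-synthesizer
description: >
  Categorize user feedback by theme, score sentiment, rank feature requests
  by frequency, and generate priority matrices from raw feedback data.
  Use when the user asks to analyze customer feedback, survey results,
  NPS or CSAT scores, feature requests, support tickets, app reviews,
  or Voice of Customer data.
---
```

Note how the concrete verbs (categorize, score, rank, generate) and the explicit "Use when..." clause directly address the Specificity, Completeness, and Trigger Term Quality findings above.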

Dimension | Reasoning | Score

Specificity

Names the domain (user feedback analysis) and some actions (collecting, analyzing, synthesizing, transforming qualitative to quantitative), but these are fairly high-level and not as concrete as listing specific discrete actions like 'categorize NPS responses, tag sentiment, generate priority matrices.'

2 / 3

Completeness

Describes what the skill does but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and the 'what' portion is also somewhat vague, warranting a score of 1.

1 / 3

Trigger Term Quality

Includes some relevant terms like 'user feedback', 'product insights', and 'qualitative feedback', but misses many natural variations users might say such as 'customer feedback', 'survey results', 'feature requests', 'NPS', 'reviews', 'support tickets', or 'feedback analysis'.

2 / 3

Distinctiveness / Conflict Risk

The focus on user feedback and product insights provides some specificity, but terms like 'analyzing', 'synthesizing', and 'strategic recommendations' are broad enough to overlap with general data analysis or product strategy skills.

2 / 3

Total: 7 / 12 (Passed)

Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads as a capability description or role specification rather than actionable instructions. It contains no concrete examples, templates, code, or executable guidance—just extensive lists of abstract capabilities and metrics. The content would need a fundamental restructuring to be useful as a skill, replacing descriptive bullet points with concrete workflows, output templates, and examples.

Suggestions

Replace abstract capability lists with concrete, executable workflows—e.g., provide a specific template for how to structure a feedback synthesis report with example input and output.

Add at least one complete worked example showing how to take raw feedback data and produce a prioritized insight report, including the actual output format.

Remove the 'Role Definition', 'Core Capabilities', 'Specialized Skills', 'Decision Framework', and 'Success Metrics' sections entirely—these describe what the agent is rather than instructing it on what to do.

Add validation checkpoints to the Processing Pipeline, such as specific criteria for when categorization is complete or how to verify sentiment analysis accuracy before proceeding to synthesis.
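As a sketch of the kind of executable guidance these suggestions call for, the skill body could replace its capability lists with something like the following. The step names, checkpoints, and report format are hypothetical examples, not drawn from the skill itself:

```markdown
## Workflow

1. Load raw feedback (CSV export, survey dump, or pasted text).
   Checkpoint: confirm every item has a source and a date before continuing.
2. Tag each item with exactly one theme and a sentiment
   (positive / neutral / negative).
   Checkpoint: re-read any item that seems to fit two themes and pick one.
3. Count items per theme; rank themes by frequency, then by average sentiment.
4. Fill in the report template below and present it to the user.

## Report template

| Theme     | Mentions | Sentiment | Example quote        | Priority |
|-----------|----------|-----------|----------------------|----------|
| Slow sync | 14       | negative  | "Sync takes minutes" | High     |
```

A template like this gives the agent an actual output format and validation checkpoints, which is what the Actionability and Workflow Clarity dimensions found missing.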

Dimension | Reasoning | Score

Conciseness

Extremely verbose with extensive bullet-point lists that describe capabilities Claude already possesses. The content reads like a job description or marketing document rather than actionable instructions. Sections like 'Core Capabilities', 'Specialized Skills', 'Success Metrics', and 'Decision Framework' are padded with unnecessary context that doesn't teach Claude anything new.

1 / 3

Actionability

The entire skill is abstract description with zero concrete code, commands, templates, or executable examples. There are no actual output formats, no sample analyses, no specific prompts or workflows to follow. Phrases like 'Automated collection from multiple sources with API integration' describe rather than instruct.

1 / 3

Workflow Clarity

While there is a numbered 'Processing Pipeline' (5 steps), the steps are vague descriptions without any validation checkpoints, concrete actions, or error recovery. 'Data Ingestion: Automated collection from multiple sources with API integration' gives no actionable guidance on what to actually do. No feedback loops or verification steps are present.

1 / 3

Progressive Disclosure

The content is a monolithic wall of bullet points with no references to external files and no clear navigation structure. All content is inline regardless of depth or relevance. There's no quick-start section or hierarchy that would help Claude find the right information quickly.

1 / 3

Total: 4 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria | Description | Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning
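This warning typically means the frontmatter contains top-level keys outside the spec. A before/after sketch of the fix, with an invented key name for illustration:

```yaml
# Before: unrecognized top-level key triggers the warning
name: feedback-synthesizer
owner: product-team        # not a recognized frontmatter key

# After: unrecognized keys moved under metadata
name: feedback-synthesizer
metadata:
  owner: product-team
```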

Total: 10 / 11 (Passed)

Repository: OpenRoster-ai/awesome-openroster (Reviewed)

