Pull Intercom tickets and Slack support messages from the past 7 days, classify each signal, enrich with CRM data (ARR, plan, renewal), score by customer value and churn risk, and output a tiered priority report saved to Drive. Use when you need a fast, data-driven view of what support signals matter most.
- Overall score: 79
- Quality: 75% — does it follow best practices?
- Impact: Pending — no eval scenarios have been run
- Risky — do not use without reviewing
Optimize this skill with Tessl: `npx tessl skill review --optimize ./analytics-skills/skills/support-feedback-prioritization/SKILL.md`

## Quality
### Discovery — 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong, well-crafted description that clearly articulates a specific multi-step workflow involving concrete tools (Intercom, Slack, CRM, Drive) and actions (pull, classify, enrich, score, output). It includes an explicit 'Use when' clause and uses domain-specific trigger terms that users would naturally employ. The only minor weakness is that the 'Use when' clause is slightly generic ('support signals matter most') compared to the specificity of the rest of the description, but it still effectively communicates the trigger context.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: pull Intercom tickets and Slack messages, classify signals, enrich with CRM data (ARR, plan, renewal), score by customer value and churn risk, output a tiered priority report saved to Drive. | 3 / 3 |
| Completeness | Clearly answers both 'what' (pull tickets, classify, enrich, score, output report) and 'when' ('Use when you need a fast, data-driven view of what support signals matter most'). The 'Use when' clause is explicit. | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'Intercom tickets', 'Slack support messages', 'CRM data', 'ARR', 'churn risk', 'priority report', 'customer value', 'Drive'. These cover the domain well and match how users would describe this workflow. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive, with a very specific niche combining Intercom, Slack support, CRM enrichment, churn-risk scoring, and Drive output. Unlikely to conflict with other skills given its unique combination of integrations and workflow. | 3 / 3 |
| **Total** | | **12 / 12 — Passed** |
### Implementation — 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a well-structured prompt template for support feedback prioritization with clear steps and useful placeholder variables. However, it lacks executable specificity — there are no concrete MCP tool calls, the scoring model is underspecified, and validation checkpoints are absent for a workflow involving four external integrations. The content reads more like a prompt recipe than a technical skill with actionable, copy-paste-ready guidance.
**Suggestions**

- Define the base scoring formula explicitly (e.g., base_score = ARR_weight + category_weight) so the multipliers in Step 4 are actionable rather than abstract.
- Add concrete MCP tool invocation examples (e.g., specific Intercom MCP query syntax, Slack MCP channel listing commands) so Claude knows exactly which tools to call.
- Add a validation checkpoint after Step 1 (e.g., 'Confirm signal count and date range before proceeding') and after Step 3 (e.g., 'Report how many customers were matched vs unmatched in CRM') to catch integration failures early.
- Link or provide the referenced 'analyze-feedback' skill, and consider extracting the scoring rubric into a separate reference file for cleaner progressive disclosure.
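The first suggestion could be sketched as follows. This is a hypothetical illustration only: the ARR tier cutoffs, category weights, and renewal boost are assumptions, not values defined by the skill; only the ×3 churn-risk multiplier is referenced in its Step 4.

```python
# Hypothetical sketch of an explicit base scoring formula.
# ARR cutoffs, category weights, and the renewal boost are
# illustrative assumptions -- the skill does not define them.
def base_score(arr: float, category: str) -> int:
    arr_weight = 3 if arr >= 100_000 else 2 if arr >= 25_000 else 1
    category_weight = {"bug": 3, "feature-request": 2, "question": 1}.get(category, 1)
    return arr_weight + category_weight

def priority_score(arr: float, category: str, churn_risk: str, renewal_soon: bool) -> int:
    score = base_score(arr, category)
    if churn_risk == "high":
        score *= 3  # the 'multiply score by x3' referenced in Step 4
    if renewal_soon:
        score *= 2  # assumed renewal-window boost
    return score
```

With defaults like these, `priority_score(150_000, "bug", "high", False)` evaluates to 18 (base 3 + 3, then ×3), which makes the multipliers concrete rather than abstract.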
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The introductory paragraph restates what the prompt template already explains ('reads all of it, classifies every signal, enriches each item with ARR'). The Tips section adds useful context, but the overall framing could be tighter, and there is some redundancy between the description and the body. | 2 / 3 |
| Actionability | The skill provides a structured prompt template with clear steps and placeholder variables, but it lacks executable code or concrete MCP tool invocations. The scoring model in Step 4 references 'multiply score by ×3' without defining the base score or formula, making it ambiguous. There are no actual API calls, tool names, or command examples — it is a prompt template rather than executable guidance. | 2 / 3 |
| Workflow Clarity | The five steps are clearly sequenced and logically ordered, and CRM-not-found cases are handled gracefully. However, there are no validation checkpoints — no step to verify data completeness, confirm MCP connections are working, or validate the output before saving to Drive. For a multi-step workflow involving multiple external integrations, this is a notable gap. | 2 / 3 |
| Progressive Disclosure | The content is reasonably organized into sections (Prompt Template, Setup, Placeholders, Tips), but everything sits in a single file with no references to supporting materials. The prompt template itself is quite long and could benefit from being split out, or from having the scoring rubric in a reference file. The mention of the 'analyze-feedback' skill is a nice cross-reference but is not linked. | 2 / 3 |
| **Total** | | **8 / 12 — Passed** |
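The validation checkpoints the Workflow Clarity row calls for could be sketched like this. The field names (`created_at`, `source`, `customer_id`) are assumptions about the shape of the pulled signals, not part of the skill:

```python
from datetime import datetime, timedelta, timezone

def checkpoint_signals(signals, days=7):
    """Summarize pulled signals so the agent can confirm count, sources,
    and date range before classifying (after Step 1). Field names are
    illustrative assumptions."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    in_range = [s for s in signals if s["created_at"] >= cutoff]
    return {
        "total": len(signals),
        "in_date_range": len(in_range),
        "sources": sorted({s["source"] for s in signals}),
    }

def checkpoint_crm_match(signals, crm_records):
    """Report matched vs unmatched customers after CRM enrichment (Step 3)."""
    known = {r["customer_id"] for r in crm_records}
    matched = sum(1 for s in signals if s["customer_id"] in known)
    return {"matched": matched, "unmatched": len(signals) - matched}
```

Surfacing these summaries before moving to the next step catches integration failures (empty pulls, stale date ranges, broken CRM lookups) early instead of in the final report.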
### Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

**Validation for skill structure — 10 / 11 Passed**
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | 10 / 11 Passed | |
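The warning above is typically resolved the way its own message suggests: moving non-standard keys under `metadata`. A hypothetical before/after, where `owner` stands in for whatever unknown key the check flagged:

```yaml
# Before: a non-standard top-level key triggers frontmatter_unknown_keys
name: support-feedback-prioritization
description: Pull Intercom tickets and Slack support messages...
owner: analytics-team   # unknown key (illustrative)

# After: the key lives under metadata
name: support-feedback-prioritization
description: Pull Intercom tickets and Slack support messages...
metadata:
  owner: analytics-team
```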