When the user wants to reduce churn, build expansion revenue, automate customer success, or optimize net revenue retention. Also use when the user mentions 'churn,' 'retention,' 'expansion revenue,' 'upsell,' 'NRR,' 'net revenue retention,' 'customer success,' 'land and expand,' 'closed-lost,' or 'renewal.' This skill covers expansion and retention systems from usage triggers through automated customer success. Do NOT use for technical implementation, code review, or software architecture.
Overall quality: 58% (does it follow best practices?)
Impact: pending; no eval scenarios have been run.
Issues: passed; no known issues.
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./packages/skills-catalog/skills/(gtm)/expansion-retention/SKILL.md`

Quality
Discovery: 89%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description excels at trigger term coverage and completeness, with a strong 'when to use' clause and helpful negative boundaries. Its main weakness is that the 'what it does' portion describes goals and domains rather than concrete actions the skill performs. Adding specific deliverables or outputs would strengthen the specificity dimension.
Suggestions
Add concrete actions describing what the skill produces, e.g., 'Designs retention playbooks, builds expansion revenue models, creates customer health scoring frameworks, and maps upsell trigger workflows.'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names the domain (customer success/retention) and mentions actions like 'reduce churn,' 'build expansion revenue,' 'automate customer success,' and 'optimize net revenue retention,' but these read more like goals than concrete actions. It lacks detail on what the skill actually does (e.g., 'generates playbooks,' 'creates dashboards,' 'designs workflows'). | 2 / 3 |
| Completeness | The description explicitly answers both 'what' (expansion and retention systems from usage triggers through automated customer success) and 'when' (detailed trigger conditions with explicit 'Use when' and 'Do NOT use' clauses). The negative boundary ('Do NOT use for technical implementation, code review, or software architecture') adds further clarity. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would say: 'churn,' 'retention,' 'expansion revenue,' 'upsell,' 'NRR,' 'net revenue retention,' 'customer success,' 'land and expand,' 'closed-lost,' 'renewal.' These are all terms a user would naturally use when seeking help in this domain. | 3 / 3 |
| Distinctiveness / Conflict Risk | The description carves out a clear niche in customer success and revenue retention, with explicit exclusions for technical/code tasks. The trigger terms are domain-specific (NRR, churn, upsell, land and expand) and unlikely to conflict with other skills. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
Implementation: 27%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is comprehensive in coverage but severely over-engineered for a SKILL.md file. It reads more like a complete SaaS playbook or textbook chapter than a concise skill instruction. The massive amount of benchmark data, scoring matrices, and general SaaS knowledge (NRR formulas, health score concepts, onboarding best practices) represents content Claude already knows or could generate, wasting significant token budget. The content would benefit enormously from being restructured as a brief overview with references to detailed sub-files.
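To illustrate the point about general SaaS knowledge: the "NRR formula" the review says Claude already knows is the standard net revenue retention calculation, which an agent can reproduce without the skill spelling it out. A minimal sketch (function name and example figures are illustrative, not from the skill):

```python
def net_revenue_retention(starting_mrr, expansion, contraction, churn):
    """Standard NRR: revenue retained from an existing cohort, plus
    expansion, net of contraction and churn, over the cohort's starting MRR."""
    return (starting_mrr + expansion - contraction - churn) / starting_mrr

# A cohort starting at $100k MRR with $15k expansion, $5k contraction,
# and $8k churned lands at 102% NRR.
nrr = net_revenue_retention(100_000, 15_000, 5_000, 8_000)
print(f"{nrr:.0%}")  # → 102%
```

Baseline definitions like this are exactly the content the review recommends trimming so the skill's token budget goes to its unique thresholds and decision trees.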
Suggestions
Reduce the SKILL.md to a 50-80 line overview covering the key decision framework (NRR improvement tiers) and discovery questions, then move detailed tables (benchmarks, scoring models, trigger matrices) into separate referenced files like HEALTH-SCORING.md, EXPANSION-TRIGGERS.md, CLOSED-LOST.md, etc.
Remove general SaaS knowledge that Claude already knows (what NRR is, why onboarding matters, basic churn concepts) and focus only on the specific frameworks, thresholds, and decision trees that represent unique methodology.
Add validation/verification steps to key workflows: after implementing health scores, backtest against known churned accounts; after setting PQA thresholds, review conversion rates to calibrate; after running save plays, measure success rates against benchmarks.
Replace the benchmark tables with ranges or remove them entirely: specific numbers like '38% faster revenue growth' and '2025-2026 benchmarks' are time-sensitive claims without sources that will become stale and may be inaccurate.
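The backtesting step suggested above can be as simple as checking how well the health score separates accounts that later churned from those that renewed. A hypothetical sketch, assuming per-account records with a `health_score` and a known `churned` outcome (field names and the at-risk threshold are illustrative):

```python
def backtest_health_scores(accounts, at_risk_threshold=50):
    """Compare health scores against known outcomes: a useful score should
    flag most churned accounts as at-risk (high recall) without flagging
    too many accounts that went on to renew (low false-positive rate)."""
    churned = [a for a in accounts if a["churned"]]
    renewed = [a for a in accounts if not a["churned"]]
    flagged_churned = sum(a["health_score"] < at_risk_threshold for a in churned)
    flagged_renewed = sum(a["health_score"] < at_risk_threshold for a in renewed)
    return {
        "recall": flagged_churned / len(churned) if churned else None,
        "false_positive_rate": flagged_renewed / len(renewed) if renewed else None,
    }

accounts = [
    {"health_score": 32, "churned": True},
    {"health_score": 61, "churned": True},
    {"health_score": 78, "churned": False},
    {"health_score": 45, "churned": False},
]
print(backtest_health_scores(accounts))
# recall 0.5, false_positive_rate 0.5: a score this weak needs recalibration
```

A skill that prescribes scoring thresholds but omits a check like this gives the agent no way to know whether the model it built actually predicts churn.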
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | This is an extremely long skill (~500+ lines) with extensive benchmark tables, multiple scoring matrices, and detailed frameworks that Claude could generate on its own. Much of this is general SaaS knowledge (NRR formulas, onboarding best practices, health score concepts) that doesn't need to be spelled out. The sheer volume of tables with specific numbers (many stated as '2025-2026 benchmarks' without sourcing) consumes enormous token budget. | 1 / 3 |
| Actionability | The skill provides concrete frameworks, scoring models, and specific trigger-action tables that are reasonably actionable. However, it's strategic guidance rather than executable code/commands; there are no actual implementation artifacts, templates, or copy-paste-ready outputs. The 'Examples' section shows intent-to-result mappings but doesn't show the actual detailed output Claude should produce. | 2 / 3 |
| Workflow Clarity | The skill has clear sequences for some processes (onboarding timeline, renewal timeline, closed-lost re-engagement phases) but lacks validation checkpoints. There's no explicit verification step: for example, after implementing health scores, there's no 'validate accuracy by backtesting against known churned accounts' step. The NRR improvement decision framework provides good branching logic, but most workflows are lists without feedback loops. | 2 / 3 |
| Progressive Disclosure | This is a monolithic wall of content with 9 major sections all inline, totaling hundreds of lines. The benchmark tables, scoring models, and detailed playbooks should be split into separate reference files. The 'Related Skills' section at the end references other skills, but the body itself has no references to supplementary files; everything is crammed into one document. | 1 / 3 |
| Total | | 6 / 12 (Passed) |
Validation: 100%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
11 / 11 checks passed.
Validation for skill structure
No warnings or errors.