
diagnose-retention

Diagnose where and why users churn, identify natural usage frequency, build cohort retention curves, and find the behaviors that drive long-term retention. Use when a PM needs to understand churn, improve retention curves, or identify what makes users stick.

Overall: 78

Quality: 73% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./growth-skills/skills/diagnose-retention/SKILL.md

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly articulates specific capabilities in the user retention/churn analysis domain. It follows the recommended pattern with concrete actions followed by an explicit 'Use when...' clause with natural trigger terms. The description uses proper third-person voice and is concise without being vague.

Dimension | Reasoning | Score

Specificity

Lists multiple specific concrete actions: 'Diagnose where and why users churn', 'identify natural usage frequency', 'build cohort retention curves', and 'find the behaviors that drive long-term retention'. These are clear, actionable capabilities.

3 / 3

Completeness

Clearly answers both what ('Diagnose where and why users churn, identify natural usage frequency, build cohort retention curves, find behaviors that drive long-term retention') and when ('Use when a PM needs to understand churn, improve retention curves, or identify what makes users stick').

3 / 3

Trigger Term Quality

Includes strong natural keywords a PM would use: 'churn', 'retention curves', 'cohort', 'users stick', 'usage frequency'. These are terms product managers naturally use when discussing retention problems.

3 / 3

Distinctiveness / Conflict Risk

Occupies a clear niche around user retention and churn analysis. The specific domain terms like 'cohort retention curves', 'churn', and 'usage frequency' make it unlikely to conflict with general analytics or other product skills.

3 / 3

Total: 12 / 12

Passed

Implementation: 47%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides a thorough, well-structured retention diagnosis framework with a clear step-by-step workflow and good cross-references to related skills. However, it is significantly overlong: it explains many PM and retention concepts that Claude already knows, and it lacks concrete executable artifacts (queries, code, API calls) that would make it truly actionable rather than a detailed prompt template.

Suggestions

Cut the introductory paragraph explaining why retention matters and the extensive concept explanations (habit loops, benchmark definitions) — Claude knows these. Focus on the specific analytical steps and decision criteria.

Add concrete executable examples: SQL queries for cohort retention curves, Amplitude API calls for segmentation, or Python snippets for calculating retention-predictive behavior correlations.

Move the benchmark data table and anti-plays section into separate reference files to reduce the main skill's token footprint and improve progressive disclosure.

Tighten the prompt template by removing redundant explanations within each step — e.g., Step 3's explanation of what a false friend is can be reduced to just the example.
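The second suggestion can be made concrete with a short sketch. A minimal pandas version of a weekly cohort retention curve, assuming a hypothetical `events` table with `user_id` and `event_date` columns (the column names and weekly granularity are illustrative assumptions, not taken from the skill):

```python
import pandas as pd

def cohort_retention(events: pd.DataFrame) -> pd.DataFrame:
    """Rows are weekly cohorts; columns are weeks since a user's first event."""
    df = events.copy()
    df["event_week"] = pd.to_datetime(df["event_date"]).dt.to_period("W")
    # A user's cohort is the week of their first observed event.
    df["cohort_week"] = df.groupby("user_id")["event_week"].transform("min")
    # Period subtraction yields week offsets; .n extracts the integer count.
    df["weeks_since"] = (df["event_week"] - df["cohort_week"]).apply(lambda d: d.n)
    active = df.groupby(["cohort_week", "weeks_since"])["user_id"].nunique()
    cohort_size = df.groupby("cohort_week")["user_id"].nunique()
    # Share of each cohort still active N weeks after their first event.
    return active.unstack(fill_value=0).div(cohort_size, axis=0)
```

An Amplitude export or a warehouse query result in this shape would plug straight in; the same grouping logic translates directly to SQL.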

Dimension | Reasoning | Score

Conciseness

The skill is extremely verbose at ~200+ lines. It explains retention concepts Claude already understands (what retention is, why it matters, what a habit loop is, what benchmarks look like). The opening paragraph about retention being 'the single most important growth lever' and the extensive anti-plays section explain PM concepts Claude knows well. Much of this could be cut by 50%+ without losing actionability.

1 / 3

Actionability

The skill provides a structured analytical framework with specific steps and good examples of what to look for (e.g., 'Users who did X in their first 14 days have D60 retention of 72% vs. 31%'). However, it contains no executable code, no concrete SQL queries, no Amplitude API calls, and no actual data manipulation examples — it's a detailed prompt template with analytical guidance but lacks copy-paste-ready artifacts.

2 / 3

Workflow Clarity

The 5-step workflow is clearly sequenced and logically ordered: establish baseline → segment → identify predictive behaviors → diagnose churn moments → build plan. Each step has explicit sub-steps, and the output format section serves as a validation checklist. The workflow includes feedback loops (e.g., if early churn dominates, redirect to diagnose-activation).

3 / 3
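The feedback loop noted here (redirect to diagnose-activation when early churn dominates) could be expressed as a tiny routing rule. The 50% threshold is an assumption for the sketch, not a value from the skill:

```python
def next_skill(early_churn_share: float) -> str:
    """Route the analysis: heavy first-days churn is an activation problem."""
    if early_churn_share > 0.5:  # assumed threshold, not from the skill
        return "diagnose-activation"  # early churn dominates
    return "diagnose-retention"       # continue the 5-step workflow
```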

Progressive Disclosure

The skill references other skills (create-chart, build-metric-tree, diagnose-activation, discover-opportunities, craft-experiment-design) in the Tips section, which is good navigation. However, the main body is a monolithic wall of text that could benefit from splitting detailed benchmark data, anti-plays, and the prompt template into separate reference files. Everything is inline in one large file.

2 / 3

Total: 8 / 12

Passed
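The "copy-paste-ready artifact" missing under Actionability can be as small as a grouped mean. A hedged sketch of the D60-by-behavior comparison the review quotes, assuming a hypothetical `users` table with a boolean early-behavior flag and a boolean D60 retention flag (both column names invented for illustration):

```python
import pandas as pd

def retention_by_behavior(users: pd.DataFrame, behavior_col: str) -> pd.Series:
    # The mean of a boolean retention flag is the retention rate per group,
    # e.g. D60 retention for users who did X in their first 14 days vs. not.
    return users.groupby(behavior_col)["retained_d60"].mean()
```

Comparing the two group rates (and repeating across candidate behaviors) is the correlation analysis the review asks the skill to make executable.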

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria | Description | Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 10 / 11

Passed

Repository: amplitude/builder-skills (Reviewed)

