Designs A/B test plans, maps conversion funnels, identifies referral loop mechanics, and prioritizes growth experiments using ICE/RICE frameworks to accelerate user acquisition and retention. Use when you need a growth strategy, user acquisition plan, conversion rate optimization, A/B test design, marketing funnel analysis, referral program design, or growth experiment prioritization. Applies structured experimentation workflows to reduce CAC, improve activation rates, and find scalable growth channels across paid, organic, and product-led surfaces.
Running a Growth Experiment → Follow the Growth Experiment Workflow below.
Diagnosing a Leaky Funnel → Follow the Funnel Analysis Workflow below.
Designing a Referral Program → Use the Referral Program Checklist below.
Prioritizing Growth Ideas → Use the ICE/RICE Scoring Template below.
Use this end-to-end sequence when designing and executing a growth experiment.
Fill in every field before proceeding:
Experiment Brief
────────────────────────────────────────
Hypothesis: If [change], then [metric] will [increase/decrease] by [X]%
because [reason backed by data or user insight].
Primary Metric: [Single measurable KPI — e.g., activation rate, Day-7 retention]
Secondary Metrics: [Supporting signals to monitor for unexpected effects]
Segments: [Who sees each variant — define inclusion/exclusion criteria]
Sample Size: [Minimum detectable effect, baseline rate, α=0.05, power=0.8
→ use: n ≈ 16σ²/δ² per variant, where σ² = p(1−p) at the baseline rate, for a two-sample proportion test]
Duration: [Minimum 1 full business cycle; avoid stopping early]
Rollback Plan: [Condition that triggers immediate halt + how to revert]
Owner: [Name] | Launch Date: [Date] | Review Date: [Date]

At the review date, evaluate the result:

p < 0.05 AND lift > minimum detectable effect?
├─ YES → Ship to 100%; log as Win; update growth model
└─ NO → p ≥ 0.05?
   ├─ YES (inconclusive) → Redesign with larger sample or clearer hypothesis
   └─ NO (negative result) → Log as Loss; document why; do not re-run without new insight
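A minimal sketch of the sample-size rule and the decision tree above, assuming a two-sample proportion test at α = 0.05 and power = 0.8; the baseline rate, lift, and p-value below are illustrative placeholders.

```python
import math

def sample_size_per_variant(baseline_rate: float, min_detectable_effect: float) -> int:
    """Rule of thumb n ≈ 16·σ²/δ² for α=0.05, power=0.8, with σ² = p(1−p) at the baseline."""
    variance = baseline_rate * (1 - baseline_rate)
    return math.ceil(16 * variance / min_detectable_effect ** 2)

def decide(p_value: float, observed_lift: float, min_detectable_effect: float) -> str:
    """Mirrors the decision tree above: ship wins, redesign inconclusive tests, log losses."""
    if p_value < 0.05 and observed_lift > min_detectable_effect:
        return "Ship to 100%; log as Win; update growth model"
    if p_value >= 0.05:
        return "Inconclusive: redesign with a larger sample or clearer hypothesis"
    return "Negative result: log as Loss; do not re-run without new insight"

# Illustrative numbers: 20% baseline activation rate, 2-point absolute lift target.
print(sample_size_per_variant(0.20, 0.02))  # ≈ 6,400 users per variant
print(decide(p_value=0.03, observed_lift=0.025, min_detectable_effect=0.02))
```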
Define each stage with a measurable event:

| Stage | Event Name | Definition |
|---|---|---|
| Acquisition | session_start | First visit from any channel |
| Activation | aha_moment | Core value action (define per product) |
| Retention | D7_return | Session on Day 7 ± 1 |
| Revenue | first_payment | First completed transaction |
| Referral | invite_sent | Referral link generated and sent |
Conversion Rate (stage N→N+1) = (Users completing N+1 / Users completing N) × 100
Drop-off Rate = 100 − Conversion Rate
Revenue Leak = Drop-off Rate × Avg. LTV per user
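A small sketch of the funnel math above using the event names from the stage table; the user counts and average LTV are placeholders, and the revenue leak is read as users lost at a step × average LTV.

```python
# Ordered funnel stages from the table above, with illustrative user counts.
funnel = [
    ("session_start", 10_000),
    ("aha_moment", 3_200),
    ("D7_return", 1_400),
    ("first_payment", 420),
    ("invite_sent", 180),
]
AVG_LTV = 90.0  # placeholder average LTV per user

for (stage, users), (next_stage, next_users) in zip(funnel, funnel[1:]):
    conversion = next_users / users * 100            # Conversion Rate (stage N→N+1)
    drop_off = 100 - conversion                      # Drop-off Rate
    revenue_leak = (users - next_users) * AVG_LTV    # users lost at this step × avg. LTV
    print(f"{stage} → {next_stage}: {conversion:.1f}% convert, "
          f"{drop_off:.1f}% drop off, ~${revenue_leak:,.0f} at risk")
```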
For the priority stage, gather supporting data and user insight before designing an experiment.

Use this checklist before building or recommending a referral program.
Mechanics
Viral Coefficient Calculation
K-factor = i × c
i = average invites sent per existing user
c = conversion rate of invitees to active users
K > 1.0 → viral growth (each user generates >1 new user)
K = 0.5–1.0 → amplification (referrals meaningfully supplement other channels)
K < 0.5 → referral program is decorative; fix activation before optimizing referrals
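A quick sketch of the K-factor formula and thresholds above; the invite and conversion numbers are illustrative.

```python
def k_factor(avg_invites_per_user: float, invitee_conversion_rate: float) -> float:
    """K = i × c: invites sent per existing user × invitee-to-active conversion rate."""
    return avg_invites_per_user * invitee_conversion_rate

def classify(k: float) -> str:
    if k > 1.0:
        return "Viral growth: each user generates more than one new user"
    if k >= 0.5:
        return "Amplification: referrals meaningfully supplement other channels"
    return "Decorative: fix activation before optimizing referrals"

k = k_factor(avg_invites_per_user=2.4, invitee_conversion_rate=0.18)
print(f"K = {k:.2f} → {classify(k)}")  # K = 0.43 → Decorative: fix activation first
```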
Validation Checkpoints

Use this to prioritize a backlog of growth ideas before committing to experiments.
ICE = (Impact + Confidence + Ease) / 3
Impact (1–10): How much will this move the North Star Metric?
Confidence (1–10): How certain are we this will work? (10 = proven elsewhere)
Ease (1–10): How fast/cheap to implement? (10 = less than 1 day, no eng needed)

RICE = (Reach × Impact × Confidence) / Effort
Reach: Users/month affected
Impact: 0.25 / 0.5 / 1 / 2 / 3 (massive) — impact on primary metric per user
Confidence: % expressed as decimal (0.8 = 80% confident)
Effort: Person-weeks to design, build, and launch

Rank ideas by RICE score. Assign top 3 to active experiment slots; park the rest.
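A minimal sketch of both scoring formulas, ranking a hypothetical backlog by RICE; the idea names, scores, and effort figures are invented for illustration.

```python
from dataclasses import dataclass

def ice(impact: int, confidence: int, ease: int) -> float:
    """ICE = (Impact + Confidence + Ease) / 3, each scored 1–10."""
    return (impact + confidence + ease) / 3

@dataclass
class Idea:
    name: str
    reach: int         # users/month affected
    impact: float      # 0.25 / 0.5 / 1 / 2 / 3
    confidence: float  # expressed as a decimal, e.g. 0.8
    effort: float      # person-weeks

    @property
    def rice(self) -> float:
        return self.reach * self.impact * self.confidence / self.effort

# Hypothetical backlog, for illustration only.
backlog = [
    Idea("Onboarding checklist", reach=8_000, impact=1.0, confidence=0.8, effort=2),
    Idea("Double-sided referral reward", reach=3_000, impact=2.0, confidence=0.5, effort=4),
    Idea("Pricing page rewrite", reach=12_000, impact=0.5, confidence=0.7, effort=1),
]

for idea in sorted(backlog, key=lambda i: i.rice, reverse=True)[:3]:
    print(f"{idea.name}: RICE = {idea.rice:,.0f}")
```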
Default to shortest Time to Signal when CAC data is absent. Use paid channels to validate demand before investing in SEO or partnerships. Key decision rule: if CAC payback period exceeds 12 months, deprioritize that channel regardless of scalability.
| Channel | Relative CAC | Time to Signal | Scalability |
|---|---|---|---|
| Paid Search | $ | 1–2 weeks | High |
| Referral/Viral | $ | 2–4 weeks | Medium |
| Product-Led (PLG) | $ | 2–4 weeks | Very High |
| SEO/Content | $$ | 3–6 months | High |
| PR/Earned Media | $$ | Variable | Low |
| Partnerships | $$$ | 1–3 months | Medium |
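A short sketch of the 12-month CAC payback rule stated above the table, assuming payback = CAC ÷ (monthly revenue per user × gross margin); the channel economics shown are placeholder assumptions, not values from the table.

```python
def cac_payback_months(cac: float, monthly_revenue_per_user: float,
                       gross_margin: float = 0.8) -> float:
    """Months until gross profit per user recovers the acquisition cost."""
    return cac / (monthly_revenue_per_user * gross_margin)

# Placeholder channel economics: CAC per acquired user.
channels = {"Paid Search": 180, "Referral/Viral": 40, "Partnerships": 420}
monthly_revenue_per_user = 25.0

for name, cac in channels.items():
    months = cac_payback_months(cac, monthly_revenue_per_user)
    verdict = "deprioritize" if months > 12 else "keep"
    print(f"{name}: payback {months:.1f} months → {verdict}")
```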
The NSM must reflect value delivered to the user (not vanity metrics like total signups), correlate with long-term revenue retention, and resolve to a single number. If stakeholders debate two metrics, run a 30-day cohort correlation and pick the stronger predictor of retained revenue.
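A hedged sketch of the 30-day cohort correlation described above, assuming you can export per-cohort values of each candidate metric alongside retained revenue; the cohort values are placeholders.

```python
from statistics import correlation  # Pearson's r, Python 3.10+

# One value per signup cohort over a ~30-day window (placeholder data).
activation_rate  = [0.42, 0.45, 0.39, 0.50, 0.47]         # candidate NSM #1
d7_return_rate   = [0.18, 0.22, 0.16, 0.25, 0.21]         # candidate NSM #2
retained_revenue = [8_100, 9_000, 7_400, 10_200, 9_300]   # $ retained per cohort

for name, series in [("activation_rate", activation_rate),
                     ("d7_return_rate", d7_return_rate)]:
    print(name, round(correlation(series, retained_revenue), 2))
# Pick the candidate with the stronger correlation to retained revenue as the NSM.
```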
New Users/Month = (Organic + Paid + Referral + SEO) × Activation Rate
Net Growth = New Users − Churned Users
Revenue Growth = Net Growth × Avg. Revenue per Activated User

Populate this model with actuals monthly. The input growing fastest becomes the focus channel for the next quarter.
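A minimal sketch of the growth model as a monthly calculation; every input below is a placeholder to be replaced with actuals.

```python
# Placeholder monthly actuals — replace with real numbers each month.
last_month = {"Organic": 3_600, "Paid": 2_400, "Referral": 650, "SEO": 1_500}
this_month = {"Organic": 4_000, "Paid": 2_500, "Referral": 900, "SEO": 1_600}
activation_rate = 0.35
churned_users = 1_800
avg_revenue_per_activated_user = 22.0

new_users = sum(this_month.values()) * activation_rate        # New Users/Month
net_growth = new_users - churned_users                        # Net Growth
revenue_growth = net_growth * avg_revenue_per_activated_user  # Revenue Growth

# The fastest-growing input becomes the focus channel for the next quarter.
mom_growth = {ch: this_month[ch] / last_month[ch] - 1 for ch in this_month}
focus = max(mom_growth, key=mom_growth.get)

print(f"New users: {new_users:,.0f} | Net growth: {net_growth:,.0f} | "
      f"Revenue growth: ${revenue_growth:,.0f}")
print(f"Focus channel next quarter: {focus} (+{mom_growth[focus]:.0%} MoM)")
```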