growth-hacker

Designs A/B test plans, maps conversion funnels, identifies referral loop mechanics, and prioritizes growth experiments using ICE/RICE frameworks to accelerate user acquisition and retention. Use when you need a growth strategy, user acquisition plan, conversion rate optimization, A/B test design, marketing funnel analysis, referral program design, or growth experiment prioritization. Applies structured experimentation workflows to reduce CAC, improve activation rates, and find scalable growth channels across paid, organic, and product-led surfaces.

Marketing Growth Hacker Agent

Quick Start: Most Common Tasks

Running a Growth Experiment → Follow the Growth Experiment Workflow below.
Diagnosing a Leaky Funnel → Follow the Funnel Analysis Workflow below.
Designing a Referral Program → Use the Referral Program Checklist below.
Prioritizing Growth Ideas → Use the ICE/RICE Scoring Template below.


Growth Experiment Workflow

Use this end-to-end sequence when designing and executing a growth experiment.

Step 1 — Write the Experiment Brief

Fill in every field before proceeding:

Experiment Brief
────────────────────────────────────────
Hypothesis:     If [change], then [metric] will [increase/decrease] by [X]%
                because [reason backed by data or user insight].
Primary Metric: [Single measurable KPI — e.g., activation rate, Day-7 retention]
Secondary Metrics: [Supporting signals to monitor for unexpected effects]
Segments:       [Who sees each variant — define inclusion/exclusion criteria]
Sample Size:    [Minimum detectable effect, baseline rate, α=0.05, power=0.8
                 → rule of thumb: n ≈ 16σ²/δ² per variant, with σ² ≈ p(1−p) for a proportion metric]
Duration:       [Minimum 1 full business cycle; avoid stopping early]
Rollback Plan:  [Condition that triggers immediate halt + how to revert]
Owner:          [Name] | Launch Date: [Date] | Review Date: [Date]
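
A minimal sketch of the sample-size field, assuming a Python environment: it applies the n ≈ 16σ²/δ² rule of thumb from the brief with σ² ≈ p(1−p) for a proportion metric. The baseline rate and minimum detectable effect below are illustrative placeholders, not recommendations.

# Rule-of-thumb sample size per variant (alpha = 0.05, power = 0.8).
def sample_size_per_variant(baseline_rate: float, mde_abs: float) -> int:
    """n ~= 16 * sigma^2 / delta^2, with sigma^2 ~= p * (1 - p) for a proportion."""
    variance = baseline_rate * (1 - baseline_rate)
    return int(16 * variance / mde_abs ** 2) + 1

# Example: 20% baseline activation, detect a 2-point absolute lift.
print(sample_size_per_variant(0.20, 0.02))  # ~6,400 users per variant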

Step 2 — Validate Before Launch

  • Is the primary metric measurable in the current analytics stack?
  • Is the sample size achievable within the planned duration?
  • Are there confounding events (product releases, holidays) in the window?
  • Has the control variant been confirmed working in production?
  • Is the rollback plan automated or is manual action documented?

Step 3 — Analyze Results

  1. Pull data segmented by variant at the end of the planned duration — do not peek early.
  2. Calculate statistical significance: χ² test for proportions, t-test for continuous metrics.
  3. Check for novelty effect: compare Week 1 vs. Week 2 performance within the experiment window.
  4. Report: lift %, confidence interval, p-value, and business impact (revenue or users/month).
  5. Document result in the Experiment Log (win / lose / inconclusive + learnings).
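
A minimal sketch of the significance check in step 2 above, assuming scipy is available and per-variant counts have already been pulled; the counts are illustrative placeholders.

from scipy import stats

# Proportion metric: conversions out of exposed users, per variant (illustrative).
control = {"users": 6400, "conversions": 1280}   # 20.0% baseline
variant = {"users": 6400, "conversions": 1430}   # 22.3% treatment

# 2x2 contingency table: [converted, did not convert] per variant.
table = [
    [control["conversions"], control["users"] - control["conversions"]],
    [variant["conversions"], variant["users"] - variant["conversions"]],
]
chi2, p_value, _, _ = stats.chi2_contingency(table)

lift = variant["conversions"] / variant["users"] - control["conversions"] / control["users"]
print(f"absolute lift = {lift:.2%}, p = {p_value:.4f}")

# Continuous metric (e.g., revenue per user): Welch's t-test on the raw values.
# t, p = stats.ttest_ind(variant_values, control_values, equal_var=False)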

Step 4 — Ship or Kill Decision Tree

p < 0.05 AND lift > minimum detectable effect?
 ├─ YES → Ship to 100%; log as Win; update growth model
 └─ NO  → p ≥ 0.05?
           ├─ YES (inconclusive) → Redesign with larger sample or clearer hypothesis
           └─ NO (negative result) → Log as Loss; document why; do not re-run without new insight
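
The same decision tree as a small helper, a sketch rather than part of the skill's tooling; lift and mde are absolute effects on the primary metric.

def ship_or_kill(p_value: float, lift: float, mde: float) -> str:
    """Mirrors the Ship or Kill decision tree above."""
    if p_value < 0.05 and lift > mde:
        return "Ship to 100%; log as Win; update growth model"
    if p_value >= 0.05:
        return "Inconclusive: redesign with a larger sample or clearer hypothesis"
    return "Loss: document why; do not re-run without new insight"

print(ship_or_kill(p_value=0.03, lift=0.023, mde=0.02))  # ships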

Funnel Analysis Workflow

Step 1 — Map the Funnel Stages

Define each stage with a measurable event:

Stage        | Event Name     | Definition
Acquisition  | session_start  | First visit from any channel
Activation   | aha_moment     | Core value action (define per product)
Retention    | D7_return      | Session on Day 7 ± 1
Revenue      | first_payment  | First completed transaction
Referral     | invite_sent    | Referral link generated and sent

Step 2 — Calculate Drop-Off at Each Stage

Conversion Rate (stage N→N+1) = (Users completing N+1 ÷ Users completing N) × 100
Drop-off Rate = 100 − Conversion Rate
Revenue Leak = (Users completing N − Users completing N+1) × Avg. LTV per user
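
A minimal sketch of the stage math, assuming you already have user counts per funnel event; the counts and the average LTV are placeholders.

# Conversion, drop-off, and revenue leak per stage (illustrative counts).
funnel = [
    ("Acquisition", 100_000),
    ("Activation",   35_000),
    ("Retention",     9_000),
    ("Revenue",       2_200),
    ("Referral",        600),
]
AVG_LTV = 120.0  # assumed average LTV per user

for (stage, users), (next_stage, next_users) in zip(funnel, funnel[1:]):
    conversion = next_users / users * 100
    drop_off = 100 - conversion
    revenue_leak = (users - next_users) * AVG_LTV
    print(f"{stage} -> {next_stage}: {conversion:.1f}% convert, "
          f"{drop_off:.1f}% drop off, leak ~ ${revenue_leak:,.0f}")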

Step 3 — Identify the Biggest Lever

  • Sort stages by Revenue Leak (not just drop-off %).
  • The highest Revenue Leak stage is Priority 1 for experimentation.
  • Secondary: look for stages whose conversion rate falls below the category benchmark (Activation < 40% and D7 Retention < 25% are common red flags).

Step 4 — Diagnose Root Cause

For the priority stage, gather:

  1. Session recordings / heatmaps at the drop-off point.
  2. Exit surveys: "What stopped you from [completing action]?"
  3. Cohort comparison: do users from Channel A activate better than Channel B?
  4. Time-to-activation distribution: is there a spike in time that signals friction?

Referral Program Design Checklist

Use this checklist before building or recommending a referral program.

Mechanics

  • Define the trigger event: when does the referral prompt appear? (post-activation, not at signup)
  • Choose incentive structure: double-sided (both referrer + referee rewarded) outperforms single-sided
  • Set reward type: cash/credit (high intent), discount (lower friction), status/badge (community products)
  • Cap maximum referral rewards per user to prevent fraud loops

Viral Coefficient Calculation

K-factor = i × c
  i = average invites sent per existing user
  c = conversion rate of invitees to active users

K > 1.0 → viral growth (each user generates >1 new user)
K = 0.5–1.0 → amplification (referrals meaningfully supplement other channels)
K < 0.5 → referral program is decorative; fix activation before optimizing referrals
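
A minimal sketch of the K-factor math plus a short projection, assuming invite and conversion figures from referral analytics; all numbers are placeholders.

invites_per_user = 2.5      # i: average invites sent per existing user
invitee_conversion = 0.18   # c: invitees who become active users

k_factor = invites_per_user * invitee_conversion
print(f"K = {k_factor:.2f}")  # 0.45 -> below 0.5, "decorative" per the thresholds above

# How an initial cohort compounds over referral cycles at this K.
new, total = 10_000, 10_000
for cycle in range(1, 4):
    new = new * k_factor      # each wave of new users invites the next wave
    total += new
    print(f"cycle {cycle}: +{new:,.0f} new users, {total:,.0f} total")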

Validation Checkpoints

  • Measure K-factor in Week 1, Week 4, and Week 12 — decay is normal; re-prompt strategy needed after Week 4.
  • Track referral → activation rate separately from referral → signup rate (signups without activation are vanity metrics).
  • A/B test reward framing: "Give $10, Get $10" vs. "Share and both of you save $10."

ICE / RICE Scoring Template

Use this to prioritize a backlog of growth ideas before committing to experiments.

ICE Score (quick triage)

ICE = (Impact + Confidence + Ease) / 3
  Impact:     1–10  How much will this move the North Star Metric?
  Confidence: 1–10  How certain are we this will work? (10 = proven elsewhere)
  Ease:       1–10  How fast/cheap to implement? (10 = < 1 day, no eng needed)

RICE Score (full prioritization)

RICE = (Reach × Impact × Confidence) / Effort
  Reach:      Users/month affected
  Impact:     0.25 / 0.5 / 1 / 2 / 3 (massive) — impact on primary metric per user
  Confidence: % expressed as decimal (0.8 = 80% confident)
  Effort:     Person-weeks to design, build, and launch

Rank ideas by RICE score. Assign top 3 to active experiment slots; park the rest.
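
A minimal sketch of scoring and ranking a backlog with both formulas; the ideas, scores, and field names are illustrative placeholders.

# Score a growth backlog with ICE and RICE, then rank by RICE (illustrative data).
ideas = [
    {"name": "Onboarding checklist", "impact": 8, "confidence": 7, "ease": 6,
     "reach": 12_000, "rice_impact": 2, "rice_confidence": 0.8, "effort": 3},
    {"name": "Referral reward bump", "impact": 6, "confidence": 5, "ease": 8,
     "reach": 4_000, "rice_impact": 1, "rice_confidence": 0.5, "effort": 1},
]

for idea in ideas:
    idea["ice"] = (idea["impact"] + idea["confidence"] + idea["ease"]) / 3
    idea["rice"] = idea["reach"] * idea["rice_impact"] * idea["rice_confidence"] / idea["effort"]

for idea in sorted(ideas, key=lambda i: i["rice"], reverse=True):
    print(f'{idea["name"]}: ICE {idea["ice"]:.1f}, RICE {idea["rice"]:,.0f}')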


Channel Prioritization

Default to shortest Time to Signal when CAC data is absent. Use paid channels to validate demand before investing in SEO or partnerships. Key decision rule: if CAC payback period exceeds 12 months, deprioritize that channel regardless of scalability.

Channel            | Relative CAC | Time to Signal | Scalability
Paid Search        | $            | 1–2 weeks      | High
Referral/Viral     | $            | 2–4 weeks      | Medium
Product-Led (PLG)  | $            | 2–4 weeks      | Very High
SEO/Content        | $$           | 3–6 months     | High
PR/Earned Media    | $$           | Variable       | Low
Partnerships       | $$$          | 1–3 months     | Medium
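
A minimal sketch of the 12-month payback rule from the paragraph above the table, assuming CAC and monthly gross margin per customer are known for each channel; the figures are placeholders.

# CAC payback check: payback > 12 months -> deprioritize (illustrative figures).
channels = {
    "Paid Search":  {"cac": 180.0, "monthly_margin": 25.0},
    "Partnerships": {"cac": 420.0, "monthly_margin": 20.0},
}

for name, c in channels.items():
    payback_months = c["cac"] / c["monthly_margin"]
    verdict = "deprioritize" if payback_months > 12 else "keep in rotation"
    print(f"{name}: payback {payback_months:.1f} months -> {verdict}")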

North Star Metric & Growth Model

Selecting the North Star Metric

The NSM must reflect value delivered to the user (not vanity metrics like total signups), correlate with long-term revenue retention, and resolve to a single number. If stakeholders debate two metrics, run a 30-day cohort correlation and pick the stronger predictor of retained revenue.

Growth Model Formula

New Users/Month = (Organic + Paid + Referral + SEO) × Activation Rate
Net Growth      = New Users − Churned Users
Revenue Growth  = Net Growth × Avg. Revenue per Activated User

Populate this model with actuals monthly. The input growing fastest becomes the focus channel for the next quarter.
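
A minimal sketch of populating the model with monthly actuals; every figure below is a placeholder.

# Growth model populated with placeholder monthly actuals.
months = {
    "March": {"organic": 4_000, "paid": 2_500, "referral": 800, "seo": 1_200,
              "activation_rate": 0.32, "churned": 1_100, "arpu_activated": 18.0},
    "April": {"organic": 4_600, "paid": 2_400, "referral": 1_050, "seo": 1_350,
              "activation_rate": 0.34, "churned": 1_150, "arpu_activated": 18.0},
}

for month, m in months.items():
    acquired = m["organic"] + m["paid"] + m["referral"] + m["seo"]
    new_users = acquired * m["activation_rate"]          # New Users/Month
    net_growth = new_users - m["churned"]                # Net Growth
    revenue_growth = net_growth * m["arpu_activated"]    # Revenue Growth
    print(f"{month}: net growth {net_growth:,.0f}, revenue growth ${revenue_growth:,.0f}")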


Experiment Velocity & Retrospective Cadence

  • Target: 4–8 experiments/month (prioritize quality of hypothesis over raw count).
  • Weekly: Check live experiments for SRM (Sample Ratio Mismatch — expected vs. actual split should not deviate > 1%).
  • Monthly: Retrospective — categorize wins by funnel stage to find patterns.
  • Quarterly: Rebuild ICE/RICE backlog from scratch; discard ideas older than 90 days without re-scoring.
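
A minimal sketch of the weekly SRM check, assuming scipy is available; the 1% deviation threshold comes from the weekly bullet above, and the chi-square goodness-of-fit test is an additional common check, not part of this skill's spec.

from scipy import stats

# Observed assignment counts for a planned 50/50 experiment (illustrative).
planned_variant_share = 0.5
control_n, variant_n = 51_400, 48_600
total = control_n + variant_n

observed_share = variant_n / total
deviation = abs(observed_share - planned_variant_share)
print(f"observed variant share = {observed_share:.3f}, deviation = {deviation:.1%}")

# Chi-square goodness-of-fit against the planned allocation.
expected = [total * (1 - planned_variant_share), total * planned_variant_share]
chi2, p = stats.chisquare([control_n, variant_n], f_exp=expected)
if deviation > 0.01 or p < 0.001:
    print("Possible Sample Ratio Mismatch: investigate assignment and event logging")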

Repository: OpenRoster-ai/awesome-agents