Design, validate, and plan a startup from scratch. Covers market research, competitive analysis, business model, brand identity, product definition, financial projections, and validation experiments. Trigger when the user has a startup idea to explore, wants to validate a business concept, needs a business plan or lean canvas, asks for market sizing or competitive landscape, wants brand positioning or go-to-market strategy, or says anything like "I have an idea for..." or "is this idea worth pursuing". Also handles resuming from a previous checkpoint.
Install with the Tessl CLI:

```shell
npx tessl i github:ferdinandobons/startup-skill --skill startup-design94
```
A structured, multi-phase skill that takes a startup idea from raw concept to validated design. It produces a complete set of markdown documents organized by domain, with built-in progress tracking so work survives session interruptions.
The process has 8 phases executed sequentially. Each phase produces output files and updates the progress tracker. If a session is interrupted, resume from the last completed checkpoint.
INTAKE → BRAINSTORM → RESEARCH → STRATEGY → BRAND → PRODUCT → FINANCIAL → VALIDATION

Full Mode (default): Execute all 8 phases in order. Best for thoroughly designing a startup from scratch.
Fast Track Mode: When the user says they want a "quick validation," "rapid assessment," or similar, or when time/budget is clearly limited, run a compressed version:
Fast Track produces fewer files but still gives the founder a clear go/no-go signal with evidence. Note in PROGRESS.md that Fast Track mode was used, so a future session can expand to full mode if the idea passes validation.
Default output language is English. If the user writes in another language or explicitly requests one, use that language for all outputs instead.
Reference: Read references/output-guidelines.md once at the start. It defines the standard file header/footer (title, date, phase, confidence, flags), cross-phase referencing format, quality examples of good vs. bad output, and how to handle mid-process pivots.
Before anything else, check if a PROGRESS.md file exists in the working directory (or a project subdirectory). If it does, read it and resume from the last incomplete phase. Tell the user: "I found progress from a previous session. You completed [phases]. Picking up from [next phase]."
If no progress file exists, start from Phase 1.
The quality of everything downstream depends on how much context you extract now. Don't rush this — a thorough intake saves hours of misdirection later.
Ask these in a conversational flow, not as a rigid checklist. Group related questions naturally and adapt based on answers. Not every question applies to every startup — skip what's irrelevant.
The Idea
The Founder(s)
The Market
The Business
Constraints & Preferences
After the core questions, ask these deliberately uncomfortable questions. They surface blind spots early:
Don't skip these — they set the tone for the entire process and signal that this is an honest assessment, not a cheerleading session.
Save the consolidated intake to {project-name}/00-intake/brief.md with all captured information organized clearly. The project name should be derived from the startup idea (kebab-case, e.g., pet-health-tracker).
Create PROGRESS.md at the project root with: project name, start date, language, a checklist of all 8 phases (mark Phase 1 complete), and a Notes section for session state.
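The project-name derivation can be sketched as a tiny helper. This is only an illustration of the kebab-case rule above — the function name and stopword list are invented for the sketch, not part of the skill:

```python
import re

def to_project_name(idea: str, max_words: int = 4) -> str:
    """Derive a kebab-case project name from a startup idea (illustrative sketch)."""
    words = re.findall(r"[a-z0-9]+", idea.lower())
    # Drop filler words so the slug stays short and descriptive (assumed stopword list)
    stopwords = {"a", "an", "the", "for", "of", "to", "and"}
    kept = [w for w in words if w not in stopwords][:max_words]
    return "-".join(kept)

print(to_project_name("A pet health tracker"))  # → pet-health-tracker
```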
Before diving into research, explore the idea space. This prevents premature convergence on the first version of the idea.
Diverge — Generate 5-8 variations of the core idea. Push boundaries:
Analyze — For each variation, note:
Converge — Present the variations to the user. Help them identify which elements resonate. The goal isn't to pick one variation — it's to enrich the original idea with insights from the exploration.
Refine — Based on the user's reactions, crystallize the refined idea. Update the brief if the idea evolved significantly.
Save to {project-name}/00-intake/brainstorm.md. Update PROGRESS.md.
This is the most resource-intensive phase. It uses 4 sequential waves of web research, each building on the previous one's findings.
Check if the Agent tool is available (Claude Code) or not (Claude.ai, other environments):
Phase 3 requires WebSearch. In Claude Code, the tool is always available — if the user hasn't pre-approved it, the system will prompt them for each search. If the user denies permission, or in environments where WebSearch doesn't exist at all, fall back to Knowledge-Based Research Mode: use your training data, clearly mark all findings with [Knowledge-Based — not live data, verify independently], reduce confidence ratings by one level, and recommend the founder verify key claims manually. Note the mode in PROGRESS.md so future sessions know the research wasn't web-sourced.
References — Read the relevant file for each wave:

- references/research-principles.md — Cross-cutting rules (source quality, cross-referencing, quantification, handling search failures). Read this FIRST.
- references/research-wave-1-market.md — Agent templates for Wave 1 (market sizing, trends, regulatory)
- references/research-wave-2-competitors.md — Agent templates for Wave 2 (direct, indirect, GTM analysis)
- references/research-wave-3-customers.md — Agent templates for Wave 3 (customer voice, demand, audience)
- references/research-wave-4-distribution.md — Agent templates for Wave 4 (channels, geographic entry)
- references/research-synthesis.md — How to synthesize raw findings into final deliverables

Read only the principles file plus the wave file you're currently executing. Don't load all wave files at once.
Wave 1: Market Landscape (3 agents in parallel, or 3 sequential research blocks)
Complete Wave 1 before starting Wave 2. Pass key findings as context.
Wave 2: Competitive Analysis (3 agents in parallel, or 3 sequential research blocks)
Complete Wave 2 before starting Wave 3. Pass competitor list and GTM findings as context.
Wave 3: Customer & Demand (3 agents in parallel, or 3 sequential research blocks)
Complete Wave 3 before starting Wave 4.
Wave 4: Distribution & Partnerships (2 agents in parallel, or 2 sequential research blocks)
All agents save raw findings to {project-name}/01-discovery/raw/. After all waves complete, synthesize into 4 polished deliverables. The synthesis must:
- {project-name}/01-discovery/market-analysis.md — Market size (TAM/SAM/SOM), growth, maturity, regulatory summary, timing assessment
- {project-name}/01-discovery/competitor-landscape.md — Competitor profiles, structured comparison matrix (table with columns: Name, Product, Pricing, Target, Funding, Traction, Key Strength, Key Weakness), positioning map, platform risk, vulnerability analysis
- {project-name}/01-discovery/target-audience.md — Persona(s), pain hierarchy, jobs-to-be-done, language map, buying behavior, channels
- {project-name}/01-discovery/industry-trends.md — Tech trends, investment signals, behavioral shifts, regulatory trajectory, strategic implications
- {project-name}/01-discovery/confidence-dashboard.md — Summary of data quality across all research. For each major claim, list: the claim, source tier (1/2/3), number of corroborating sources, confidence level (High/Medium/Low), and data age. This tells the founder where they're standing on solid ground vs. thin ice.

Update PROGRESS.md.
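The TAM/SAM/SOM figures in market-analysis.md reduce to layered arithmetic; a minimal sketch in which every number is an invented assumption, not data:

```python
# Top-down market sizing sketch — every figure below is an illustrative assumption.
tam_customers = 2_000_000   # all potential customers worldwide
acv = 300.0                 # assumed annual contract value per customer ($)
tam = tam_customers * acv   # Total Addressable Market

sam_share = 0.15            # segment reachable with this product/geo/language
sam = tam * sam_share       # Serviceable Addressable Market

som_share = 0.02            # realistically winnable share of SAM in ~3 years
som = sam * som_share       # Serviceable Obtainable Market

print(f"TAM ${tam:,.0f} | SAM ${sam:,.0f} | SOM ${som:,.0f}")
# → TAM $600,000,000 | SAM $90,000,000 | SOM $1,800,000
```

The useful part for the deliverable is the chain of explicit share assumptions, each of which should cite a research finding rather than a guess.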
Before investing time in Strategy through Validation, pause and present the founder with an honest assessment based on research findings. This is a decision point, not a formality.
Present a brief summary: "Here's what the research found." Cover market size, competition intensity, customer demand signals, and timing. Then give a clear recommendation:
Ask the founder: "Based on this, do you want to continue to full strategy, pivot the idea, or stop here?" Respect their decision, but make sure it's an informed one. Save the gate assessment in {project-name}/01-discovery/research-gate.md.
With research in hand, define the strategic foundations. Each document should reference specific findings from Phase 3 — strategy disconnected from research is just guessing.
Reference: Read references/frameworks.md for canonical definitions of Lean Canvas, April Dunford Positioning, Value Proposition Canvas, and RICE/MoSCoW prioritization. Use these to ensure consistent, accurate application of each framework.
Build a complete Lean Canvas (1-page business model) in 02-strategy/lean-canvas.md:
In 02-strategy/value-proposition.md, define:
In 02-strategy/business-model.md, detail:
In 02-strategy/positioning.md, using April Dunford's positioning framework:
In 02-strategy/go-to-market.md:
Update PROGRESS.md.
Checkpoint: Before starting, briefly present the strategy summary to the founder: positioning, target market, business model. Ask: "Does this reflect your vision? Anything to adjust before we build the brand on top of it?"
Translate strategy into brand identity. The brand should feel like a natural extension of the positioning — not an afterthought.
In 03-brand/mission-vision-values.md:
Generate 2-3 options for mission and vision for the user to choose from or remix.
In 03-brand/tone-of-voice.md:
In 03-brand/brand-personality.md:
Update PROGRESS.md.
Define the product enough to start building or to brief a development team. Use the competitor feature analysis from 01-discovery/competitor-landscape.md and customer pain hierarchy from 01-discovery/target-audience.md to inform feature decisions — don't design in a vacuum.
Reference: Use RICE or MoSCoW from references/frameworks.md for feature prioritization.
In 04-product/mvp-definition.md:
In 04-product/feature-prioritization.md:
In 04-product/user-journey.md:
Update PROGRESS.md.
Checkpoint: Before projections, confirm key assumptions with the founder: pricing, target customer volume, team size, timeline. These directly drive the numbers — getting them wrong here means the projections are fiction.
Ground the strategy in numbers. Be honest about assumptions — label everything as estimated and explain the reasoning. Pull unit economics benchmarks (CAC, LTV, churn, ACV) from 01-discovery/market-analysis.md and competitor pricing from 01-discovery/competitor-landscape.md to anchor projections in real data.
Reference: Read references/industry-benchmarks.md for standard metrics by business model type (SaaS, marketplace, e-commerce, etc.). Compare the founder's projections against these benchmarks and flag any that fall outside normal ranges — both too pessimistic and too optimistic.
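The benchmark comparison can be as simple as a range check per metric. In this sketch the metric names and ranges are placeholders standing in for the real values in references/industry-benchmarks.md:

```python
# Flag projected metrics that fall outside assumed benchmark ranges.
# Metric names and bands here are placeholders, not real benchmark data.
benchmark_ranges = {
    "monthly_churn": (0.01, 0.05),   # assumed healthy band for monthly churn
    "ltv_cac_ratio": (3.0, 5.0),     # assumed healthy LTV:CAC band
    "gross_margin": (0.70, 0.90),    # assumed typical gross-margin band
}

projections = {"monthly_churn": 0.002, "ltv_cac_ratio": 8.0, "gross_margin": 0.80}

flags = []
for metric, value in projections.items():
    lo, hi = benchmark_ranges[metric]
    if not lo <= value <= hi:
        side = "below" if value < lo else "above"
        flags.append(f"{metric}={value} is {side} the normal range [{lo}, {hi}]")

for flag in flags:
    print(flag)  # churn and LTV:CAC get flagged here; gross margin passes
```

Note that both directions get flagged: suspiciously good numbers (0.2% monthly churn) deserve the same scrutiny as bad ones.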
In 05-financial/revenue-model.md:
In 05-financial/cost-structure.md:
In 05-financial/projections.md:
Update PROGRESS.md.
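The projections above rest on a handful of unit-economics formulas; a minimal sketch with invented inputs (every number is an assumption for illustration):

```python
# Unit-economics sketch — all inputs are invented assumptions, not benchmarks.
arpu_monthly = 50.0    # average revenue per user per month ($)
gross_margin = 0.80    # fraction of revenue kept after COGS
monthly_churn = 0.04   # fraction of customers lost per month
cac = 400.0            # assumed cost to acquire one customer ($)

avg_lifetime_months = 1 / monthly_churn                   # 25 months
ltv = arpu_monthly * gross_margin * avg_lifetime_months   # 50 * 0.8 * 25 = 1000
ltv_cac = ltv / cac                                       # 2.5
payback_months = cac / (arpu_monthly * gross_margin)      # 400 / 40 = 10

print(f"LTV ${ltv:.0f} | LTV:CAC {ltv_cac:.1f} | payback {payback_months:.0f} months")
# → LTV $1000 | LTV:CAC 2.5 | payback 10 months
```

Writing projections this way keeps every assumption visible and individually challengeable, which is exactly what the benchmark comparison needs.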
This is the most actionable phase — it tells the founder exactly what to do next to test whether the idea works.
In 06-validation/validation-playbook.md:
Tailor experiments to the specific idea — a B2B SaaS needs different validation than a consumer marketplace.
In 06-validation/risk-analysis.md:
In 06-validation/assumptions-tracker.md:
Format as a table for easy scanning and updating.
In 06-validation/experiment-design.md:
In 06-validation/kill-criteria.md, define 5-7 specific, measurable conditions under which the founder should stop or pivot. Tie each to a validation experiment. Be specific: "If fewer than 3/10 interview subjects say they'd pay $X" not "if there's no demand." This protects the founder from sunk-cost thinking.
At the end of the validation section, produce a summary scorecard in 06-validation/scorecard.md:
| Dimension | Score (1-10) | Rationale |
|---|---|---|
| Problem severity | | |
| Market size | | |
| Competitive advantage | | |
| Feasibility | | |
| Business model clarity | | |
| Founder-market fit | | |
| Timing | | |
| Overall | | |
Be honest. If the idea has weaknesses, say so clearly. The goal is to help the founder make a good decision, not to validate their ego. Include a clear Verdict paragraph after the table with an unambiguous recommendation (see the scoring guide in the Radical Honesty Protocol).
Update PROGRESS.md — mark all phases complete.
After all phases are complete, produce two final files:
README.md at the project root — executive summary:
action-plan-30-days.md — concrete weekly plan for the first month:
Anti-pattern check: Before finalizing, scan the entire output for common founder anti-patterns and flag any you detect: "solution looking for a problem," "boiling the ocean" (too many features/markets at once), "premature scaling," "vanity metrics," "building in stealth too long," "ignoring unit economics." Include a brief Anti-Patterns Detected section in the README if any are present.
Reference: Read references/honesty-protocol.md at the start of every session for the full protocol. The key rules are summarized here.
This skill helps founders make good decisions, not feel good. Honesty is non-negotiable:
The references/ directory contains supporting documentation. Read only what you need for the current phase.
| File | When to Read | Lines |
|---|---|---|
| output-guidelines.md | At the start of every session (once) | ~70 |
| honesty-protocol.md | At the start of every session (once) | ~80 |
| research-principles.md | Before starting Phase 3 (once) | ~60 |
| research-wave-1-market.md | When spawning Wave 1 agents | ~130 |
| research-wave-2-competitors.md | When spawning Wave 2 agents | ~170 |
| research-wave-3-customers.md | When spawning Wave 3 agents | ~170 |
| research-wave-4-distribution.md | When spawning Wave 4 agents | ~110 |
| research-synthesis.md | After all waves complete, before writing final files | ~90 |
| frameworks.md | During Phase 4 (Strategy) and Phase 6 (Product) | ~110 |
| industry-benchmarks.md | During Phase 7 (Financial) | ~80 |