
ai-startup-building

Builds AI-native products using Dan Shipper's 5-product playbook and Brandon Chu's AI product frameworks. Use when implementing prompt engineering, creating AI-native UX, scaling AI products, or optimizing costs. Focuses on 2025+ best practices.


Quality: 58% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./ai-startup-building/SKILL.md

Quality

Discovery: 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description has good structural completeness with explicit 'what' and 'when' clauses, and references specific frameworks that provide some distinctiveness. However, the actual capabilities described are fairly high-level categories rather than concrete actions, and the trigger terms could be expanded to capture more natural user language around AI product development.

Suggestions

Add more specific concrete actions like 'design conversation flows', 'structure prompts for reliability', 'plan token usage budgets', or 'architect human-in-the-loop workflows'

Expand trigger terms to include natural variations users might say: 'LLM app', 'chatbot', 'AI startup', 'GPT wrapper', 'AI feature design', 'product-market fit for AI'
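
Putting both suggestions together, a revised description could look like the following sketch. The wording is illustrative, assembled from the trigger terms and concrete actions listed above, not an official rewrite:

```yaml
description: >
  Builds AI-native products using Dan Shipper's 5-product playbook and
  Brandon Chu's AI product frameworks. Covers designing conversation flows,
  structuring prompts for reliability, planning token usage budgets, and
  architecting human-in-the-loop workflows. Use when building an LLM app,
  chatbot, GPT wrapper, or AI feature, scaling an AI startup, or optimizing
  AI product costs.
```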

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (AI-native products) and references specific frameworks (Dan Shipper's 5-product playbook, Brandon Chu's AI product frameworks), but the actual actions are somewhat vague: 'implementing prompt engineering', 'creating AI-native UX', 'scaling', and 'optimizing costs' are broad categories rather than concrete actions. | 2 / 3 |
| Completeness | Clearly answers both what ('Builds AI-native products using Dan Shipper's 5-product playbook and Brandon Chu's AI product frameworks') and when ('Use when implementing prompt engineering, creating AI-native UX, scaling AI products, or optimizing costs') with an explicit 'Use when...' clause. | 3 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'prompt engineering', 'AI-native UX', 'AI products', and 'costs', but is missing common variations users might say, such as 'LLM app', 'chatbot design', 'AI startup', 'product strategy', or 'AI feature'. The framework author names are niche triggers that few users would naturally mention. | 2 / 3 |
| Distinctiveness / Conflict Risk | The specific framework references (Dan Shipper, Brandon Chu) provide some distinctiveness, but terms like 'prompt engineering' and 'AI products' could overlap with general coding skills or other AI-related skills. The '2025+ best practices' adds some temporal specificity but is vague. | 2 / 3 |
| Total | | 9 / 12 (Passed) |

Implementation: 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides a reasonable overview of AI-native product patterns with useful templates and checklists, but lacks the executable precision and validation workflows needed for high-quality guidance. The content includes unnecessary padding (activation conditions, quotes) and the code examples are illustrative rather than copy-paste ready.

Suggestions

Replace illustrative TypeScript with executable code including actual imports and complete helper function implementations, or explicitly note these are patterns to adapt

Add a concrete workflow with validation steps: 'Build → Measure latency → Implement caching → Verify hit rate → Add model routing → Compare costs'

Remove the 'When This Skill Activates' section and quotes - these don't add actionable value

Convert the cost analysis template placeholders to a worked example with real numbers that Claude can reference
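
To illustrate the last suggestion, a worked cost example might look like this sketch. The per-token prices, request volume, and token counts below are hypothetical placeholders chosen for the arithmetic, not figures from the skill or from any provider's price list:

```typescript
// Hypothetical worked cost example: estimate monthly LLM spend for a feature.
// All numbers here are illustrative assumptions, not real quotes.

interface ModelPricing {
  inputPerMTok: number;  // USD per 1M input tokens
  outputPerMTok: number; // USD per 1M output tokens
}

function monthlyCost(
  pricing: ModelPricing,
  requestsPerDay: number,
  avgInputTokens: number,
  avgOutputTokens: number,
): number {
  const perRequest =
    (avgInputTokens / 1_000_000) * pricing.inputPerMTok +
    (avgOutputTokens / 1_000_000) * pricing.outputPerMTok;
  return perRequest * requestsPerDay * 30; // 30-day month
}

// Example: 10k requests/day, 1,200 input + 300 output tokens per request,
// at assumed prices of $3 / $15 per million tokens.
const cost = monthlyCost(
  { inputPerMTok: 3, outputPerMTok: 15 },
  10_000,
  1_200,
  300,
);
console.log(cost.toFixed(2)); // 2430.00
```

A worked row like this ($0.0081 per request, $2,430 per month) gives an agent concrete numbers to pattern-match against, which is what the template placeholders currently lack.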

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill contains some unnecessary padding, like the 'When This Skill Activates' section (Claude knows when to use skills) and quotes that don't add actionable value. However, the code examples and templates are reasonably efficient. | 2 / 3 |
| Actionability | Provides TypeScript code patterns and markdown templates, but the code is more illustrative than executable (missing imports, undefined helper functions like checkCache, isSimple, and sleep). The cost analysis template uses placeholders rather than concrete examples. | 2 / 3 |
| Workflow Clarity | Checklists provide structure, but there is no clear sequenced workflow for building an AI product. Validation checkpoints are missing: how do you verify caching works? How do you test model routing decisions? There are no feedback loops for iteration. | 2 / 3 |
| Progressive Disclosure | Content is organized into sections with headers, but everything is in one file with no references to external documentation. The 'Real-World Examples' and 'Key Quotes' sections add bulk without pointing to deeper resources. | 2 / 3 |
| Total | | 8 / 12 (Passed) |
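
For reference, the undefined helpers flagged under Actionability (checkCache, isSimple, sleep) could be made concrete along these lines. This is a minimal in-memory sketch assuming a plain Map-backed cache and a crude length heuristic for routing, not the skill's actual implementation:

```typescript
// Minimal, self-contained versions of the helpers the review flags as
// undefined. Illustrative only: no eviction, TTL, or real routing logic.

const cache = new Map<string, string>();

// Return a cached response for a prompt, or undefined on a miss.
function checkCache(prompt: string): string | undefined {
  return cache.get(prompt);
}

// Crude routing heuristic: short, single-line prompts go to a cheap model.
function isSimple(prompt: string): boolean {
  return prompt.length < 200 && !prompt.includes("\n");
}

// Promise-based sleep, e.g. for retry backoff between API calls.
function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// Usage: seed the cache, then demonstrate a hit and the routing check.
cache.set("hello", "hi there");
console.log(checkCache("hello")); // "hi there"
console.log(isSimple("hello"));   // true
```

Even stubs like these make the skill's examples runnable as-is, which is the bar the Actionability dimension is measuring.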

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: menkesu/awesome-pm-skills (Reviewed)

