
what-would-lenny-do

Answers product strategy, growth, pricing, hiring, and leadership questions using Lenny Rachitsky's archive. ONLY use this skill if the `lennysdata` MCP server is connected and its tools (search_content, read_content, etc.) are available. If the lennysdata MCP is not connected, do NOT use this skill — respond using your own knowledge instead.

82

Quality: 77%

Does it follow best practices?

Impact: Pending

No eval scenarios have been run

Security (by Snyk): Advisory

Suggest reviewing before use

Optimize this skill with Tessl

npx tessl skill review --optimize ./product-skills/skills/what-would-lenny-do/SKILL.md

Quality

Discovery: 77%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description specifies concrete domains and gives explicit conditional usage guidance tied to MCP server availability. However, its trigger terms are generic product-management concepts that could conflict with other skills, and it would benefit from user-facing trigger terms such as 'Lenny' or 'newsletter'. The technical gating condition (the MCP server check) is a strong distinguishing feature, but it does not help with natural-language matching.

Suggestions

Add natural user-facing trigger terms like 'Lenny', 'Lenny's newsletter', 'Lenny's podcast' since users referencing this source would likely mention Lenny by name.

Consider narrowing or qualifying the broad topic terms (product strategy, growth, pricing, hiring, leadership) to reduce overlap with general-purpose skills — e.g., 'using insights from Lenny Rachitsky's newsletter and podcast archive'.
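Taken together, the two suggestions above could produce a revised description along these lines (a sketch only: the frontmatter layout follows common SKILL.md conventions, and the wording is illustrative rather than the maintainer's actual file):

```markdown
---
name: what-would-lenny-do
description: >
  Answers product strategy, growth, pricing, hiring, and leadership
  questions using insights from Lenny Rachitsky's newsletter and podcast
  archive. Trigger when the user mentions Lenny, Lenny's newsletter, or
  Lenny's podcast. ONLY use this skill if the `lennysdata` MCP server is
  connected and its tools (search_content, read_content, etc.) are
  available; otherwise answer from your own knowledge.
---
```

This keeps the MCP gating intact while adding the name-based trigger terms and source qualification the suggestions call for.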

Dimension / Reasoning / Score

Specificity

Lists multiple specific domains: product strategy, growth, pricing, hiring, and leadership questions. Also specifies the data source (Lenny Rachitsky's archive) and concrete tool names (search_content, read_content).

3 / 3

Completeness

Clearly answers 'what' (answers product strategy, growth, pricing, hiring, and leadership questions using Lenny's archive) and 'when' (ONLY when the lennysdata MCP server is connected and its tools are available). The conditional trigger guidance is explicit and well-defined.

3 / 3

Trigger Term Quality

Includes good domain keywords like 'product strategy', 'growth', 'pricing', 'hiring', and 'leadership', but these are fairly broad terms that users might use in many contexts. Missing more specific trigger terms like 'Lenny', 'newsletter', 'podcast', or product management jargon users would naturally say.

2 / 3

Distinctiveness / Conflict Risk

The MCP server dependency creates a strong technical gate, but the topic areas (product strategy, growth, pricing, hiring, leadership) are very broad and could easily overlap with general knowledge skills or other product management skills. The Lenny Rachitsky specificity helps but the broad topic terms increase conflict risk.

2 / 3

Total: 10 / 12

Passed

Implementation: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-crafted skill with excellent actionability and workflow clarity — the five-phase process is clearly sequenced, tool usage is specific, and the examples effectively demonstrate the expected behavior. Its main weakness is verbosity: Phase 1 explains question-understanding techniques Claude already possesses, the search tips partially duplicate phase instructions, and four full examples with identical structure add tokens without proportional insight. The skill would benefit from trimming redundant guidance and potentially splitting examples into a separate reference file.

Suggestions

Trim Phase 1 significantly — Claude already knows how to extract core questions and infer user context. A single sentence like 'Extract the core question, domain, and key terms before searching' suffices.

Consolidate the 'Search Strategy Tips' into Phase 2 directly, removing duplicated advice about using concrete terms and broadening queries.

Reduce examples from 4 to 2 (they follow an identical pattern), or move them to a separate EXAMPLES.md file to keep the main skill leaner.
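If all three suggestions were applied, the trimmed skill body might look roughly like this (phase and section names are inferred from this review, not quoted from the actual SKILL.md):

```markdown
## Phase 1: Understand the question
Extract the core question, domain, and key terms before searching.

## Phase 2: Search
Run search_content with concrete terms (e.g. 'pricing AI product outcomes');
broaden the query if results are thin.

<!-- Phases 3-5: read, synthesize, answer with citations -->

## Examples
See EXAMPLES.md for two worked search-to-answer walkthroughs.
```

Moving the examples out keeps the main file lean while preserving them for agents that need worked demonstrations.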

Dimension / Reasoning / Score

Conciseness

The skill is well-written but verbose for what it conveys. Phase 1's bullet points about understanding the question explain things Claude already knows how to do (extracting core questions, identifying domains, inferring user roles). The search strategy tips and troubleshooting sections repeat guidance already covered in the phases. The four detailed examples, while useful, could be more compact — each follows the same pattern and the repetition adds tokens without proportional value.

2 / 3

Actionability

The skill provides highly concrete, executable guidance: specific MCP tool names (search_content, read_content, read_excerpt, list_content), exact search query examples ('pricing AI product outcomes', 'stalled growth logo retention'), clear output format with source citation templates, and four worked examples showing the full search-to-answer pipeline. Every phase has specific, actionable instructions.

3 / 3

Workflow Clarity

The five-phase workflow is clearly sequenced with explicit decision points: how many searches to run, when to use full read vs excerpt, how many pieces to read (2-4 cap), and a structured output format. The troubleshooting section provides error recovery paths for common failure modes (no results, tangential results, broad questions, conflicting advice). The gotchas section adds validation constraints (don't cite unread content, don't read more than 4 pieces).

3 / 3

Progressive Disclosure

The content is a single monolithic file with no references to supporting documents, which is acceptable given no bundle files exist. However, at ~200+ lines, the examples and troubleshooting sections could reasonably be split into separate files. The content is well-organized with clear headers, but the length pushes against what should be inline in a SKILL.md overview.

2 / 3

Total: 10 / 12

Passed
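Pulling together the decision points noted in the Workflow Clarity row, the skill's flow reduces to an outline like this (thresholds and failure modes are taken from this review, not quoted from the skill file):

```markdown
1. Understand the question (core question, domain, key terms)
2. Search the archive, deciding how many queries to run
3. Read results: choose full read vs excerpt; read 2-4 pieces, never more
4. If stuck, apply the troubleshooting paths (no results, tangential
   results, overly broad questions, conflicting advice)
5. Answer in the structured output format, citing only content actually read
```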

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: amplitude/builder-skills (Reviewed)
