
lead-scoring

When a founder needs to qualify inbound leads, define their ICP, build a lead scoring model, set MQL criteria, or route prospects through pipeline stages. Activate when the user mentions lead scoring, ICP, MQL, SQL, lead qualification, inbound leads, or pipeline design.

Overall score: 85

Quality: 82% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Passed (No known issues)


Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly articulates specific capabilities (qualifying leads, defining ICP, building scoring models, setting MQL criteria, routing prospects) and provides explicit activation triggers. The description covers natural user language well and occupies a distinct niche. Minor note: it opens with conditional framing ('When a founder needs') rather than a plain third-person statement of what the skill does, but the actions themselves are described in infinitive form rather than addressing the user directly, so the impact is minimal.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific, concrete actions: qualify inbound leads, define ICP, build a lead scoring model, set MQL criteria, route prospects through pipeline stages. | 3 / 3 |
| Completeness | Clearly answers both 'what' (qualify leads, define ICP, build scoring models, set MQL criteria, route prospects) and 'when', with an explicit 'Activate when...' clause listing specific trigger terms. | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: lead scoring, ICP, MQL, SQL, lead qualification, inbound leads, pipeline design. These cover common variations and terms founders naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Targets a clear niche of lead qualification and pipeline design for founders. Specific terms like ICP, MQL, SQL, and lead scoring model are distinct enough to avoid conflicts with general sales or marketing skills. | 3 / 3 |
| Total | | 12 / 12 |

Passed

Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, actionable skill that provides concrete frameworks for lead scoring with specific thresholds, verdict categories, and worked examples. Its main weaknesses are moderate verbosity (some sections over-explain rationale Claude doesn't need) and the absence of explicit validation checkpoints in the workflow for what is essentially a model-building process. The content would benefit from tightening and splitting detailed reference material into separate files.

Suggestions

- Add an explicit validation step in the workflow (e.g., 'Score 10 known closed-won and closed-lost deals to verify the model ranks them correctly before applying it to new leads').
- Move the detailed scoring model tables, MQL threshold definitions, and examples into a referenced LEAD-SCORING-REFERENCE.md file to reduce the main skill's token footprint.
- Trim explanatory rationale sentences that Claude doesn't need (e.g., 'A lead missing company size data is not the same as a lead with the wrong company size' and 'Neither alone is sufficient') to improve conciseness.
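The suggested validation step can be sketched as a small back-test: score deals with known outcomes and check that the model separates them. Everything below is illustrative; `score_lead`, the point values, and the lead fields are invented for the example and are not taken from the skill itself.

```python
def score_lead(lead):
    """Toy fit + engagement scoring model (0-100), for illustration only."""
    fit = 0
    if lead["employees"] >= 50:
        fit += 25
    if lead["industry"] in {"saas", "fintech"}:
        fit += 25
    # Cap engagement at 50 so fit and engagement each contribute half.
    engagement = min(lead["demo_requests"] * 30 + lead["pricing_page_views"] * 10, 50)
    return fit + engagement

def validate_model(closed_won, closed_lost, score_fn):
    """Pass only if every closed-won deal outscores every closed-lost deal."""
    min_won = min(score_fn(lead) for lead in closed_won)
    max_lost = max(score_fn(lead) for lead in closed_lost)
    return min_won > max_lost

# Hypothetical historical deals with known outcomes.
won = [
    {"employees": 120, "industry": "saas", "demo_requests": 1, "pricing_page_views": 3},
    {"employees": 80, "industry": "fintech", "demo_requests": 2, "pricing_page_views": 1},
]
lost = [
    {"employees": 5, "industry": "retail", "demo_requests": 0, "pricing_page_views": 1},
]

print(validate_model(won, lost, score_lead))  # True: won deals rank above lost deals
```

If this check fails, the model's weights are recalibrated before it is applied to new leads, which is exactly the in-workflow checkpoint the suggestion asks for.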

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is reasonably well-structured but includes some unnecessary elaboration. Sections like 'Handling Unknown Data' over-explain the rationale ('A lead missing company size data is not the same as a lead with the wrong company size'), the 'When to Use' section largely duplicates the frontmatter description, and the multi-dimensional scoring section restates concepts already covered in the workflow. Could be tightened by ~25%. | 2 / 3 |
| Actionability | The skill provides highly concrete, actionable guidance: specific score thresholds (0-100 with defined ranges), exact verdict categories with actions, a dual-threshold MQL definition with specific point values, a complete scoring model structure, pipeline overlap routing rules, and worked examples with sample lead tables showing how scores map to verdicts and routing decisions. | 3 / 3 |
| Workflow Clarity | The 8-step workflow is clearly sequenced and logically ordered, with pipeline overlap checking before scoring. However, there are no explicit validation checkpoints or feedback loops: no step says 'verify scores against known outcomes' or 'validate the model before deploying'. The 'Maintaining and Iterating' section mentions recalibration, but as a post-hoc practice rather than an in-workflow validation step. For a scoring model that could misroute leads, this is a gap. | 2 / 3 |
| Progressive Disclosure | The content is well-organized with clear headers and sections, and references related skills at the end. However, the document is quite long (~150+ lines of substantive content), with detailed frameworks, scoring tables, and examples all inline. The scoring model details, MQL definitions, and examples could be split into referenced files. No external references are provided for deeper topics. | 2 / 3 |
| Total | | 9 / 12 |

Passed
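The dual-threshold MQL definition credited under Actionability can be sketched as follows. The threshold values and function name are assumptions made up for this example; the skill defines its own point values.

```python
# Hypothetical thresholds: the skill's actual point values differ.
FIT_THRESHOLD = 40
ENGAGEMENT_THRESHOLD = 30

def is_mql(fit_score, engagement_score):
    # Dual threshold: 'Neither alone is sufficient'. A high-fit lead with no
    # engagement (or a highly engaged but poor-fit lead) does not qualify.
    return fit_score >= FIT_THRESHOLD and engagement_score >= ENGAGEMENT_THRESHOLD

print(is_mql(55, 35))  # True: both thresholds met
print(is_mql(55, 10))  # False: engagement too low despite strong fit
```

The AND condition is the design choice being praised: a single combined score would let one dimension compensate for the other, which is exactly what the dual-threshold definition prevents.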

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 checks passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 |

Passed
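As a hypothetical illustration of the frontmatter_unknown_keys warning, custom keys can be moved under a metadata block as the check suggests. The keys shown here (author, version) are invented for the example; only name and description come from the skill itself.

```yaml
---
name: lead-scoring
description: When a founder needs to qualify inbound leads, define their ICP, ...
# Custom keys the validator does not recognize belong under metadata:
metadata:
  author: shawnpang
  version: "1.0"
---
```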

Repository: shawnpang/startup-founder-skills (Reviewed)

