**Skill description:** When the user wants to build data enrichment workflows, score leads against ICP, set up Clay waterfalls, or improve contact data quality. Also use when the user mentions 'enrichment,' 'data enrichment,' 'Clay,' 'waterfall enrichment,' 'ICP scoring,' 'lead scoring,' 'intent data,' 'contact verification,' 'Apollo,' 'ZoomInfo,' or 'data quality.' This skill covers lead enrichment waterfalls, ICP scoring frameworks, and contact verification systems. Do NOT use for technical implementation, code review, or software architecture.
**Overall score:** 82

**Impact:** Pending — no eval scenarios have been run.
**Advisory:** suggest reviewing before use.

Optimize this skill with Tessl:

```
npx tessl skill review --optimize ./packages/skills-catalog/skills/(gtm)/lead-enrichment/SKILL.md
```

## Quality — 77%

Does it follow best practices?
### Discovery — 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly defines its domain (data enrichment and lead scoring workflows), provides extensive natural trigger terms covering tool names and concepts, and explicitly delineates both when to use and when not to use the skill. The description is well-structured, uses third person voice appropriately, and would be easily distinguishable from other skills in a large skill library.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'build data enrichment workflows,' 'score leads against ICP,' 'set up Clay waterfalls,' 'improve contact data quality,' plus mentions lead enrichment waterfalls, ICP scoring frameworks, and contact verification systems. | 3 / 3 |
| Completeness | Clearly answers both 'what' (build data enrichment workflows, score leads against ICP, set up Clay waterfalls, improve contact data quality) and 'when' (explicit 'Use when' clause with trigger terms, plus a 'Do NOT use' exclusion boundary). Both are explicit and well-defined. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would say: 'enrichment,' 'data enrichment,' 'Clay,' 'waterfall enrichment,' 'ICP scoring,' 'lead scoring,' 'intent data,' 'contact verification,' 'Apollo,' 'ZoomInfo,' 'data quality.' These are terms a user in this domain would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche in data enrichment/lead scoring workflows. The explicit exclusion of 'technical implementation, code review, or software architecture' further reduces conflict risk with engineering-focused skills. Domain-specific tool names like Clay, Apollo, and ZoomInfo make it very unlikely to trigger incorrectly. | 3 / 3 |
| **Total** | | **12 / 12 — Passed** |
### Implementation — 55%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill is exceptionally detailed and actionable, providing concrete formulas, thresholds, provider comparisons, and step-by-step workflows that would genuinely help build enrichment systems. However, it severely violates conciseness by including extensive reference material (pricing tables, provider comparisons, compliance matrices, benchmarks) inline rather than in separate reference files, and explains concepts Claude already understands. The content would be dramatically improved by extracting 60-70% of it into reference files and keeping SKILL.md as a lean overview with clear pointers.
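The step-by-step waterfall the summary credits can be pictured with a minimal sketch. Everything below is invented for illustration — the provider stubs (named after Apollo and ZoomInfo only as placeholders), the 0.85 confidence threshold, and the field names are not taken from the reviewed skill:

```python
CONFIDENCE_THRESHOLD = 0.85  # illustrative cutoff; accept the first result at or above it

def enrich_email(lead, providers):
    """Query providers in sequence; stop at the first sufficiently confident hit."""
    for provider in providers:
        result = provider(lead)
        if result and result["confidence"] >= CONFIDENCE_THRESHOLD:
            return {**result, "source": provider.__name__}
    return None  # fell through the whole waterfall: route to manual review

# Stub providers standing in for real integrations.
def apollo(lead):
    return {"email": f"{lead['first']}.{lead['last']}@{lead['domain']}", "confidence": 0.60}

def zoominfo(lead):
    return {"email": f"{lead['first'][0]}{lead['last']}@{lead['domain']}", "confidence": 0.90}

lead = {"first": "ada", "last": "lovelace", "domain": "example.com"}
hit = enrich_email(lead, [apollo, zoominfo])
# apollo's 0.60 falls below the threshold, so the loop continues to zoominfo
```

The same loop extends naturally to the verification checkpoints the review mentions: a below-threshold result can be re-queued for a verifier rather than discarded.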
#### Suggestions

- Extract provider comparison matrices, pricing tables, and benchmark data into separate reference files (e.g., references/providers.md, references/benchmarks.md) and link to them from the main skill.
- Remove explanatory text Claude already knows (e.g., 'A waterfall enrichment system queries multiple data providers in sequence', what catch-all domains are, basic GDPR concepts) and keep only the decision rules and thresholds.
- Move compliance details to a dedicated references/compliance.md file, since this is reference material, not workflow instruction.
- Consolidate the SKILL.md to ~100-150 lines covering: discovery questions, the ICP scoring formula with weights, the waterfall flow diagram, Clay table structure, verification thresholds, and pointers to detailed reference files.
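As a rough sketch of what the restructured, pointer-driven SKILL.md could look like (the section headings are invented; only the reference file names come from the suggestions above):

```markdown
## Provider selection
Decision rules and thresholds only. Full comparison matrices and pricing:
see [references/providers.md](references/providers.md).

## Benchmarks
Expected match and bounce rates by segment:
see [references/benchmarks.md](references/benchmarks.md).

## Compliance
GDPR/CCPA handling rules:
see [references/compliance.md](references/compliance.md).
```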
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | This is extremely verbose at 400+ lines. It explains concepts Claude already knows (what a waterfall is, what catch-all domains are, what GDPR requires), includes extensive provider comparison tables with pricing that will become stale, and provides benchmark data that could be in a reference file. The ratio of novel, actionable instruction to general knowledge is low. | 1 / 3 |
| Actionability | The skill provides highly concrete, specific guidance: exact scoring formulas with weights, detailed provider selection tables by use case, specific Clay table column structures, credit math calculations, confidence thresholds with exact actions, and a complete ROI calculation framework. A user could follow these instructions to build a working enrichment system. | 3 / 3 |
| Workflow Clarity | The waterfall flow is clearly sequenced with explicit validation checkpoints (pre-qualification filter, verification step with confidence thresholds, catch-all segmentation). The deliverability protection checklist serves as a final validation gate. The Clay workflow has a clear column-by-column build order. Credit governance rules include monitoring and alerting thresholds. | 3 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with nearly all content inline. The provider comparison matrices, pricing tables, compliance details, and benchmarks should be in reference files. There is only one reference at the very end ('references/quick-reference.md'), but the SKILL.md itself contains far more detail than a quick reference would. The content that is inline dwarfs what is delegated. | 1 / 3 |
| **Total** | | **8 / 12 — Passed** |
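For a sense of what the 'exact scoring formulas with weights' credited under Actionability mean in practice, here is a minimal, hedged sketch of a weighted ICP score. The attribute names, the weights, and the 70-point qualification threshold are all invented for this example, not taken from the skill:

```python
# Hypothetical weights over 0-100 sub-scores; they must sum to 1.0.
WEIGHTS = {
    "industry_fit": 0.30,
    "company_size": 0.25,
    "tech_stack": 0.25,
    "intent_signal": 0.20,
}

def icp_score(signals):
    """Weighted sum of per-attribute sub-scores -> overall 0-100 ICP score."""
    return sum(WEIGHTS[attr] * signals[attr] for attr in WEIGHTS)

signals = {"industry_fit": 100, "company_size": 80, "tech_stack": 60, "intent_signal": 40}
score = icp_score(signals)  # 30 + 20 + 15 + 8 = 73, up to float rounding
qualified = score >= 70     # illustrative qualification threshold
```

Keeping the formula this explicit in SKILL.md, with the supporting rationale in reference files, is exactly the split the Conciseness and Progressive Disclosure rows argue for.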
### Validation — 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

**Validation:** 11 / 11 — Passed
Validation for skill structure
No warnings or errors.
Commit: 906a57d