When the user wants to define GTM metrics, build a metrics dashboard, measure pipeline efficiency, or track AI product performance. Also use when the user mentions 'GTM metrics,' 'revenue latency,' 'pipeline metrics,' 'TTFV,' 'time-to-first-value,' 'data health,' 'attribution,' 'conversion rate,' 'CAC,' 'LTV,' 'NRR,' 'GTM dashboard,' 'magic number,' 'pipeline velocity,' or 'funnel metrics.' This skill covers GTM measurement from metric selection through dashboard design, including AI-specific cost metrics, attribution models, and weekly review cadences. Do NOT use for technical implementation, code review, or software architecture.
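The metrics named in this description reduce to standard formulas. As a minimal illustration of the kind of computation the skill covers (figures and function names are hypothetical, formulas are the common textbook versions):

```python
# Hypothetical GTM metric calculations using standard formulas.
# All input figures are illustrative, not taken from any real pipeline.

def pipeline_velocity(opportunities, win_rate, avg_deal_size, cycle_days):
    """Expected revenue per day flowing through the pipeline."""
    return opportunities * win_rate * avg_deal_size / cycle_days

def ltv_to_cac(avg_revenue_per_account, gross_margin, churn_rate, cac):
    """LTV:CAC ratio using the simple margin-adjusted LTV formula."""
    ltv = avg_revenue_per_account * gross_margin / churn_rate
    return ltv / cac

velocity = pipeline_velocity(opportunities=120, win_rate=0.22,
                             avg_deal_size=18_000, cycle_days=45)
ratio = ltv_to_cac(avg_revenue_per_account=12_000, gross_margin=0.8,
                   churn_rate=0.15, cac=20_000)
print(round(velocity), round(ratio, 2))  # → 10560 3.2
```

A ratio above 3 is conventionally read as healthy unit economics, which is why the skill pairs CAC and LTV rather than tracking either alone.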
Overall score: 71
Does it follow best practices? 63%
Impact: Pending — no eval scenarios have been run.
Passed — no known issues.
Optimize this skill with Tessl:

```shell
npx tessl skill review --optimize ./packages/skills-catalog/skills/(gtm)/gtm-metrics/SKILL.md
```

Quality
Discovery — 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that excels across all dimensions. It provides specific capabilities, extensive natural trigger terms including both acronyms and full phrases, clearly answers both what and when, and includes explicit exclusion boundaries to minimize conflict with other skills. The description is well-structured and comprehensive without being unnecessarily verbose.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: define GTM metrics, build a metrics dashboard, measure pipeline efficiency, track AI product performance. Also mentions AI-specific cost metrics, attribution models, and weekly review cadences. | 3 / 3 |
| Completeness | Clearly answers both 'what' (GTM measurement from metric selection through dashboard design, including AI-specific cost metrics, attribution models, and weekly review cadences) and 'when' (explicit 'Use when' triggers and even a 'Do NOT use' exclusion clause). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would say, including acronyms (CAC, LTV, NRR, TTFV), full phrases (time-to-first-value, revenue latency, pipeline velocity), and common variations (GTM metrics, GTM dashboard, funnel metrics, conversion rate). | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche in GTM metrics and dashboards. The explicit exclusion of technical implementation, code review, and software architecture further reduces conflict risk with other skills. | 3 / 3 |
| Total | | 12 / 12 — Passed |
Implementation — 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is a comprehensive GTM metrics encyclopedia but fails as a SKILL.md due to extreme verbosity (~400+ lines of inline reference material), lack of progressive disclosure (everything is in one monolithic file), and missing workflow sequencing for the core task of building a metrics framework. Its strength is the breadth and specificity of benchmarks and metric definitions, but this content belongs in referenced sub-files, not the main skill body.
Suggestions

- Extract benchmark tables (NRR by stage, growth rates, attribution models) into separate reference files like BENCHMARKS.md and ATTRIBUTION.md, keeping only the most critical 2-3 benchmarks inline with links to details.
- Add a clear sequential workflow at the top: 1) Assess stage and motion → 2) Select tier-appropriate metrics → 3) Design dashboard → 4) Set up review cadence → 5) Validate with stakeholder, with explicit checkpoints.
- Cut the 'Questions to Ask' section (15 questions is excessive) to 3-5 essential discovery questions, and remove explanatory text that Claude can infer (e.g., 'Attribution answers what caused the deal').
- Restructure as a concise overview (under 100 lines) that references detailed sub-files for each major section (AI metrics, attribution, dashboard architecture, review cadence).
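Taken together, these suggestions imply a SKILL.md skeleton roughly like the following. This is a sketch of the proposed restructuring, not the current file; the sub-file names (BENCHMARKS.md, ATTRIBUTION.md, DASHBOARD-TEMPLATES.md) are the review's proposals and do not yet exist:

```markdown
# GTM Metrics

## Workflow
1. Assess stage and motion
2. Select tier-appropriate metrics
3. Design dashboard
4. Set up review cadence
5. Validate with stakeholder

## Reference files
- [BENCHMARKS.md](BENCHMARKS.md) — NRR by stage, growth-rate benchmarks
- [ATTRIBUTION.md](ATTRIBUTION.md) — attribution model comparison
- [DASHBOARD-TEMPLATES.md](DASHBOARD-TEMPLATES.md) — dashboard architecture
```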
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | This skill is extremely verbose at ~400+ lines. It includes extensive benchmark tables, exhaustive metric definitions, and lengthy reference sections that Claude already knows or could derive. The data health scoring formula, attribution model comparisons, and growth rate benchmark tables are reference material that bloats the context window significantly. Much of this could be condensed or moved to separate reference files. | 1 / 3 |
| Actionability | The skill provides concrete metric definitions, formulas, and benchmark targets which are actionable for advisory work. However, it lacks executable code/commands for actually building dashboards, setting up CRM queries, or implementing scoring models. The 'Examples' section describes what the agent should do rather than showing concrete output formats. It's more of a reference encyclopedia than step-by-step executable guidance. | 2 / 3 |
| Workflow Clarity | The 'Before Starting' section provides a clear discovery checklist, and the weekly review cadence has a well-structured time-boxed agenda. However, there's no clear workflow for the primary task of building a metrics framework—no sequenced steps like 'first assess stage, then select metrics, then design dashboard, then validate with stakeholder.' The skill reads as a reference document rather than a guided process with validation checkpoints. | 2 / 3 |
| Progressive Disclosure | This is a monolithic wall of content with no references to external files for detailed material. The attribution model comparison table, all benchmark tables, the full PQL scoring model, the complete dashboard architecture, and the data health scoring system are all inline. The 'Related Skills' section at the end references other skills but the core content itself should be split into separate reference files (e.g., BENCHMARKS.md, ATTRIBUTION.md, DASHBOARD-TEMPLATES.md) with the SKILL.md serving as an overview. | 1 / 3 |
| Total | | 6 / 12 — Passed |
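To make the Conciseness point concrete: inline scoring systems like the data health score mentioned above reduce to a few lines of code, which is one reason they belong in a reference file rather than the skill body. A hedged sketch — the field names and weights here are invented for illustration, not taken from the reviewed skill:

```python
# Hypothetical data-health score: weighted completeness across CRM fields.
# Field names and weights are illustrative, not from the reviewed skill.

WEIGHTS = {"close_date": 0.3, "amount": 0.3, "source": 0.2, "next_step": 0.2}

def data_health(records):
    """Percentage of weighted required fields populated across records."""
    if not records:
        return 0.0
    filled = sum(
        weight
        for rec in records
        for field, weight in WEIGHTS.items()
        if rec.get(field)  # empty strings, zeros, and missing keys count as unfilled
    )
    return 100 * filled / (len(records) * sum(WEIGHTS.values()))

deals = [
    {"close_date": "2025-06-30", "amount": 50_000, "source": "inbound", "next_step": ""},
    {"close_date": "2025-07-15", "amount": 0, "source": "outbound", "next_step": "demo"},
]
print(data_health(deals))
```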
Validation — 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
11 / 11 checks passed — validation for skill structure reported no warnings or errors.
81e7e0d