
gtm-metrics

When the user wants to define GTM metrics, build a metrics dashboard, measure pipeline efficiency, or track AI product performance. Also use when the user mentions 'GTM metrics,' 'revenue latency,' 'pipeline metrics,' 'TTFV,' 'time-to-first-value,' 'data health,' 'attribution,' 'conversion rate,' 'CAC,' 'LTV,' 'NRR,' 'GTM dashboard,' 'magic number,' 'pipeline velocity,' or 'funnel metrics.' This skill covers GTM measurement from metric selection through dashboard design, including AI-specific cost metrics, attribution models, and weekly review cadences. Do NOT use for technical implementation, code review, or software architecture.

71

Quality: 63% (Does it follow best practices?)
Impact: Pending (No eval scenarios have been run)
Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./packages/skills-catalog/skills/(gtm)/gtm-metrics/SKILL.md

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that excels across all dimensions. It provides specific actions, comprehensive trigger terms covering both acronyms and full phrases, explicit 'when to use' and 'when not to use' guidance, and a clearly defined niche. The inclusion of negative boundaries (Do NOT use for...) is a particularly effective touch for disambiguation.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions: define GTM metrics, build a metrics dashboard, measure pipeline efficiency, track AI product performance. Also mentions AI-specific cost metrics, attribution models, and weekly review cadences. | 3 / 3 |
| Completeness | Clearly answers both 'what' (GTM measurement from metric selection through dashboard design, including AI-specific cost metrics, attribution models, and weekly review cadences) and 'when' (explicit 'Use when' triggers and a 'Do NOT use' exclusion clause). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would say, including acronyms (CAC, LTV, NRR, TTFV), full phrases (time-to-first-value, revenue latency, pipeline velocity), and common variations (GTM metrics, GTM dashboard, funnel metrics, conversion rate). | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche in GTM metrics and dashboards. The explicit exclusion of technical implementation, code review, and software architecture further reduces conflict risk with other skills. | 3 / 3 |
| Total | | 12 / 12 |

Passed

Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is comprehensive in coverage but severely over-indexed on reference data at the expense of conciseness and structure. It reads like a GTM metrics encyclopedia rather than an actionable skill file—nearly every section could be condensed by 50-70% or moved to reference files. The core workflow (how to actually help a user build a metrics framework) is buried under tables of benchmarks and definitions.

Suggestions

Extract benchmark tables, attribution model comparisons, and NRR/growth rate reference data into separate reference files (e.g., benchmarks.md, attribution-models.md) and link to them from a concise overview in SKILL.md.

Add a clear top-level workflow: 1) Gather context (discovery questions), 2) Select metrics by motion/stage, 3) Design dashboard tier, 4) Set up review cadence, 5) Validate with user—with explicit checkpoints.

Cut the introductory paragraph explaining what you are and what you know—Claude doesn't need a persona description. Remove explanations of well-known concepts (what attribution is, what NRR means) and keep only the opinionated guidance.

Consolidate the Quick Reference table and the detailed metric tables—they duplicate information. Keep Quick Reference as the inline summary and move detailed tables to a reference file, as sketched below.
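Taken together, these suggestions could translate into a SKILL.md skeleton like the one below. This is a minimal sketch under the assumptions stated in the suggestions: the reference file names (benchmarks.md, attribution-models.md), the headings, and the exact wording of the workflow steps are illustrative, not content taken from the skill itself.

```markdown
## Workflow

1. Gather context (run the discovery questions)
2. Select metrics by motion and stage
3. Design the dashboard tier
4. Set up the weekly review cadence
5. Validate the framework with the user

## Quick Reference

<!-- keep the one-screen metric summary table inline here -->

For detailed benchmarks, see [benchmarks.md](benchmarks.md).
For the attribution model comparison, see [attribution-models.md](attribution-models.md).
```

Structured this way, SKILL.md keeps the sequenced workflow and the inline Quick Reference, while the heavy benchmark and attribution tables live behind links that an agent loads only when it needs them.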

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | This is extremely verbose at ~400+ lines. It includes extensive benchmark tables, lengthy explanations, and reference data that Claude already knows or could derive. The data health scoring formula, attribution model comparisons, and growth rate benchmarks are reference material that bloats the context window significantly. Much of this could be condensed or moved to separate reference files. | 1 / 3 |
| Actionability | The skill provides concrete formulas, specific benchmark numbers, and clear metric definitions with targets, which is useful. However, it's a strategy/advisory skill rather than a code skill, and the guidance is more encyclopedic than actionable—it tells Claude what metrics exist rather than giving step-by-step instructions for building specific deliverables. The examples section helps but is thin relative to the volume of content. | 2 / 3 |
| Workflow Clarity | The 'Before Starting' section provides a good discovery checklist, and the weekly review cadence has a clear time-boxed structure. However, there's no clear end-to-end workflow for the primary use case (e.g., 'user asks for a metrics dashboard → gather context → select metrics → design dashboard → validate with user'). The content reads as a reference encyclopedia rather than a sequenced process with validation checkpoints. | 2 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with all content inline. The Related Skills table at the end references other skills but the body itself contains hundreds of lines of tables and reference data that should be split into separate files (e.g., benchmarks.md, attribution-models.md, dashboard-templates.md). There are no links to supplementary files for the detailed reference material. | 1 / 3 |
| Total | | 6 / 12 |

Passed

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: tech-leads-club/agent-skills (Reviewed)

