Design, audit, and improve analytics tracking systems that produce reliable, decision-ready data.
Install with Tessl CLI
npx tessl i github:sickn33/antigravity-awesome-skills --skill analytics-tracking
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill

Agent success when using this skill
Discovery
32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear domain (analytics tracking) and lists relevant high-level actions, but lacks the explicit trigger guidance ('Use when...') that is critical for skill selection. The trigger terms are adequate but could be expanded with more natural user language and tool-specific keywords to improve discoverability.
Suggestions
- Add an explicit 'Use when...' clause with trigger scenarios like 'Use when the user asks about event tracking, analytics implementation, data quality audits, or tracking plan reviews'
- Include more natural trigger terms users would say: 'event tracking', 'tracking plan', 'analytics instrumentation', 'data quality', 'Mixpanel', 'Amplitude', 'Google Analytics'
- Make the actions more concrete by specifying outputs: 'create tracking plans', 'audit event schemas', 'identify data gaps', 'validate tracking implementation'
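Folded into the skill's frontmatter, the suggestions above might look like this sketch (assuming the standard `name`/`description` SKILL.md keys; the exact wording is illustrative, not taken from the skill):

```yaml
---
name: analytics-tracking
description: >
  Design, audit, and improve analytics tracking systems that produce
  reliable, decision-ready data. Use when the user asks about event
  tracking, tracking plans, analytics instrumentation, data quality
  audits, or tools such as Mixpanel, Amplitude, or Google Analytics.
---
```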
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (analytics tracking systems) and lists three actions (design, audit, improve), but these actions are somewhat high-level and don't specify concrete techniques or outputs like 'create event schemas' or 'validate data pipelines'. | 2 / 3 |
| Completeness | Describes what the skill does but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, missing explicit trigger guidance caps this at 2, but the 'when' is entirely absent, warranting a 1. | 1 / 3 |
| Trigger Term Quality | Includes relevant terms like 'analytics', 'tracking', and 'data', but misses common variations users might say, such as 'event tracking', 'metrics', 'instrumentation', 'Google Analytics', 'Mixpanel', or 'telemetry'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The focus on 'analytics tracking systems' provides some specificity, but 'data' and 'analytics' are broad terms that could overlap with general data analysis, business intelligence, or dashboard skills. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation
62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a solid strategic framework for analytics measurement with clear phasing and decision gates. However, it leans heavily toward conceptual guidance rather than executable implementation—lacking concrete code examples for dataLayer pushes, GTM configurations, or validation scripts. The content is well-structured but could benefit from splitting detailed sections into reference files and adding actionable code snippets.
Suggestions
- Add executable code examples for dataLayer event pushes and GTM tag configurations (e.g., actual JavaScript snippets for signup_completed events)
- Include a concrete validation script or debugging checklist with specific tool commands (e.g., GA4 DebugView steps, browser console checks)
- Split the detailed scoring rubric and tool-specific guidance (GA4/GTM section) into separate reference files to improve progressive disclosure
- Add a concrete worked example showing the full workflow from business question → event design → implementation → validation
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably efficient but includes some redundancy (e.g., repeated emphasis on 'non-negotiable' principles, verbose category definitions). The scoring rubric tables are useful but could be more compact. | 2 / 3 |
| Actionability | Provides structured frameworks and output templates, but lacks executable code examples. The guidance is conceptual rather than copy-paste ready: no actual GTM code, dataLayer examples, or validation scripts are provided. | 2 / 3 |
| Workflow Clarity | Clear phased workflow (Phase 0 → Phase 1) with explicit gating ('If verdict is Broken, stop and recommend remediation first'). The sequence is logical, with validation checkpoints built into the process. | 3 / 3 |
| Progressive Disclosure | Content is well organized with clear sections and references to related skills, but the document is monolithic (~300 lines). Tool-specific guidance (GA4/GTM) and detailed validation procedures could be split into separate reference files. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 (Passed) |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.