# PostHog analytics, event tracking, feature flags, dashboards
- Quality: 37% (Does it follow best practices?)
- Impact: Pending (no eval scenarios have been run)
- Passed (no known issues)
Optimize this skill with Tessl:

```
npx tessl skill review --optimize ./skills/posthog-analytics/SKILL.md
```

## Quality
### Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is a bare comma-separated list of capability areas without any verb-based actions or explicit trigger guidance. While 'PostHog' provides some distinctiveness, the lack of concrete actions and a 'Use when...' clause significantly weakens its utility for skill selection among many options.
**Suggestions**

- Add a 'Use when...' clause specifying triggers, e.g., 'Use when the user mentions PostHog, product analytics, event tracking setup, or feature flag configuration.'
- Convert the noun list into specific concrete actions, e.g., 'Configures PostHog event tracking, creates and manages feature flags, builds analytics dashboards, and sets up A/B experiments.'
- Include additional natural trigger terms users might say, such as 'A/B testing', 'experiments', 'product analytics', 'user behavior tracking', or 'PostHog API'.
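Putting these suggestions together, the skill's frontmatter description might read like the following sketch. The field names and layout are assumed from a typical SKILL.md frontmatter block, not taken from the actual file:

```yaml
---
# Hypothetical SKILL.md frontmatter; field names are assumed, not confirmed.
name: posthog-analytics
description: >
  Configures PostHog event tracking, creates and manages feature flags,
  builds analytics dashboards, and sets up A/B experiments. Use when the
  user mentions PostHog, product analytics, event tracking setup, A/B
  testing, experiments, user behavior tracking, feature flag
  configuration, or the PostHog API.
---
```

A single-paragraph description like this keeps the distinctive product name while adding both concrete verbs and an explicit trigger clause.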
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (PostHog) and lists some capabilities (analytics, event tracking, feature flags, dashboards), but these are more like category labels than concrete actions. It doesn't describe specific actions like 'create dashboards', 'configure feature flags', or 'instrument event tracking'. | 2 / 3 |
| Completeness | Provides a partial 'what' (lists capability areas) but completely lacks a 'when' clause. There is no 'Use when...' or equivalent explicit trigger guidance, and per the rubric guidelines, a missing 'Use when...' clause should cap completeness at 2, but since the 'what' is also weak (just a comma-separated list of nouns), this scores a 1. | 1 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'PostHog', 'analytics', 'event tracking', 'feature flags', and 'dashboards' that users might naturally mention. However, it's missing common variations and related terms like 'A/B testing', 'experiments', 'user analytics', 'product analytics', or 'metrics'. | 2 / 3 |
| Distinctiveness / Conflict Risk | 'PostHog' is a distinctive product name that helps differentiate this skill, but terms like 'analytics', 'dashboards', and 'feature flags' are generic enough to overlap with other analytics or feature flag tools (e.g., LaunchDarkly, Mixpanel, Amplitude). | 2 / 3 |
| **Total** | | 7 / 12 (Passed) |
### Implementation: 42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill excels at actionability, with complete, executable code examples across multiple frameworks and thorough event tracking patterns. However, it is severely bloated: much of the content (dashboard checklists for 5 project types, 4 framework installation guides, generic analytics philosophy) should be split into referenced files. The monolithic structure undermines usability and wastes token budget on content that is either generic knowledge or should be progressively disclosed.
**Suggestions**

- Split framework-specific installation guides into separate files (e.g., nextjs.md, react.md, python.md, node.md) and reference them from the main skill with a brief table.
- Move dashboard templates into a separate DASHBOARDS.md file, keeping only a summary table with links in the main skill.
- Remove the Philosophy section entirely: Claude already understands analytics principles and this adds no actionable guidance.
- Integrate a validation step into the setup workflow: after installation, include a concrete verification step (e.g., 'Check PostHog Live Events tab to confirm events are arriving') before proceeding to event tracking patterns.
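The split suggested above might produce a layout like the following sketch. The file names come from the suggestions themselves and are illustrative, not taken from the actual repository:

```
skills/posthog-analytics/
├── SKILL.md          # slim core: setup workflow, verification step, framework table
├── DASHBOARDS.md     # dashboard templates per project type, summarized in SKILL.md
└── references/
    ├── nextjs.md     # framework-specific installation guides,
    ├── react.md      # each linked from a brief table in SKILL.md
    ├── python.md
    └── node.md
```

With this structure, the main SKILL.md stays within the line-count guidance while the framework guides and dashboard templates are loaded only when the agent actually needs them.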
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | This is extremely verbose at ~600+ lines. It includes extensive dashboard template checklists for SaaS, E-Commerce, Content, and AI apps that are generic product management knowledge Claude already knows. The philosophy section, multiple framework installation guides, and exhaustive event naming examples add significant bloat. Much of this could be split into separate reference files. | 1 / 3 |
| Actionability | The skill provides fully executable, copy-paste ready code across multiple frameworks (Next.js, React, Python, Node.js). Code examples are complete with imports, configuration, and real usage patterns. The MCP dashboard creation workflow includes concrete API calls. | 3 / 3 |
| Workflow Clarity | The MCP dashboard creation workflow has a clear sequence (check existing → create dashboard → create insights → add to dashboard), but lacks validation checkpoints. There's no verification step to confirm events are actually being received, no troubleshooting for common setup failures, and the testing section is separate from the setup workflow rather than integrated as a validation step. | 2 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with everything inline. The header mentions 'Load with: base.md + [framework].md', suggesting a multi-file structure exists, but the skill itself dumps installation for 4 frameworks, dashboard templates for 5 project types, privacy compliance, testing, and debugging all in one file. The dashboard templates and framework-specific installation guides should clearly be in separate referenced files. | 1 / 3 |
| **Total** | | 7 / 12 (Passed) |
### Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Result: 9 / 11 checks passed.

**Validation for skill structure**
| Criteria | Description | Result |
|---|---|---|
| `skill_md_line_count` | SKILL.md is long (958 lines); consider splitting into references/ and linking | Warning |
| `frontmatter_unknown_keys` | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | 9 / 11 passed | |
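To clear the `frontmatter_unknown_keys` warning, any non-standard keys could be nested under a `metadata` block, along the lines of this sketch. The key names shown under `metadata` are hypothetical placeholders, not the skill's actual keys:

```yaml
---
name: posthog-analytics
description: PostHog analytics, event tracking, feature flags, dashboards
metadata:
  # Hypothetical examples of unknown keys moved under metadata
  load_with: base.md + [framework].md
  version: 1.0.0
---
```

Keeping only spec-defined keys at the top level, with everything else under `metadata`, is the usual way to satisfy this kind of frontmatter check.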