
posthog-analytics

PostHog analytics, event tracking, feature flags, dashboards


Quality: 37% (Does it follow best practices?)

Impact: No eval scenarios have been run

Security (by Snyk): Passed, no known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/posthog-analytics/SKILL.md

Quality

Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is a bare comma-separated list of capabilities tied to PostHog. While it names the specific platform, it lacks concrete action verbs, explicit trigger guidance ('Use when...'), and sufficient keyword coverage to reliably distinguish it from other analytics or feature flag skills.

Suggestions

Add a 'Use when...' clause specifying triggers, e.g., 'Use when the user mentions PostHog, needs to set up event tracking, configure feature flags, or build analytics dashboards in PostHog.'

Replace the noun list with concrete action phrases, e.g., 'Configures PostHog event tracking, creates and manages feature flags, builds analytics dashboards, and instruments custom events.'

Include common related terms and variations like 'A/B testing', 'experiments', 'posthog-js', 'user funnels', 'cohorts' to improve trigger term coverage.
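Putting those three suggestions together, a revised frontmatter description might look like the following. This is an illustrative sketch, not the skill's actual metadata; the `name` key is taken from the page, everything else is assumed wording:

```yaml
---
name: posthog-analytics
description: >
  Configures PostHog event tracking, creates and manages feature flags,
  builds analytics dashboards, and instruments custom events. Use when
  the user mentions PostHog or posthog-js, needs A/B testing or
  experiments, or wants user funnels, cohorts, or product analytics
  dashboards.
---
```

A description in this shape covers the "what" with action verbs, the "when" with an explicit trigger clause, and the related-term variations in one pass.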

Specificity: 2 / 3

Names the domain (PostHog) and lists some capabilities (analytics, event tracking, feature flags, dashboards), but these are more like category labels than concrete actions. It doesn't describe specific actions like 'create dashboards', 'configure feature flags', or 'instrument event tracking'.

Completeness: 1 / 3

It partially answers 'what does this do' with a list of capabilities, but there is no 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and since the 'what' is also weak (just a noun list), this scores a 1.

Trigger Term Quality: 2 / 3

Includes relevant keywords like 'PostHog', 'analytics', 'event tracking', 'feature flags', and 'dashboards' that users might naturally mention. However, it's missing common variations and related terms like 'A/B testing', 'experiments', 'user analytics', 'metrics', or 'posthog-js'.

Distinctiveness / Conflict Risk: 2 / 3

The mention of 'PostHog' specifically is distinctive and helps differentiate from generic analytics skills. However, terms like 'analytics', 'dashboards', and 'feature flags' are broad enough to potentially overlap with other analytics or feature flag tools (e.g., LaunchDarkly, Mixpanel).

Total: 7 / 12 (Passed)

Implementation

42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill excels at actionability, with complete, executable code examples across multiple frameworks and languages. However, it is severely over-long and monolithic: dashboard templates, multi-framework installation guides, and extensive event catalogs are all kept inline, creating a massive token burden. The content would be dramatically improved by splitting it into focused sub-files with a concise SKILL.md overview, and by adding validation checkpoints to workflows.

Suggestions

Split framework-specific installation guides into separate files (e.g., NEXTJS_SETUP.md, PYTHON_SETUP.md) and reference them from a concise overview section.

Move dashboard templates into a separate DASHBOARD_TEMPLATES.md file, keeping only a summary table and links in the main skill.

Remove the philosophy section and generic dashboard checklist items (e.g., 'DAU/WAU/MAU trends', 'Retention cohorts') that represent standard product analytics knowledge Claude already has.

Add validation checkpoints to workflows: e.g., after initialization verify events appear in PostHog's Live Events view, and after dashboard creation confirm the dashboard ID is returned successfully.
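As a sketch of what such checkpoints could look like, the two checks named above can be written as small helpers that inspect API responses. The response shapes here (a dashboard-creation response carrying an `id` field, an events list whose entries carry an `event` field) are assumptions modeled on PostHog's REST API, and the usage names are hypothetical:

```python
def verify_dashboard_created(response: dict) -> int:
    """Validation checkpoint: confirm a dashboard-creation response
    actually contains a dashboard id before creating insights for it."""
    dashboard_id = response.get("id")
    if dashboard_id is None:
        raise RuntimeError(f"Dashboard creation returned no id: {response!r}")
    return dashboard_id


def event_received(events: list[dict], event_name: str) -> bool:
    """Validation checkpoint: check whether a captured event shows up in
    a list of event records (e.g. fetched from PostHog after init)."""
    return any(e.get("event") == event_name for e in events)


# Hypothetical usage inside a workflow:
# dashboard_id = verify_dashboard_created(api_response)
# assert event_received(recent_events, "signup_completed")
```

Failing loudly at each step, rather than assuming the API call succeeded, is exactly the error-recovery guidance the review says the workflows are missing.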

Conciseness: 1 / 3

The skill is extremely verbose at ~500+ lines. It includes extensive dashboard template checklists for SaaS, E-Commerce, Content, and AI apps that are generic product management knowledge Claude already knows. The philosophy section, user properties interface, and many event examples are padded beyond what's needed. Much of this could be condensed or split into separate reference files.

Actionability: 3 / 3

The skill provides fully executable, copy-paste ready code across multiple frameworks (Next.js, React, Python, Node.js). Code examples are complete with imports, configuration, and real API calls. The feature flag examples, testing fixtures, and sanitization utilities are all concrete and immediately usable.

Workflow Clarity: 2 / 3

The dashboard creation workflow has a clear sequence (create dashboard → create insights → save → add to dashboard), and the MCP section lists steps. However, there are no validation checkpoints: no verification that events are actually being received, no step to confirm dashboard creation succeeded, and no error recovery guidance for common issues like misconfigured API keys or failed event delivery.

Progressive Disclosure: 1 / 3

This is a monolithic wall of text with no bundle files to offload content to. The dashboard templates for 4+ project types, installation guides for 4 frameworks, and extensive event catalogs should be split into separate reference files. There are no internal references to supplementary files, and the content would benefit enormously from a concise overview pointing to detailed sub-documents.

Total: 7 / 12 (Passed)

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

skill_md_line_count: Warning

SKILL.md is long (957 lines); consider splitting into references/ and linking

frontmatter_unknown_keys: Warning

Unknown frontmatter key(s) found; consider removing or moving to metadata

Total: 9 / 11 (Passed)

Repository: alinaqi/claude-bootstrap (Reviewed)

