Automate PostHog tasks via Rube MCP (Composio): events, feature flags, projects, user profiles, annotations. Always search tools first for current schemas.
- Overall score: 65
- Quality: 47% (Does it follow best practices?)
- Impact: 100% (1.44x average score across 3 eval scenarios)
- Advisory: Suggest reviewing before use
Optimize this skill with Tessl:

```
npx tessl skill review --optimize ./plugins/all-skills/skills/posthog-automation/SKILL.md
```

## Quality
### Discovery: 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is concise and specific about its capabilities, clearly naming the platform (PostHog), the integration method (Rube MCP/Composio), and the concrete task areas. Its main weakness is the lack of an explicit 'Use when...' clause and missing natural trigger terms users might use when needing PostHog-related help (e.g., 'analytics', 'tracking', 'A/B testing').
**Suggestions**

- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about PostHog analytics, tracking events, managing feature flags, or configuring experiments.'
- Include additional natural trigger terms users might say, such as 'analytics', 'tracking', 'A/B testing', 'experimentation', or 'product analytics'.
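Applied together, these suggestions might yield frontmatter along the following lines (a sketch only; the skill name and exact wording are illustrative, not taken from the skill itself):

```yaml
---
name: posthog-automation
description: >
  Automate PostHog tasks via Rube MCP (Composio): capture events, manage
  feature flags, projects, user profiles, and annotations. Use when the
  user asks about PostHog product analytics, event tracking, A/B testing,
  experimentation, or feature flag management. Always search tools first
  for current schemas.
---
```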
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: events, feature flags, projects, user profiles, annotations. Also includes a concrete procedural instruction ('Always search tools first for current schemas'). | 3 / 3 |
| Completeness | Clearly answers 'what does this do' (automate PostHog tasks via Rube MCP), but lacks an explicit 'Use when...' clause. The when is only implied by the domain mention. Per rubric guidelines, missing 'Use when...' caps completeness at 2. | 2 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'PostHog', 'feature flags', 'events', 'annotations', 'user profiles', and 'Rube MCP (Composio)', but misses common user variations like 'analytics', 'tracking', 'A/B testing', or 'experimentation' that users might naturally say. | 2 / 3 |
| Distinctiveness / Conflict Risk | Very distinct niche: PostHog automation via a specific MCP tool (Rube/Composio). The combination of the specific platform (PostHog) and specific integration method (Rube MCP/Composio) makes it highly unlikely to conflict with other skills. | 3 / 3 |
| Total | | 10 / 12 (Passed) |
### Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill content is severely bloated — the evaluation rubric/prompt appears to be accidentally embedded multiple times within the skill body, making it extremely long and confusing. Even setting that aside, the actual PostHog skill content is overly verbose with repeated pitfalls sections, lacks concrete executable examples (no sample tool calls with actual parameters and expected responses), and dumps everything into a single monolithic file without progressive disclosure.
**Suggestions**

- Remove the accidentally embedded rubric/prompt content that appears multiple times within the skill body — this alone would dramatically improve conciseness.
- Add concrete tool call examples with sample parameters and expected response shapes, e.g., show an actual POSTHOG_CAPTURE_EVENT call with a specific event name, distinct_id, and properties, plus what the response looks like.
- Consolidate the duplicated pitfalls — remove the per-workflow 'Pitfalls' subsections and keep only the 'Known Pitfalls' summary section, or vice versa.
- Split detailed content (Common Patterns, Known Pitfalls, Quick Reference) into a separate REFERENCE.md file and link to it from the main skill.
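As an illustration of the second suggestion, a concrete tool call example in the skill might look like the following. This is a sketch only: the event name, distinct_id, and property values are invented, and the exact argument envelope depends on how the Rube MCP wrapper exposes POSTHOG_CAPTURE_EVENT.

```json
{
  "tool": "POSTHOG_CAPTURE_EVENT",
  "arguments": {
    "event": "checkout_completed",
    "distinct_id": "user_8f3a2c",
    "properties": {
      "plan": "pro",
      "revenue": 49.0
    }
  }
}
```

Pairing each such call with its expected response shape would let an agent verify success (and recover from failure) without guessing.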
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose and repetitive. The content is massively bloated by the rubric/prompt being accidentally embedded multiple times within the skill body itself. Even ignoring that corruption, the actual skill content over-explains concepts Claude already knows (pagination, ISO 8601 format, what kebab-case is), repeats pitfalls across sections and in a dedicated 'Known Pitfalls' section, and includes unnecessary 'When to use' descriptions for straightforward operations. | 1 / 3 |
| Actionability | The skill provides specific tool names, parameter lists, and a JSON example for feature flag targeting filters. However, there are no executable code examples or complete tool call examples showing exact input/output. The ID resolution patterns use pseudocode-style numbered lists rather than concrete call examples with sample parameters and responses. | 2 / 3 |
| Workflow Clarity | Workflows are clearly sequenced with numbered tool steps and labeled as Required/Optional. The setup section includes a validation checkpoint (confirm ACTIVE status before proceeding). However, there are no feedback loops for error recovery in the core workflows — e.g., no guidance on what to do if event capture fails, or how to verify events were actually ingested. | 2 / 3 |
| Progressive Disclosure | The content is a monolithic wall of text with no external file references and no bundle files. All detailed parameter lists, pitfalls, common patterns, and quick reference tables are inlined in a single very long document. The content would benefit greatly from splitting detailed workflow sections and the pitfalls/patterns into separate reference files. | 1 / 3 |
| Total | | 6 / 12 (Passed) |
### Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 10 / 11 Passed | |