
ab-test-analyzer

Ab Test Analyzer - Auto-activating skill for Data Analytics. Triggers on: ab test analyzer, ab test analyzer Part of the Data Analytics skill category.


Quality: 0% (Does it follow best practices?)

Impact: 98% (0.98x, average score across 3 eval scenarios)

Security (by Snyk): Passed. No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./planned-skills/generated/12-data-analytics/ab-test-analyzer/SKILL.md

Quality

Discovery: 0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an extremely weak description that essentially only restates the skill name and its category. It provides no concrete actions, no meaningful trigger terms, and no guidance on when Claude should select this skill. It reads as auto-generated boilerplate with no substantive content.

Suggestions:

- Add specific concrete actions the skill performs, e.g., 'Analyzes A/B test results by calculating statistical significance, comparing conversion rates between variants, determining sample size requirements, and visualizing experiment outcomes.'

- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user mentions A/B tests, split tests, experiment analysis, conversion rates, statistical significance, variant comparison, or hypothesis testing.'

- Remove the duplicate trigger term 'ab test analyzer' and expand with natural-language variations users would actually say, such as 'A/B test', 'split test', 'experiment results', 'test significance'.
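A description rewritten along these lines might look something like the following. This is illustrative wording only, not the maintainer's actual frontmatter:

```yaml
---
name: ab-test-analyzer
description: >
  Analyzes A/B test results: calculates statistical significance,
  compares conversion rates between variants, determines sample size
  requirements, and visualizes experiment outcomes. Use when the user
  mentions A/B tests, split tests, experiment analysis, conversion
  rates, statistical significance, or variant comparison.
---
```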

Dimension scores:

Specificity (1 / 3): The description provides no concrete actions whatsoever. It only states it is an 'ab test analyzer' and belongs to 'Data Analytics' but never describes what it actually does (e.g., calculate statistical significance, compare conversion rates, visualize results).

Completeness (1 / 3): Neither the 'what does this do' nor the 'when should Claude use it' questions are meaningfully answered. The description only names the skill category and repeats the skill name as triggers, with no explicit guidance on when to select this skill.

Trigger Term Quality (1 / 3): The only trigger terms listed are 'ab test analyzer' repeated twice. It misses natural variations users would say like 'A/B test', 'split test', 'experiment results', 'conversion rate', 'statistical significance', 'variant comparison', etc.

Distinctiveness / Conflict Risk (1 / 3): The phrase 'Data Analytics' is extremely broad and could overlap with many other analytics-related skills. Without specific actions or distinct trigger terms, this skill would be difficult to distinguish from other data analysis skills.

Total: 4 / 12 (Passed)

Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is an empty template with no actual content. It contains zero actionable information about A/B test analysis—no statistical methods (e.g., chi-squared tests, confidence intervals), no SQL examples, no visualization guidance, and no concrete workflows. It is entirely boilerplate that repeats the skill name without teaching Claude anything.
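As an illustration of the kind of content the review finds missing, here is a minimal sketch of a two-variant significance check using scipy.stats. The counts below are invented for the example, not taken from the skill:

```python
from scipy.stats import chi2_contingency, norm

# Hypothetical experiment data: (conversions, non-conversions) per variant
control = (100, 900)   # 10.0% conversion rate
variant = (130, 870)   # 13.0% conversion rate

# Chi-squared test of independence on the 2x2 contingency table
chi2, p_value, dof, expected = chi2_contingency([control, variant])
print(f"p-value: {p_value:.4f}")

# 95% confidence interval for the difference in conversion rates
n1, n2 = sum(control), sum(variant)
p1, p2 = control[0] / n1, variant[0] / n2
diff = p2 - p1
se = (p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2) ** 0.5
z = norm.ppf(0.975)
print(f"lift: {diff:.3f} +/- {z * se:.3f}")
```

With these numbers the p-value comes out below 0.05, so the observed lift would be statistically significant at the conventional threshold.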

Suggestions:

- Add concrete, executable code examples for A/B test analysis (e.g., Python with scipy.stats for significance testing, SQL queries for extracting experiment data, sample size calculations).

- Define a clear multi-step workflow: data extraction → metric calculation → statistical significance testing → result interpretation, with validation checkpoints at each stage.

- Remove all boilerplate sections (Purpose, When to Use, Example Triggers, Capabilities) that describe the skill meta-information rather than providing actual instructions.

- Include specific examples with sample input data and expected output (e.g., conversion rates, p-values, confidence intervals) so Claude knows exactly what to produce.
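Since the suggestions above mention sample size calculations, a skill could sketch the standard two-proportion power calculation along these lines. The baseline rate, target rate, alpha, and power below are illustrative values, not from the skill:

```python
import math
from scipy.stats import norm

def required_sample_size(p_base, p_target, alpha=0.05, power=0.8):
    """Approximate per-variant sample size for a two-proportion z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # power requirement
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    effect = (p_target - p_base) ** 2
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect)

# Detecting a lift from 10% to 12% at alpha=0.05 with 80% power
n = required_sample_size(0.10, 0.12)
print(f"~{n} users per variant")
```

For this scenario the formula yields roughly 3,800-3,900 users per variant, the kind of concrete expected output the review asks the skill to include.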

Dimension scores:

Conciseness (1 / 3): The content is entirely filler and boilerplate. It explains nothing Claude doesn't already know, repeats 'ab test analyzer' excessively, and provides zero substantive information about how to actually analyze A/B tests.

Actionability (1 / 3): There is no concrete guidance whatsoever: no code, no commands, no statistical methods, no SQL queries, no example analyses. Every section is vague and abstract, describing what the skill supposedly does rather than instructing how to do it.

Workflow Clarity (1 / 3): No workflow, steps, or process is defined. The skill claims to provide 'step-by-step guidance' but contains none. There are no validation checkpoints or any sequenced instructions.

Progressive Disclosure (1 / 3): The content is a flat, repetitive document with no meaningful structure. There are no references to detailed files, no examples section, and no navigation to deeper content. The sections that exist are superficial headers over empty platitudes.

Total: 4 / 12 (Passed)

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

9 / 11 checks passed

Validation for skill structure:

- allowed_tools_field: 'allowed-tools' contains unusual tool name(s). Result: Warning

- frontmatter_unknown_keys: Unknown frontmatter key(s) found; consider removing or moving to metadata. Result: Warning

Total: 9 / 11 (Passed)

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)

