
# detecting-data-anomalies

```shell
tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill detecting-data-anomalies
```

Identifies anomalies and outliers in datasets using machine learning algorithms. Use when analyzing data for unusual patterns, outliers, or unexpected deviations from normal behavior. Trigger with phrases like "detect anomalies", "find outliers", or "identify unusual patterns".

**Overall:** 58%


## Validation — 81%

| Criteria | Description | Result |
| --- | --- | --- |
| allowed_tools_field | `allowed-tools` contains unusual tool name(s) | Warning |
| metadata_version | `metadata` field is not a dictionary | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |

**Total:** 13 / 16 — Passed

## Implementation — 22%

This skill content is too abstract and lacks the concrete, executable guidance needed for effective anomaly detection. It reads more like a high-level outline than actionable instructions, with vague steps that don't provide specific algorithms, code examples, or threshold determination methods. The heavy reliance on external reference files without a functional quick-start section makes it difficult to use immediately.

### Suggestions

- Add executable Python code examples showing at least one complete anomaly detection workflow (e.g., Isolation Forest with scikit-learn) including data loading, preprocessing, model fitting, and threshold selection.
- Fix the workflow numbering (currently restarts at 1 mid-process) and add explicit validation checkpoints, such as verifying data quality before algorithm application and validating anomaly counts against expected contamination rates.
- Replace vague instructions like "Apply selected algorithm" with specific guidance on algorithm selection criteria (e.g., "Use Isolation Forest for high-dimensional data, LOF for density-based clustering").
- Include a minimal working example in the main skill file rather than deferring all examples to external references, so users can immediately understand the expected workflow.
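The kind of minimal working example the first suggestion asks for might look like the following sketch: an end-to-end Isolation Forest workflow with scikit-learn. The synthetic dataset, the `contamination=0.05` rate, and the variable names are illustrative assumptions, not taken from the skill itself.

```python
# Sketch of an end-to-end Isolation Forest workflow with scikit-learn.
# Data and contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic data: 200 "normal" points plus 10 injected outliers.
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
injected = rng.uniform(low=6.0, high=8.0, size=(10, 2))
X = np.vstack([normal, injected])

# contamination sets the expected fraction of anomalies, which in turn
# fixes the score threshold that predict()/fit_predict() uses.
model = IsolationForest(contamination=0.05, random_state=42)
labels = model.fit_predict(X)    # -1 = anomaly, 1 = normal
scores = model.score_samples(X)  # lower score = more anomalous

n_flagged = int((labels == -1).sum())
print(f"flagged {n_flagged} of {len(X)} points as anomalies")
```

This also illustrates the validation checkpoint the second suggestion describes: `n_flagged` can be compared against the expected contamination rate (here, roughly 5% of 210 points) to catch a misconfigured model early.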

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill includes some unnecessary padding like "This skill provides automated assistance for the described functionality" and overly detailed prerequisites that Claude would already understand. However, it's not excessively verbose. | 2 / 3 |
| Actionability | The instructions are vague and abstract with no executable code examples. Steps like "Apply selected algorithm using Bash tool" and "Normalize or scale features" provide no concrete implementation. No actual Python code or specific commands are provided. | 1 / 3 |
| Workflow Clarity | Steps are numbered but poorly organized (numbering restarts mid-workflow), lack validation checkpoints, and provide no feedback loops for error recovery. Critical details like which algorithm to select or how to determine thresholds are missing. | 1 / 3 |
| Progressive Disclosure | References to external files (implementation.md, errors.md, examples.md) are present and one level deep, but the main content is too thin to serve as a useful overview. The skill offloads too much to external files without providing actionable quick-start content. | 2 / 3 |

**Total:** 6 / 12 — Passed

## Activation — 90%

This is a well-structured skill description with excellent trigger-term coverage and clear "Use when" guidance. The main weakness is the somewhat generic capability description: it mentions ML algorithms but doesn't specify which techniques are used or what concrete outputs the skill produces. The explicit trigger-phrases section is a strong addition that aids skill selection.

### Suggestions

- Add specific concrete actions like "calculate statistical outliers", "apply isolation forest algorithms", "generate anomaly scores", or "visualize outlier distributions" to improve specificity.
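To make "calculate statistical outliers" concrete, a minimal z-score check could look like this sketch. The data, the seed, and the |z| > 3 cutoff are illustrative conventions, not values the skill prescribes.

```python
# Minimal z-score outlier check: flag points more than 3 standard
# deviations from the mean. Data and threshold are illustrative.
import numpy as np

rng = np.random.default_rng(0)
values = np.append(rng.normal(10.0, 0.5, size=50), 25.0)  # one injected outlier

z = (values - values.mean()) / values.std()
outliers = values[np.abs(z) > 3.0]  # |z| > 3 is a common cutoff
print(outliers)
```

A description that names this kind of action ("calculate z-scores", "generate anomaly scores") gives the agent a much clearer signal of what the skill actually produces than "using machine learning algorithms" alone.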

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (anomaly detection, machine learning) and general actions (identify anomalies, outliers), but lacks specific concrete actions like "calculate z-scores", "apply isolation forests", or "generate anomaly reports". | 2 / 3 |
| Completeness | Clearly answers both what (identify anomalies and outliers using ML algorithms) and when (explicit "Use when" clause for analyzing unusual patterns, plus explicit "Trigger with phrases" providing concrete examples). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would say: "detect anomalies", "find outliers", "identify unusual patterns", "unusual patterns", "outliers", "unexpected deviations". These are terms users naturally use when seeking this functionality. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clear niche focused specifically on anomaly/outlier detection with distinct triggers. Unlikely to conflict with general data analysis or other ML skills due to specific terminology around anomalies, outliers, and unusual patterns. | 3 / 3 |

**Total:** 11 / 12 — Passed
