This skill empowers Claude to identify anomalies and outliers within datasets. It leverages the anomaly-detection-system plugin to analyze data, apply appropriate machine learning algorithms, and highlight unusual data points. Use this skill when the user requests anomaly detection, outlier analysis, or identification of unusual patterns in data. Trigger this skill when the user mentions "anomaly detection," "outlier analysis," "unusual data," or requests insights into data irregularities.
Install with Tessl CLI
npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill detecting-data-anomalies55
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery
89%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a solid skill description with excellent trigger term coverage and completeness. The main weakness is the use of first-person framing ('empowers Claude') and somewhat vague capability descriptions that could be more concrete about specific detection methods or output formats. The explicit 'Use this skill when' and 'Trigger this skill when' clauses are well-constructed.
Suggestions
Replace vague actions like 'apply appropriate machine learning algorithms' with specific methods (e.g., 'detect outliers using statistical methods, isolation forests, or clustering-based approaches')
Rewrite to use third-person voice throughout - change 'This skill empowers Claude to identify' to 'Identifies anomalies and outliers within datasets'
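To make the first suggestion concrete, a description could name a specific statistical method. A minimal z-score outlier sketch (plain standard library, no plugin API assumed; the function name and threshold are illustrative):

```python
# Flag indices whose z-score exceeds a threshold (simple statistical outlier test).
def zscore_outliers(values, threshold=3.0):
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    if std == 0:
        return []  # all values identical: nothing can be an outlier
    return [i for i, v in enumerate(values) if abs(v - mean) / std > threshold]

data = [10.1, 9.8, 10.3, 10.0, 9.9, 42.0]  # 42.0 is the planted outlier
print(zscore_outliers(data, threshold=2.0))  # → [5]
```

A description that names this kind of method ("flags points more than k standard deviations from the mean") is easier for an agent to match against a user request than "apply appropriate machine learning algorithms".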
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (anomaly detection) and some actions ('analyze data', 'apply machine learning algorithms', 'highlight unusual data points'), but lacks specific concrete actions like what types of algorithms or what format outputs take. | 2 / 3 |
| Completeness | Clearly answers both what ('identify anomalies and outliers', 'analyze data', 'apply ML algorithms', 'highlight unusual data points') and when ('Use this skill when...', 'Trigger this skill when...') with explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Includes good coverage of natural terms: 'anomaly detection', 'outlier analysis', 'unusual data', 'data irregularities', 'unusual patterns' - these are terms users would naturally say when needing this capability. | 3 / 3 |
| Distinctiveness / Conflict Risk | Has a clear niche focused specifically on anomaly/outlier detection with distinct triggers that wouldn't overlap with general data analysis or other ML skills. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
Implementation
20%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill content reads like a marketing description rather than actionable instructions. It explains what anomaly detection is and when to use it, but fails to show Claude how to actually invoke the anomaly-detection-system plugin with specific commands, parameters, or code examples. The content assumes Claude needs to learn basic ML concepts rather than providing the concrete integration details needed to use the plugin.
Suggestions
Add concrete plugin invocation examples showing exact syntax: e.g., `anomaly_detection.analyze(data, algorithm='isolation_forest', threshold=0.95)`
Remove the 'How It Works' and 'Best Practices' sections that explain generic ML concepts Claude already knows
Provide specific parameter options and their effects (e.g., which algorithms are available, what threshold values mean)
Include a validation step showing how to verify detection results and handle edge cases like insufficient data
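The last suggestion (a validation step plus handling of insufficient data) could look like the following sketch. It uses a plain IQR-based detector rather than the plugin, whose actual API is not documented on this page; the function name and cutoff are illustrative:

```python
# Sketch of the suggested validation step: guard against insufficient data
# before running detection, then return the flagged indices.
def detect_outliers_iqr(values):
    if len(values) < 4:  # too few points for meaningful quartile estimates
        raise ValueError("need at least 4 data points for IQR-based detection")
    ordered = sorted(values)
    n = len(ordered)
    q1, q3 = ordered[n // 4], ordered[(3 * n) // 4]
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [i for i, v in enumerate(values) if v < lo or v > hi]

print(detect_outliers_iqr([5, 6, 5, 7, 6, 5, 99]))  # → [6]
```

Guidance of this shape (explicit preconditions, a defined failure mode, a concrete return value) is what would raise the Actionability and Workflow Clarity scores below.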
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Verbose and explains concepts Claude already knows (what anomaly detection is, what algorithms exist, basic ML concepts). The 'How It Works' section describes obvious steps, and 'Best Practices' contains generic ML advice that adds no unique value. | 1 / 3 |
| Actionability | No concrete code, commands, or executable guidance. Examples describe what the skill 'will do' abstractly rather than showing actual plugin invocations, API calls, or specific parameters. Everything is descriptive rather than instructive. | 1 / 3 |
| Workflow Clarity | Steps are listed in 'How It Works' but lack specifics on how to actually invoke the plugin, what parameters to pass, or validation checkpoints. No error handling or feedback loops for when detection fails or produces unexpected results. | 2 / 3 |
| Progressive Disclosure | Content is organized into sections but everything is inline in one file. No references to external documentation for the plugin API, algorithm details, or advanced configuration. The structure exists but content that could be separated (algorithm selection guide, data preprocessing steps) is mixed in. | 2 / 3 |
| Total | | 6 / 12 (Passed) |
Validation
68%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 16 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| license_field | 'license' field is missing | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| body_output_format | No obvious output/return/format terms detected; consider specifying expected outputs | Warning |
| Total | | 11 / 16 Passed |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.