Process identify anomalies and outliers in datasets using machine learning algorithms. Use when analyzing data for unusual patterns, outliers, or unexpected deviations from normal behavior. Trigger with phrases like "detect anomalies", "find outliers", or "identify unusual patterns".
Quality: 70% (Does it follow best practices?)
Impact: Pending (No eval scenarios have been run)
Validation: Passed (No known issues)

Optimize this skill with Tessl:
npx tessl skill review --optimize ./plugins/ai-ml/anomaly-detection-system/skills/detecting-data-anomalies/SKILL.md

Quality
Discovery: 89%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a solid skill description that clearly communicates its purpose and provides explicit trigger guidance. Its main weakness is the somewhat limited specificity of concrete actions—it could benefit from listing more specific capabilities like supported data formats, algorithms, or output types. The trigger terms and completeness are strong points.
Suggestions
Add more specific concrete actions such as 'applies isolation forests, statistical tests, and clustering-based methods' or 'generates anomaly scores, visualizations, and flagged records' to improve specificity.
Note: the first word 'Process' appears to be a fragment or typo ('Process identify anomalies')—clean up the grammar to read more clearly, e.g., 'Identifies anomalies and outliers in datasets using machine learning algorithms.'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names the domain (anomaly detection in datasets using ML) and mentions some actions ('identify anomalies and outliers'), but doesn't list multiple specific concrete actions like which algorithms are used, what output formats are produced, or what types of datasets are supported. | 2 / 3 |
| Completeness | Clearly answers both 'what' (identify anomalies and outliers in datasets using ML algorithms) and 'when' (explicit 'Use when' clause for analyzing data for unusual patterns, plus explicit trigger phrases). Both components are present and explicit. | 3 / 3 |
| Trigger Term Quality | Good coverage of natural trigger terms: 'detect anomalies', 'find outliers', 'identify unusual patterns', 'unusual patterns', 'outliers', 'unexpected deviations from normal behavior'. These are terms users would naturally use when seeking this capability. | 3 / 3 |
| Distinctiveness / Conflict Risk | The description carves out a clear niche around anomaly/outlier detection specifically, with distinct trigger terms that are unlikely to conflict with general data analysis or other ML skills. The focus on 'anomalies', 'outliers', and 'unusual patterns' is quite specific. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation: 50%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a well-structured overview of anomaly detection with good algorithm selection guidance and a useful error handling table. However, it falls short on actionability by lacking any executable code examples for a code-heavy task, and the workflow lacks validation checkpoints critical for an iterative ML pipeline. The referenced bundle files don't exist, undermining the progressive disclosure structure.
Suggestions
Add executable Python code examples for at least one algorithm (e.g., Isolation Forest with StandardScaler), including model fitting, scoring, and threshold application — this is critical for a code-oriented ML skill. (A sketch of what such an example might look like follows these suggestions.)
Insert explicit validation checkpoints in the workflow: verify scaling output, check anomaly score distribution before thresholding, and add a feedback loop step for threshold refinement based on flagged results.
Either provide the referenced implementation.md and errors.md bundle files, or inline the most critical implementation details (code patterns) directly in the skill.
Trim the Resources section (Claude knows these libraries) and condense the examples into one concrete example with actual code rather than three descriptive scenarios.
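
To make the first suggestion concrete, here is a minimal sketch of the kind of snippet the skill could inline. It assumes scikit-learn and pandas are available; the file name, feature columns, and contamination rate are illustrative placeholders, not details taken from the reviewed skill.

```python
# Minimal Isolation Forest sketch (assumed libraries: pandas, scikit-learn).
# The file name and contamination rate are placeholders, not from the skill itself.
import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("data.csv")                       # hypothetical numeric dataset
X = StandardScaler().fit_transform(df)             # scale features before fitting

model = IsolationForest(contamination=0.05, random_state=42)
model.fit(X)

df["anomaly_score"] = model.decision_function(X)   # lower scores = more anomalous
df["is_anomaly"] = model.predict(X) == -1          # predict() returns -1 for outliers

flagged = df[df["is_anomaly"]]
print(f"Flagged {len(flagged)} of {len(df)} records as anomalies")
```

A single concrete cell like this could also stand in for the three descriptive scenarios the last suggestion asks to condense.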
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is mostly efficient but includes some unnecessary verbosity. The Overview section restates what's already in the description. The Resources section lists general references Claude already knows about. The examples are descriptive rather than executable, adding bulk without proportional value. | 2 / 3 |
| Actionability | The instructions provide a clear 10-step process with algorithm selection guidance, but lack any executable code examples. For a Python-based ML skill, the absence of concrete code snippets (e.g., fitting an Isolation Forest, applying StandardScaler) means Claude must generate everything from general knowledge rather than following copy-paste-ready patterns. | 2 / 3 |
| Workflow Clarity | Steps are clearly sequenced and the algorithm selection decision tree is helpful. However, there are no explicit validation checkpoints: no step to verify scaling was applied correctly, no model performance check before generating the final report, and no feedback loop for threshold tuning. For a multi-step ML pipeline, these validation gaps are significant (see the sketch after this table). | 2 / 3 |
| Progressive Disclosure | The skill references `${CLAUDE_SKILL_DIR}/references/implementation.md` and `errors.md`, which is good structure, but no bundle files are provided, so these references are broken. The main file itself is somewhat long, with the error table, examples, and resources all inline when some could be in referenced files. The references that do exist are well-signaled and one level deep. | 2 / 3 |
| Total | | 8 / 12 Passed |
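
As a rough illustration of the checkpoint gap noted in the Workflow Clarity row, this is a hedged sketch of what scaling verification, score-distribution inspection, and threshold refinement could look like; the helper names, quantiles, and flag-rate limit are assumptions for illustration, not part of the reviewed skill.

```python
# Illustrative validation checkpoints (assumed helpers, not from the reviewed skill).
import numpy as np

def check_scaling(X_scaled):
    # Checkpoint 1: confirm features are roughly zero-mean, unit-variance after StandardScaler.
    if not (np.abs(X_scaled.mean(axis=0)).max() < 1e-2 and
            np.abs(X_scaled.std(axis=0) - 1.0).max() < 1e-1):
        raise ValueError("Scaling check failed: rerun StandardScaler before scoring")

def refine_threshold(scores, start_quantile=0.05, max_flag_rate=0.10):
    # Checkpoint 2: inspect the score distribution before committing to a cutoff.
    print("score quantiles:", np.quantile(scores, [0.01, 0.05, 0.50, 0.95, 0.99]))
    # Checkpoint 3: feedback loop that tightens the cutoff if too many records get flagged.
    quantile = start_quantile
    threshold = np.quantile(scores, quantile)
    while (scores < threshold).mean() > max_flag_rate and quantile > 1e-4:
        quantile /= 2
        threshold = np.quantile(scores, quantile)
    return threshold
```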
Validation: 81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 9 / 11 Passed
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |