Anomaly Detector — Auto-activating skill for Data Analytics. Triggers on: anomaly detector, anomaly detector. Part of the Data Analytics skill category.
- Quality: 0% — does it follow best practices?
- Impact: 93% (0.97x average score across 3 eval scenarios)
- Status: Passed, no known issues
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./planned-skills/generated/12-data-analytics/anomaly-detector/SKILL.md`

Quality
Discovery — 0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is extremely weak across all dimensions. It is essentially a placeholder that names the skill and its category but provides no concrete actions, no meaningful trigger terms, and no guidance on when Claude should select it. It would be nearly useless for skill selection among multiple available skills.
Suggestions
Add specific concrete actions the skill performs, e.g., 'Detects outliers and anomalies in time-series data, flags unusual patterns, identifies statistical deviations, and generates anomaly reports.'
Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks about outliers, unusual data points, anomaly detection, spike detection, data irregularities, or unexpected patterns in datasets.'
Remove the duplicate trigger term and expand with natural language variations users would actually say, such as 'outlier', 'abnormal values', 'unexpected behavior', 'data drift', or 'suspicious data points'.
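Taken together, a revised description might look like the following. This is a sketch only; the exact frontmatter shape depends on the skill spec in use:

```yaml
---
name: anomaly-detector
description: >
  Detects outliers and anomalies in time-series and tabular data using
  statistical methods such as Z-score, IQR, and isolation forests, and
  generates anomaly reports. Use when the user asks about outliers,
  abnormal values, spike detection, data drift, unusual patterns, or
  suspicious data points in a dataset.
---
```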
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description only names the skill ('Anomaly Detector') and its category ('Data Analytics') but provides no concrete actions. There is no mention of what the skill actually does—no verbs like 'detect', 'flag', 'analyze', or any specific capabilities. | 1 / 3 |
| Completeness | The description fails to answer both 'what does this do' and 'when should Claude use it'. There is no explanation of capabilities and no explicit 'Use when...' clause—only a redundant trigger phrase and a category label. | 1 / 3 |
| Trigger Term Quality | The only trigger terms listed are 'anomaly detector' repeated twice. There are no natural variations a user might say such as 'outlier detection', 'unusual patterns', 'spike detection', 'data anomalies', or 'find outliers'. | 1 / 3 |
| Distinctiveness / Conflict Risk | Being labeled generically under 'Data Analytics' with no specific scope or distinct triggers means this could easily conflict with any other data analytics skill. Nothing distinguishes it from general data analysis, statistical analysis, or other analytics skills. | 1 / 3 |
| Total | | 4 / 12 Passed |
Implementation — 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is essentially a placeholder with no substantive content. It contains no actionable instructions, no code examples, no specific anomaly detection techniques (e.g., Z-score, IQR, isolation forests, DBSCAN), and no workflow guidance. Every section restates the skill name without adding value.
Suggestions
Add concrete, executable code examples for at least one anomaly detection method (e.g., Z-score in SQL, isolation forest in Python) with sample input/output.
Define a clear workflow: data preparation → method selection → detection → validation → reporting, with explicit validation checkpoints.
Remove all generic filler ('Provides step-by-step guidance', 'Follows industry best practices') and replace with specific techniques, thresholds, and decision criteria for choosing between anomaly detection approaches.
Add references to supporting files or inline sections for advanced topics like multivariate anomaly detection, time-series anomalies, or domain-specific tuning.
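As a starting point for the first suggestion, here is a minimal self-contained sketch of two of the named methods (Z-score and Tukey's IQR fences), using only the Python standard library; function names and thresholds are illustrative, not part of the skill:

```python
import statistics

def zscore_outliers(values, threshold=3.0):
    """Flag points more than `threshold` sample standard deviations from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []
    return [x for x in values if abs(x - mean) / stdev > threshold]

def iqr_outliers(values, k=1.5):
    """Flag points outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's fences)."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [x for x in values if x < lo or x > hi]

data = [10, 11, 9, 10, 12, 11, 10, 95]  # 95 is an obvious spike
print(zscore_outliers(data, threshold=2.0))  # → [95]
print(iqr_outliers(data))                    # → [95]
```

Note the methods disagree in general: Z-score assumes roughly normal data and is itself skewed by extreme points, while IQR fences are robust to them — the kind of decision criterion the suggestion above asks the skill to spell out.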
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is padded with generic filler that tells Claude nothing it doesn't already know. Phrases like 'Provides step-by-step guidance' and 'Follows industry best practices' are vacuous. The entire file could be replaced with a few actionable lines. | 1 / 3 |
| Actionability | There is zero concrete guidance—no code, no commands, no algorithms, no specific anomaly detection techniques, no SQL examples, no statistical methods. It describes what the skill does in abstract terms without ever instructing Claude how to do anything. | 1 / 3 |
| Workflow Clarity | No workflow, steps, or process is defined. There are no validation checkpoints, no sequencing, and no indication of how to actually perform anomaly detection end-to-end. | 1 / 3 |
| Progressive Disclosure | The content is a monolithic block of generic text with no references to supporting files, no structured navigation, and no separation of overview from detail. There are no bundle files to reference either, but the content itself doesn't even attempt meaningful organization. | 1 / 3 |
| Total | | 4 / 12 Passed |
Validation — 81% (9 / 11 checks passed)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 9 / 11 Passed | |
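The two warnings would typically be resolved in the SKILL.md frontmatter itself. The sketch below is an assumption inferred from the warning text, not taken from the actual file — the tool names and `metadata` key are illustrative:

```yaml
---
name: anomaly-detector
description: Detects outliers and anomalies in datasets.
# restrict allowed-tools to recognized tool names only
allowed-tools: Read, Grep, Bash
# unknown top-level keys moved under metadata
metadata:
  category: data-analytics
---
```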