
cost-anomaly-detection

Use when proactively scanning for cost anomalies, unusual spending, unexpected charges, or irregular patterns — during weekly reviews, after incidents, or when something looks off

Overall score: 63 (1.25x)

Quality: 44%
Does it follow best practices?

Impact: 100% (1.25x)
Average score across 3 eval scenarios

Security (by Snyk): Passed, no known issues

Optimize this skill with Tessl:

```shell
npx tessl skill review --optimize ./plugins/cost-analyst/skills/cost-anomaly-detection/SKILL.md
```

Quality

Discovery: 54%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is essentially a 'Use when...' clause without a corresponding 'what it does' section. While it excels at providing natural trigger terms and scenarios for when to activate the skill, it completely fails to describe what concrete actions the skill performs. The description needs to be restructured to first state capabilities, then provide the trigger guidance.

Suggestions

Add a concrete 'what it does' clause before the 'Use when' section, e.g., 'Analyzes cloud billing data, identifies cost spikes, compares spending trends across services, and generates anomaly reports.'

Specify the domain more clearly (e.g., AWS/GCP/Azure costs, SaaS subscriptions, infrastructure spending) to improve distinctiveness and reduce conflict risk with other financial analysis skills.

Restructure to follow the pattern: '[Concrete actions]. Use when [triggers].' rather than leading with only the trigger clause.
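Putting those suggestions together, a restructured frontmatter description might read as follows. This is a hypothetical rewording sketched from the suggestions, not the skill's actual frontmatter:

```yaml
description: >
  Analyzes cloud billing data, identifies cost spikes, compares spending
  trends across services, and generates anomaly reports. Use when
  proactively scanning for cost anomalies, unusual spending, unexpected
  charges, or irregular patterns — during weekly reviews, after incidents,
  or when something looks off.
```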

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description lacks concrete actions. It mentions 'scanning for cost anomalies, unusual spending, unexpected charges, or irregular patterns' but doesn't specify what concrete actions the skill performs (e.g., 'generates reports', 'queries billing APIs', 'compares month-over-month spend'). The language is abstract and doesn't describe what the skill actually does. | 1 / 3 |
| Completeness | The 'when' is well-covered with explicit trigger scenarios ('during weekly reviews, after incidents, or when something looks off'), but the 'what' is essentially missing. It says to use the skill when scanning but never explains what the skill actually does or produces. The description is entirely a 'Use when...' clause without a 'what it does' clause. | 2 / 3 |
| Trigger Term Quality | Strong natural trigger terms are present: 'cost anomalies', 'unusual spending', 'unexpected charges', 'irregular patterns', 'weekly reviews', 'incidents', 'something looks off'. These are phrases users would naturally use when concerned about cloud/infrastructure costs. | 3 / 3 |
| Distinctiveness / Conflict Risk | The focus on cost anomalies and spending patterns provides some distinctiveness, but without specifying the domain (cloud costs? SaaS billing? general finance?) or concrete actions, it could overlap with general financial analysis or monitoring skills. | 2 / 3 |

Total: 8 / 12 (Passed)

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is comprehensive in coverage but severely bloated, with the output template and reference material consuming the majority of the document. The actual actionable workflow is buried among extensive placeholder tables, exhaustive anomaly type catalogs, and general cloud knowledge that Claude already possesses. The skill would benefit enormously from extracting the output template and anomaly type reference into separate files, leaving a lean procedural core.

Suggestions

Extract the output format template (sections 1-11) into a separate reference file like `anomaly-report-template.md` and reference it with a single link, reducing the SKILL.md by ~60%.

Move 'Common Anomaly Types', 'Advanced Techniques', and 'Tips for Effective Anomaly Detection' into a separate reference file — these are general knowledge that Claude largely already has.

Replace pseudocode API calls (get_cost_data) with actual tool names and parameter schemas so the guidance is directly executable.

Add explicit validation checkpoints within the workflow, e.g., 'If baseline period has fewer than 14 data points, extend the range before proceeding' and 'After Step 2, if no total-cost anomalies exceed 1 std dev, note low anomaly likelihood before continuing deeper analysis.'
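The statistical core the review refers to can be sketched with such a checkpoint built in. This is a minimal illustration under stated assumptions, not the skill's actual code: the data source (the pseudocode `get_cost_data`, or a real billing API) is assumed to supply plain lists of daily cost values.

```python
import statistics

def detect_anomalies(baseline, current, threshold=2.0, min_points=14):
    """Return (index, cost) pairs in `current` whose z-score against
    `baseline` exceeds `threshold`."""
    # Validation checkpoint: require enough baseline history first,
    # mirroring the 'fewer than 14 data points' rule suggested above.
    if len(baseline) < min_points:
        raise ValueError(
            f"Baseline has {len(baseline)} points; need {min_points}. Extend the range."
        )
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        # Perfectly flat baseline: any deviation counts as anomalous.
        return [(i, c) for i, c in enumerate(current) if c != mean]
    return [(i, c) for i, c in enumerate(current)
            if abs(c - mean) / stdev > threshold]

baseline = [100, 102, 98, 101, 99, 103, 97, 100, 102, 98, 101, 99, 100, 102]
print(detect_anomalies(baseline, [101, 99, 250, 100]))  # → [(2, 250)]
```

Raising (or at least logging) on an insufficient baseline, rather than silently proceeding, is what turns the suggested checkpoint into directly executable guidance.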

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose at ~400+ lines. Massive output template sections with placeholder tables, exhaustive lists of anomaly types, causes, and indicators that Claude already knows. The 'Common Anomaly Types' and 'Advanced Techniques' sections explain general cloud cost concepts rather than providing tool-specific guidance. The output format alone is over 200 lines of template that could be condensed to a brief structural outline. | 1 / 3 |
| Actionability | Contains some executable Python snippets (z-score calculation, statistical detection) and pseudocode-style API calls, but the API calls use invented function signatures (get_cost_data) without specifying actual tool names or parameters. Many steps are descriptive ('Look for:', 'Calculate for each dimension') rather than providing concrete executable code. | 2 / 3 |
| Workflow Clarity | The 11-step workflow is clearly sequenced and logically ordered, from baseline establishment through multi-dimensional analysis. However, there are no validation checkpoints or feedback loops: no step says 'verify the baseline data is sufficient before proceeding' or 'if no anomalies found at this level, skip deeper analysis.' For a destructive-adjacent operation (presenting anomaly findings that could trigger action), the lack of false-positive verification steps within the workflow, rather than only at the end, is a gap. | 2 / 3 |
| Progressive Disclosure | References external files (best-practices.md, cloudzero-tools-reference.md, etc.) in a 'See Also' section, which is good. However, the SKILL.md itself is monolithic: the enormous output format template, common anomaly types catalog, and advanced techniques sections should live in separate reference files rather than inline. The inline content vastly exceeds what belongs in an overview skill file. | 2 / 3 |

Total: 7 / 12 (Passed)
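Following the progressive-disclosure critique, the inline template and reference catalogs could collapse to links. A hypothetical slimmed-down excerpt of SKILL.md (the file names under `references/` are illustrative, not the repository's actual layout):

```markdown
## Output format

Produce the report using the structure in
[references/anomaly-report-template.md](references/anomaly-report-template.md).

## Further reading

- [references/anomaly-types.md](references/anomaly-types.md): catalog of common anomaly types, causes, and indicators
- [references/advanced-techniques.md](references/advanced-techniques.md): seasonality handling and multi-dimensional drill-downs
```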

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| skill_md_line_count | SKILL.md is long (622 lines); consider splitting into references/ and linking | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |

Total: 9 / 11 (Passed)
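For the `frontmatter_unknown_keys` warning, the fix the check itself points at is nesting non-spec keys under `metadata`. A hypothetical before/after (the `author` and `version` keys are invented for illustration):

```yaml
# Before: top-level keys the validator doesn't recognize
name: cost-anomaly-detection
author: example-team
version: 1.2.0

# After: unknown keys nested under metadata
name: cost-anomaly-detection
metadata:
  author: example-team
  version: 1.2.0
```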

Repository: Cloudzero/cloudzero-claude-marketplace (Reviewed)

