
# custom-dimension-analysis

**Description:** Use when analyzing costs by organization-specific dimensions like teams, products, business units, or applications for showback, chargeback, or business-aligned cost reporting

**Overall score:** 48

**Quality:** 53% — Does it follow best practices?

**Impact:** No eval scenarios have been run.

**Security** (by Snyk): Passed — no known issues.

Optimize this skill with Tessl: `npx tessl skill review --optimize ./plugins/cost-analyst/skills/custom-dimension-analysis/SKILL.md`

## Quality

### Discovery — 72%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description excels at trigger term coverage and distinctiveness, with domain-specific terms like 'showback', 'chargeback', and organizational dimensions that clearly define its niche. However, it lacks explicit capability statements — it tells Claude when to use the skill but not what the skill actually does (e.g., generate reports, allocate costs, create visualizations). Adding concrete actions would significantly improve it.

**Suggestions**

- Add explicit capability statements before the 'Use when' clause, e.g., 'Allocates and reports cloud costs across organizational dimensions, generates showback/chargeback reports, and maps spending to teams or products.'
- Include the 'what it does' component with concrete actions like 'tag resources', 'generate cost allocation reports', 'create cost breakdowns by team/product' to complement the strong trigger guidance.
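The first suggestion could be sketched as revised frontmatter. The wording below is a hypothetical illustration assembled from the suggestions above, not the maintainer's actual metadata:

```yaml
# Hypothetical revised frontmatter — illustrative wording only
name: custom-dimension-analysis
description: >-
  Allocates and reports cloud costs across organizational dimensions,
  generates showback/chargeback reports, and maps spending to teams or
  products. Use when analyzing costs by organization-specific dimensions
  like teams, products, business units, or applications.
```

Leading with the capability statements and keeping the 'Use when' trigger clause afterward addresses both suggestions at once.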

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description names the domain (cost analysis by organizational dimensions) and mentions some actions implicitly (showback, chargeback, cost reporting), but doesn't list concrete actions like 'generate reports', 'allocate costs', 'create dashboards', or 'tag resources'. | 2 / 3 |
| Completeness | The description starts with 'Use when', which addresses the 'when' aspect well, but the 'what does this do' part is weak — it describes the context/trigger but doesn't clearly state what concrete actions or outputs the skill produces. | 2 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'costs', 'teams', 'products', 'business units', 'applications', 'showback', 'chargeback', 'cost reporting'. These cover a good range of terms a user would naturally use when needing this skill. | 3 / 3 |
| Distinctiveness / Conflict Risk | The combination of organization-specific cost dimensions (teams, products, business units) with showback/chargeback terminology creates a clear niche that is unlikely to conflict with general cost analysis or other financial skills. | 3 / 3 |
| **Total** | | **10 / 12** |

Result: **Passed**

### Implementation — 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides a thorough framework for custom dimension cost analysis but suffers severely from verbosity. The 11-section output format template with placeholder tables, 5 common patterns, advanced techniques, and tips sections bloat the file far beyond what Claude needs. The core workflow (Steps 1-9) and API examples are useful but are buried in excessive templating that Claude could generate independently.

**Suggestions**

- Reduce the output format section to a brief description of key sections (executive summary, cost breakdown, trends, recommendations) rather than full placeholder tables — Claude can format tables without templates.
- Move 'Common Custom Dimension Patterns', 'Advanced Techniques', and 'Tips for Effective Analysis' to a separate reference file to keep the SKILL.md focused on the core procedure.
- Add explicit validation steps after API calls (e.g., 'Verify dimension values returned are non-empty before proceeding' or 'If unallocated > 50%, flag data quality issue').
- Replace pseudocode snippets like `total_cost = ... # from get_cost_data()` with complete executable examples showing how to extract values from actual API response structures.
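The last two suggestions could be sketched together as one executable example. The response shape and the `summarize_costs` helper below are assumptions for illustration, not the plugin's documented `get_cost_data()` schema:

```python
# Hypothetical sketch: the response shape here is an assumption for
# illustration, not the plugin's actual get_cost_data() schema.

def summarize_costs(response):
    """Sum per-dimension costs and flag weak allocation coverage."""
    rows = response.get("results", [])
    if not rows:
        # Validation checkpoint: empty results usually mean bad filters
        # or an out-of-range date window.
        raise ValueError("get_cost_data() returned no rows; check filters and date range")

    total_cost = sum(row["cost"] for row in rows)
    unallocated = sum(
        row["cost"] for row in rows
        if row.get("dimension_value") in (None, "", "unallocated")
    )

    summary = {"total": total_cost, "unallocated": unallocated}
    # Validation checkpoint: if most spend is unallocated, the dimension
    # data is too sparse to report on reliably.
    if total_cost and unallocated / total_cost > 0.5:
        summary["warning"] = "unallocated > 50% of spend; flag data quality issue"
    return summary

example = {"results": [
    {"dimension_value": "team-a", "cost": 120.0},
    {"dimension_value": None, "cost": 300.0},  # ~71% unallocated, warning fires
]}
print(summarize_costs(example))
```

Extracting `total_cost` from a concrete structure and checking the unallocated ratio inline is the kind of complete, self-validating snippet the review asks for in place of `total_cost = ... # from get_cost_data()`.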

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose at ~400+ lines. The massive output format templates (11 sections with placeholder tables), common patterns, advanced techniques, and tips sections are largely things Claude can generate on its own. The showback report template alone is unnecessary padding. Much of this describes what to analyze rather than how to use the specific API. | 1 / 3 |
| Actionability | Provides concrete API call examples (get_cost_data, get_dimension_values) with specific parameters, and includes some executable Python snippets. However, many code blocks are pseudocode-like (e.g., `total_cost = ... # from get_cost_data()`), and the output format sections are templates rather than executable guidance. The optimization scoring formula is pseudocode. | 2 / 3 |
| Workflow Clarity | Steps 1-9 provide a clear sequence for the analysis workflow, and the prerequisite to load org context first is well-stated. However, there are no explicit validation checkpoints or error recovery steps. For a multi-step process involving API calls that could fail or return unexpected data, the absence of validation/feedback loops is a gap. | 2 / 3 |
| Progressive Disclosure | References to external files (best-practices.md, dimensions-reference.md, etc.) are well-signaled in the 'See Also' section. However, the SKILL.md itself is monolithic — the enormous output format templates, 5 common patterns, advanced techniques, and tips sections should be in separate reference files rather than inline. The content that should be split out overwhelms the core procedure. | 2 / 3 |
| **Total** | | **7 / 12** |

Result: **Passed**

### Validation — 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 checks passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| skill_md_line_count | SKILL.md is long (572 lines); consider splitting into references/ and linking | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | | **9 / 11** |

Result: **Passed**

**Repository:** Cloudzero/cloudzero-claude-marketplace (reviewed)
