Use this skill when the user asks to "check data usage", "list TCO policies", "view quotas", "reduce Coralogix costs", "optimize observability spend", "lower our logging bill", "data budget exceeded", "TCO policy", "retention tier", "archive storage", "ingestion costs", "frequent search vs archive", "why is our bill so high", "spending too much on logs", "data retention settings", "quota rules", "cost analysis", "usage breakdown", "optimize log volume", "control data ingestion", "archive cold data", "billing units", "plan consumption", "daily plan", "overage", "PAYG", "usage anomaly", "usage trend", "cx_data_usage_units", or wants to investigate, analyze, or reduce Coralogix data costs.
Overall: 68
Quality: 61% (Does it follow best practices?)
Impact: Pending (No eval scenarios have been run)
Issues: Passed (No known issues)
Optimize this skill with Tessl:
`npx tessl skill review --optimize ./skills/cx-cost-optimization/SKILL.md`

Quality
Discovery
44%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is essentially a long list of trigger phrases with no explanation of what the skill actually does. While it excels at providing natural keywords users would say and is clearly distinctive to the Coralogix cost/data domain, it completely fails to describe the skill's capabilities or concrete actions. The description uses second person ('Use this skill when the user asks'), which is acceptable, but the fundamental problem is the absence of any 'what' component.
Suggestions
- Add a clear capability statement before the trigger list, e.g., 'Analyzes Coralogix data usage, lists and manages TCO policies, generates cost breakdowns, and recommends optimization strategies for reducing observability spend.'
- Restructure to lead with concrete actions (what the skill does) followed by a 'Use when...' clause, rather than having the entire description be trigger phrases.
- Trim the trigger phrase list to the most essential 8-10 terms to reduce verbosity while maintaining coverage.
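Applied together, these suggestions might produce something like the sketch below. The wording is illustrative only: it combines the capability statement from the first suggestion with a trimmed set of the original trigger terms, and is not the skill's actual frontmatter.

```bash
# Illustrative description rewrite (not the skill's real frontmatter); shown
# only to demonstrate the "capabilities first, then 'Use when...'" structure.
cat <<'EOF'
description: >-
  Analyzes Coralogix data usage, lists and manages TCO policies, generates
  cost breakdowns, and recommends optimization strategies for reducing
  observability spend. Use when the user wants to check data usage, view
  quotas, investigate overage or usage anomalies, adjust retention tiers or
  TCO policies, or understand why the Coralogix bill is high
  (e.g. "PAYG", "archive storage", "cx_data_usage_units").
EOF
```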
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description lists no concrete actions or capabilities. It only provides trigger phrases but never states what the skill actually does (e.g., 'Analyzes Coralogix data usage', 'Lists TCO policies', 'Generates cost breakdowns'). There are no verbs describing the skill's capabilities. | 1 / 3 |
| Completeness | While the 'when' is thoroughly covered with extensive trigger phrases, the 'what' is almost entirely missing. The description never explains what the skill actually does—it only says when to use it. This is a critical gap that makes it incomplete despite the strong trigger coverage. | 1 / 3 |
| Trigger Term Quality | The description provides extensive coverage of natural keywords users would say, including variations like 'reduce Coralogix costs', 'why is our bill so high', 'spending too much on logs', 'data budget exceeded', 'usage breakdown', and technical terms like 'TCO policy', 'PAYG', 'cx_data_usage_units'. | 3 / 3 |
| Distinctiveness / Conflict Risk | The description is highly specific to Coralogix cost management and data usage, with domain-specific terms like 'TCO policy', 'retention tier', 'cx_data_usage_units', and 'Coralogix'. This is unlikely to conflict with other skills. | 3 / 3 |
| Total | | 8 / 12 (Passed) |
Implementation
77%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a strong, highly actionable skill with excellent workflow clarity and concrete executable commands throughout. Its main weakness is length — the document packs substantial reference material (jq examples, PromQL queries, metrics definitions) inline rather than splitting into supporting files, which hurts both conciseness and progressive disclosure. The safety guardrails around write operations and the structured diagnostic workflow are particularly well done.
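For context, the kind of inline reference material this summary refers to is typically a jq pipeline over a usage export. The sketch below is an assumption-heavy illustration: the JSON shape (a `.records` array with `pillar` and `units` fields) and the file name are invented here and are not taken from the skill.

```bash
# Assumed input shape: {"records": [{"pillar": "logs", "units": 12.5}, ...]}
# Neither the schema nor the file name comes from the skill itself.
jq -r '
  .records
  | group_by(.pillar)                                        # bucket records per pillar
  | map({pillar: .[0].pillar, units: (map(.units) | add)})   # sum units in each bucket
  | sort_by(-.units)                                         # biggest consumers first
  | .[]
  | "\(.pillar)\t\(.units)"
' usage-export.json
```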
Suggestions
- Extract the jq examples section and PromQL queries into separate reference files (e.g., JQ_EXAMPLES.md, PROMQL_REFERENCE.md) and link to them from the main skill to improve progressive disclosure and reduce token cost.
- Remove duplicate commands that appear in both the workflow steps and the jq examples section — the workflow already demonstrates the key patterns, so the examples section should only add net-new queries.
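If the extraction suggestion is followed, the mechanical part is small. A rough sketch, using the file names from the suggestion above and assuming the skill sits at skills/cx-cost-optimization/:

```bash
# Create the reference files named in the suggestion (paths are assumptions).
touch skills/cx-cost-optimization/JQ_EXAMPLES.md \
      skills/cx-cost-optimization/PROMQL_REFERENCE.md

# Move the inline jq examples and PromQL queries into those files by hand,
# leave a short pointer to each in SKILL.md, then re-run the review:
npx tessl skill review --optimize ./skills/cx-cost-optimization/SKILL.md
```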
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is mostly efficient and avoids explaining basic concepts, but it's quite long (~200+ lines) with some redundancy between sections (e.g., the jq examples section repeats commands already shown in the workflow, and the common optimization patterns table partially duplicates workflow guidance). Some tightening is possible. | 2 / 3 |
| Actionability | Excellent actionability throughout — every step includes copy-paste-ready bash commands with concrete flags, jq pipelines for analysis, and specific CLI patterns. The common optimization patterns table maps symptoms directly to diagnosis commands and remediation actions. | 3 / 3 |
| Workflow Clarity | The 6-step cost investigation workflow is clearly sequenced with a logical progression (measure → review policies → check retention → review quotas → check archive → recommend). Validation is explicit ('Verify after changes: Re-run the diagnosis commands'), write safety is strongly emphasized with the --yes approval gate, and the UTC-day bucketing rules provide important guardrails for metrics analysis. | 3 / 3 |
| Progressive Disclosure | The content is well-structured with clear section headers and tables, but it's a monolithic document that could benefit from splitting detailed reference material (jq examples, PromQL queries, metrics reference) into separate files. The single cross-reference to cx-telemetry-querying is good but the main file carries a lot of inline detail that could be offloaded. | 2 / 3 |
| Total | | 10 / 12 (Passed) |
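The write-safety pattern praised in the Workflow Clarity row boils down to: preview the change, then apply it only behind an explicit approval flag. The commands below are placeholders (cx-admin and its subcommands are invented for illustration); only the --yes gate and the re-run-the-diagnosis advice come from the review.

```bash
# Placeholder CLI: `cx-admin` and its flags are illustrative, not a real tool.
# The pattern is what matters: review first, apply only with the --yes gate.
cx-admin tco-policy update --id "$POLICY_ID" --priority low          # preview the change
cx-admin tco-policy update --id "$POLICY_ID" --priority low --yes    # apply after approval

# Verify after changes: re-run the same diagnosis commands used earlier.
```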
Validation
90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 (Passed) |
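The only warning concerns unknown frontmatter keys, and the message itself names the fix: remove the key or nest it under metadata. A minimal before/after sketch, assuming a hypothetical offending key called owner (the report does not say which key actually triggered the warning):

```bash
# Hypothetical fix for frontmatter_unknown_keys; `owner` is an assumed key name.
cat <<'EOF'
# before
owner: observability-team        # top-level key the validator does not recognize

# after
metadata:
  owner: observability-team      # nested under metadata, as the warning suggests
EOF
```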