cx-data-pipeline

Use this skill when the user asks to "set up parsing", "create parsing rule", "extract fields from logs", "regex extraction", "log parsing", "enrich logs", "add context to logs", "custom enrichment table", "lookup table", "geo enrichment", "create metric from logs", "events to metrics", "convert logs to metrics", "generate metrics from events", "recording rule", "precomputed metrics", "PromQL recording", "configure data pipeline", "transform log data", "data processing rules", "rule group", "enrichment settings", "E2M definition", "labels cardinality", "bulk delete rules", "enrichment limits", "search enrichment table", or wants to configure how Coralogix processes, enriches, or transforms ingested data.

Score: 79

Quality: 74% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/cx-data-pipeline/SKILL.md

Quality

Discovery: 72%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description excels at trigger term coverage with an exhaustive list of user phrases that would activate the skill, and it's clearly scoped to the Coralogix platform. However, it reads as a long list of trigger phrases rather than a well-structured skill description — it lacks a clear declarative statement of what the skill does (its capabilities), making it feel like a keyword index rather than a proper description.

Suggestions

Add a clear opening sentence describing what the skill does in third person, e.g., 'Configures Coralogix data processing pipelines including log parsing rules, field extraction, enrichment tables, and events-to-metrics conversions.'

Restructure to separate the 'what it does' from the 'when to use it' — lead with capabilities, then follow with 'Use when...' and the trigger terms.
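Applied to the skill's frontmatter, the restructured description might look like the following sketch. The wording is illustrative, not the skill's actual frontmatter:

```yaml
---
name: cx-data-pipeline
description: >
  Configures Coralogix data processing pipelines, including log parsing
  rules, field extraction, enrichment tables, and events-to-metrics (E2M)
  conversions. Use when the user asks to "set up parsing", "create parsing
  rule", "extract fields from logs", "enrich logs", "events to metrics",
  "recording rule", or otherwise wants to configure how Coralogix
  processes, enriches, or transforms ingested data.
---
```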

Dimension scores:

Specificity: 2 / 3

The description mentions several domain-specific actions like 'parsing rules', 'regex extraction', 'enrichment tables', 'events to metrics', and 'recording rules', but it reads more like a keyword dump than a structured list of concrete capabilities. It lacks clear statements of what the skill actually does (e.g., 'creates parsing rules', 'configures enrichment tables').

Completeness: 2 / 3

The description has a strong 'when' component via the extensive trigger phrase list and the closing clause about configuring Coralogix data processing. However, the 'what does this do' part is weak: it never clearly states the skill's capabilities in declarative form. It is essentially all triggers with minimal explanation of what the skill actually accomplishes.

Trigger Term Quality: 3 / 3

The description includes an extensive list of natural trigger terms that users would actually say, covering many variations: 'set up parsing', 'create parsing rule', 'extract fields from logs', 'regex extraction', 'log parsing', 'enrich logs', 'custom enrichment table', 'lookup table', 'geo enrichment', 'create metric from logs', 'events to metrics', 'recording rule', 'PromQL recording', etc. This provides excellent keyword coverage.

Distinctiveness / Conflict Risk: 3 / 3

The description is clearly scoped to Coralogix data processing, enrichment, and transformation features. The specific platform name 'Coralogix' combined with domain-specific terms like 'E2M definition', 'PromQL recording', and 'enrichment tables' make it highly distinctive and unlikely to conflict with other skills.
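For context on the 'PromQL recording' trigger above, a recording rule in standard Prometheus configuration looks like the following. This is the generic upstream format; Coralogix's own recording-rule payload may be shaped differently:

```yaml
groups:
  - name: request-rates
    interval: 60s
    rules:
      # Precompute the per-job 5-minute request rate under a new metric name
      - record: job:http_requests:rate5m
        expr: sum by (job) (rate(http_requests_total[5m]))
```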

Total: 10 / 12 (Passed)

Implementation: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, actionable skill that covers four related data pipeline operations with clear workflows, concrete CLI commands, and verification steps. Its main weaknesses are moderate verbosity (the template pattern is repeated across all workflows, and some guidance is unnecessary) and the lack of supporting bundle files to offload detailed JSON payload schemas. Workflow clarity is strong, with consistent verify-after-create patterns and limit-checking steps.

Suggestions

Consolidate the 'template from existing' pattern into the 'Working with JSON Payloads' section once and reference it from each workflow instead of repeating the get/edit/create steps in every section.

Remove obvious guidance like 'Decide the metric name, labels, and aggregation type before creating' — Claude can infer design-before-implementation without being told.

Dimension scores:

Conciseness: 2 / 3

Generally efficient with good use of tables and code blocks, but some sections include guidance Claude already knows (e.g., 'Decide the metric name, labels, and aggregation type before creating'), and the 'Key Principles' section partially restates what was already shown in the workflows. The 'Working with JSON Payloads' section repeats the template pattern that is then shown again in every workflow.

Actionability: 3 / 3

Provides concrete, executable CLI commands throughout with specific flags, piped jq transformations, and copy-paste ready examples. Each workflow has real commands with clear syntax, and the CLI command table gives a comprehensive reference.
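The 'template from existing' pattern those jq pipelines support can be sketched as follows. The payload shape here is a made-up stand-in, not Coralogix's actual rule schema:

```shell
# Fetch-edit-recreate: strip server-assigned fields from an existing
# payload, rename it, and use the result as the body for a create call.
echo '{"id":"abc123","name":"parse-nginx","matcher":{"applications":["web"]}}' \
  | jq 'del(.id) | .name = "parse-nginx-copy"' \
  > new-rule.json
cat new-rule.json
```

The same del/rename pipeline works regardless of which resource type (rule group, enrichment table, E2M definition) the payload came from.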

Workflow Clarity: 3 / 3

All four workflows are clearly sequenced with numbered steps, include verification/validation steps (query logs to confirm, check limits before creating), and provide feedback guidance (e.g., 'Avoid querying archive for verification - ingestion delays can cause false negatives'). The bulk-delete operation is mentioned with proper syntax. Each workflow ends with a verification step.

Progressive Disclosure: 2 / 3

The content is well-structured with clear sections and a summary table, but it is a fairly long monolithic file (~150 lines of content) with no bundle files to offload detail into. The JSON payload structures for create/update operations could be split into reference files. The cross-reference to cx-telemetry-querying is well-signaled, but the skill doesn't leverage any supporting files.

Total: 10 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 10 / 11 passed

Validation for skill structure

Criteria: frontmatter_unknown_keys
Description: Unknown frontmatter key(s) found; consider removing or moving to metadata
Result: Warning
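Assuming the spec accepts arbitrary keys only under a metadata block (as the warning's own wording suggests), the fix is to nest the unknown key. The author key below is a hypothetical example, not the actual offending key:

```yaml
# Before: unknown top-level frontmatter key triggers the warning
# author: coralogix

# After: nested under metadata, which accepts arbitrary keys
metadata:
  author: coralogix
```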

Total: 10 / 11 (Passed)

Repository: coralogix/cx-cli (Reviewed)
