azure-kusto

Query and analyze data in Azure Data Explorer (Kusto/ADX) using KQL for log analytics, telemetry, and time series analysis. WHEN: KQL queries, Kusto database queries, Azure Data Explorer, ADX clusters, log analytics, time series data, IoT telemetry, anomaly detection.

Quality

66%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

SecuritybySnyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./.github/skills/azure-kusto/SKILL.md

Quality

Discovery

89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a solid skill description with excellent trigger term coverage and a clear WHEN clause that makes it easy for Claude to select appropriately. The main weakness is that the 'what' portion could be more specific about concrete actions beyond 'query and analyze data.' The distinctiveness is excellent given the specificity of the technology domain.

Suggestions

Expand the capability description with more specific actions, e.g., 'Write and optimize KQL queries, build summarize/render pipelines, perform joins across tables, detect anomalies in time series data.'

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (Azure Data Explorer/Kusto/KQL) and some actions ('query and analyze data'), but doesn't list multiple specific concrete actions like creating queries, joining tables, building dashboards, or specific KQL operations. | 2 / 3 |
| Completeness | Clearly answers both 'what' (query and analyze data in Azure Data Explorer using KQL for log analytics, telemetry, and time series analysis) and 'when' (explicit WHEN clause listing trigger scenarios like KQL queries, Kusto database queries, ADX clusters, etc.). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would say: 'KQL queries', 'Kusto database', 'Azure Data Explorer', 'ADX clusters', 'log analytics', 'time series data', 'IoT telemetry', 'anomaly detection'. These cover the main variations a user might use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive: Azure Data Explorer, KQL, Kusto, and ADX are very specific technologies unlikely to conflict with other skills. The combination of these terms creates a clear, unique niche. | 3 / 3 |
| Total | | 11 / 12 |

Passed

Implementation

42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill has good structural organization and progressive disclosure with well-signaled references, but suffers from significant verbosity—explaining concepts Claude already knows (what ADX is, what use cases exist, activation triggers) and duplicating best practices across two sections. Actionability is weakened by deferring all concrete KQL examples to reference files while filling the main body with general descriptions and obvious advice.

Suggestions

Remove or drastically reduce the 'Skill Activation Triggers', 'Overview', 'Use Cases', 'Key Data Fields', and 'Result Format' sections—Claude already knows these concepts and they consume tokens without adding value.

Add 2-3 concrete, executable KQL query examples directly in the main skill body (e.g., a basic query, an aggregation, and a time series query) rather than deferring all examples to references.

Merge the two overlapping best practices sections ('KQL Best Practices' and 'Best Practices') into a single concise list of 4-5 critical items.

Add a validation checkpoint in the Core Workflow, such as checking row counts or verifying time range coverage before proceeding to analysis.
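As a sketch of what the suggested inline examples might look like (all table, column, and service names here are hypothetical placeholders, not from the skill itself), the three query types could be:

```kql
// Basic filtered query: recent error-level log lines
Logs
| where Timestamp > ago(1h)
| where Level == "Error"
| take 100

// Aggregation: error counts per service in 5-minute bins
Logs
| where Level == "Error"
| summarize ErrorCount = count() by bin(Timestamp, 5m), Service
| order by Timestamp asc

// Time series with anomaly detection over the last 7 days
Logs
| make-series Requests = count() on Timestamp from ago(7d) to now() step 1h
| extend (Anomalies, Score, Baseline) = series_decompose_anomalies(Requests)
| render anomalychart
```

Even two or three short blocks like these would make the main skill body self-sufficient for common cases, leaving the reference files for advanced patterns.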

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Significant verbosity throughout. The 'Skill Activation Triggers' section with 9 bullet examples and 8 key indicators is unnecessary — Claude can infer when to use the skill. The 'Overview' section explains what Azure Data Explorer is (Claude already knows). 'Best Practices' and 'KQL Best Practices' are largely redundant sections with overlapping content. 'Use Cases' lists obvious applications. 'Key Data Fields' and 'Result Format' sections describe things Claude already understands. | 1 / 3 |
| Actionability | The MCP tools table with parameters is concrete and useful. However, there are no executable KQL query examples in the main skill body — all examples are deferred to references/query-patterns.md. The best practices are general advice rather than specific, executable guidance. The tool parameters are listed but no example invocations are shown. | 2 / 3 |
| Workflow Clarity | The 'Core Workflow' provides a 4-step sequence (Discover → Explore → Query → Analyze) but lacks validation checkpoints. There's no guidance on verifying query results, handling partial failures, or feedback loops for query optimization. The fallback strategy is mentioned but deferred entirely to a reference file. | 2 / 3 |
| Progressive Disclosure | Good use of reference files with clear signaling — query-patterns.md and fallback-strategy.md are referenced with specific 'when to load' guidance. The reference index table at the bottom with 'Load these on demand — do NOT read all at once' is well-structured progressive disclosure. References are one level deep. | 3 / 3 |
| Total | | 8 / 12 |

Passed
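The validation checkpoint recommended for the Core Workflow could be as simple as a row-count and time-range sanity query run before deeper analysis (the table name below is a hypothetical placeholder):

```kql
// Sanity check: confirm the chosen time window actually contains data
// before building aggregations or time series on top of it
Logs
| where Timestamp between (ago(24h) .. now())
| summarize RowCount = count(), EarliestTs = min(Timestamp), LatestTs = max(Timestamp)
```

If `RowCount` is zero or the observed range doesn't cover the intended window, the workflow should widen the filter or fall back before proceeding, rather than analyzing an empty or truncated result set.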

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: jonathan-vella/azure-agentic-infraops (Reviewed)
