
azure-diagnostics

Debug and troubleshoot production issues on Azure. Covers Container Apps and Function Apps diagnostics, log analysis with KQL, health checks, and common issue resolution for image pulls, cold starts, health probes, and function invocation failures. WHEN: debug production issues, troubleshoot container apps, troubleshoot function apps, troubleshoot Azure Functions, analyze logs with KQL, fix image pull failures, resolve cold start issues, investigate health probe failures, check resource health, view application logs, find root cause of errors, function app not working, function invocation failures.

83

- Quality: 78% (Does it follow best practices?)
- Impact: Pending (no eval scenarios have been run)
- Security (by Snyk): Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./.github/skills/azure-diagnostics/SKILL.md

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly defines its scope (Azure production debugging for Container Apps and Function Apps), lists specific capabilities (KQL log analysis, health checks, common issue resolution), and provides an explicit WHEN clause with comprehensive trigger terms. The description is well-structured, uses third person voice, and covers both natural user language and technical terms effectively.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific, concrete actions: Container Apps and Function Apps diagnostics, log analysis with KQL, health checks, and resolution for image pulls, cold starts, health probes, and function invocation failures. | 3 / 3 |
| Completeness | Clearly answers both 'what' (debug/troubleshoot Azure production issues across Container Apps, Function Apps, KQL log analysis, health checks, common issue resolution) and 'when', with an explicit 'WHEN:' clause listing numerous trigger scenarios. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: 'debug production issues', 'troubleshoot container apps', 'troubleshoot function apps', 'analyze logs with KQL', 'fix image pull failures', 'resolve cold start issues', 'function app not working', 'function invocation failures'. These are highly natural phrases a user would type. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive, with a clear niche: Azure-specific production debugging for Container Apps and Function Apps. The combination of Azure, KQL, Container Apps, Function Apps, and specific failure types (image pulls, cold starts, health probes) makes it very unlikely to conflict with other skills. | 3 / 3 |
| Total | | 12 / 12 |

Passed

Implementation: 57%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides a solid organizational structure with good progressive disclosure to reference materials, but the core diagnostic content is somewhat abstract and lacks the concrete decision trees and validation loops needed for effective production troubleshooting. The MCP tool examples use an ambiguous format, and there's redundancy between the Triggers/Rules sections and the duplicated reference listings.

Suggestions

- Replace the abstract Quick Diagnosis Flow with concrete decision trees, e.g., 'If container app returns 503 → check health probes first → run `az containerapp show ...` → if ingress misconfigured, then...'
- Use proper executable syntax for MCP tool invocations (JSON or an actual function-call format) instead of the current pseudo-format.
- Remove the duplicate reference listings: keep either the References section or the Reference Index table, not both.
- Remove the 'Triggers' and 'Rules' sections (or fold key rules into the workflow), since they add tokens without actionable value.
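As a sketch of what the first suggestion means in practice, a decision tree can be expressed as a small triage helper that maps an observed symptom to the next diagnostic command. The branching logic and the `<app>`/`<rg>` placeholders below are illustrative assumptions, not the skill's actual flow:

```shell
#!/bin/sh
# Illustrative triage sketch: map an observed HTTP status from a Container App
# to the next diagnostic command to run. The branches are an example of the
# "if X then do Y" structure the review asks for, not the skill's real logic.
next_step() {
  case "$1" in
    503)
      # Likely ingress or health-probe misconfiguration
      echo "az containerapp show -n <app> -g <rg> --query properties.configuration.ingress"
      ;;
    500)
      # Application error: pull recent console logs
      echo "az containerapp logs show -n <app> -g <rg> --type console --tail 100"
      ;;
    *)
      # Unknown symptom: start from revision state
      echo "az containerapp revision list -n <app> -g <rg> -o table"
      ;;
  esac
}

next_step 503
```

Each branch ends in a concrete, copy-paste-ready command, which is what separates a decision tree from the abstract "What's failing?" prompts the review criticizes.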

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill has some unnecessary sections like 'Triggers' and 'Rules' that restate what Claude can infer. The 'AUTHORITATIVE GUIDANCE — MANDATORY COMPLIANCE' banner is verbose filler. The reference index table at the bottom largely duplicates the References section above it. However, the core diagnostic content is reasonably lean. | 2 / 3 |
| Actionability | Provides concrete CLI commands and MCP tool invocations, but the MCP examples use a pseudo-format that isn't clearly executable (no JSON or actual function-call syntax). The Quick Diagnosis Flow is abstract ('What's failing?', 'What do logs show?') rather than providing specific actionable steps. The CLI commands use placeholders but are mostly copy-paste ready. | 2 / 3 |
| Workflow Clarity | The Quick Diagnosis Flow provides a sequence but lacks validation checkpoints and feedback loops. There's no explicit 'if X then do Y' branching logic for different failure modes. For a troubleshooting skill involving production systems, the absence of verification steps (e.g., confirming a fix worked) and error recovery loops caps this at 2. | 2 / 3 |
| Progressive Disclosure | Excellent progressive disclosure with a clear overview in the main file, a well-organized table mapping services to reference files, and a reference index with explicit 'when to load' guidance. References are one level deep and clearly signaled with both a narrative list and a structured table. | 3 / 3 |
| Total | | 9 / 12 |

Passed
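As a sketch of the "proper executable syntax" the Actionability review asks for: MCP tool invocations are JSON-RPC 2.0 `tools/call` requests, so the skill's pseudo-format could be replaced with something like the following. The tool name, workspace ID, and query here are hypothetical placeholders, not tools the skill actually defines:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "query_log_analytics",
    "arguments": {
      "workspaceId": "<workspace-id>",
      "query": "ContainerAppConsoleLogs_CL | where TimeGenerated > ago(1h) | take 50"
    }
  }
}
```

Writing the examples in this shape makes them directly checkable against an MCP server's tool schema instead of leaving the agent to guess the call format.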

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: jonathan-vella/azure-agentic-infraops (reviewed)
