azure-governance-discovery

Deterministic Azure Policy discovery: lists effective policy assignments at subscription scope (including MG-inherited), pulls definitions and exemptions, classifies effects, filters Defender auto-assignments, and emits the governance-constraints JSON envelope via a Python script. USE FOR: 04g-Governance Phase 1 discovery, refreshing `04-governance-constraints.json`. DO NOT USE FOR: artifact writing, architecture mapping, traffic-light rendering, challenger orchestration — those stay in the parent agent.
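The "classifies effects" step named above can be illustrated with a minimal sketch. This is a hypothetical illustration, not the skill's actual code: the bucket names (blocking/mutating/advisory) are assumptions, while the effect strings themselves are the standard Azure Policy effects.

```python
# Hypothetical sketch of an effect-classification step, assuming coarse
# severity buckets; the effect strings are standard Azure Policy effects.
BLOCKING = {"deny", "denyaction"}
MUTATING = {"modify", "append", "deployifnotexists"}
ADVISORY = {"audit", "auditifnotexists", "manual"}

def classify_effect(effect: str) -> str:
    """Map a policy effect string to a coarse severity bucket."""
    e = effect.strip().lower()
    if e in BLOCKING:
        return "blocking"
    if e in MUTATING:
        return "mutating"
    if e in ADVISORY:
        return "advisory"
    if e == "disabled":
        return "inactive"
    return "unknown"

print(classify_effect("Deny"))               # blocking
print(classify_effect("DeployIfNotExists"))  # mutating
```

Case-insensitive matching matters here because Azure returns effect strings with mixed capitalization depending on how the definition was authored.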

Score: 83

Quality: 78% (Does it follow best practices?)

Impact: Pending (no eval scenarios have been run)

Security (by Snyk): Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./.github/skills/azure-governance-discovery/SKILL.md

Quality

Discovery: 85%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong, well-structured skill description that clearly defines specific capabilities, explicit trigger conditions, and explicit exclusions. Its main weakness is that the trigger terms are heavily technical and workflow-specific (e.g., '04g-Governance Phase 1 discovery'), which may not match natural user language but could be appropriate for an internal/automated workflow context. The DO NOT USE FOR clause is an excellent addition for reducing conflict risk.

Suggestions

Consider adding more natural-language trigger terms alongside the workflow codes, e.g., 'Azure compliance check', 'policy audit', or 'list Azure policies' to improve discoverability from varied user phrasings.

Dimension scores:

Specificity (3/3): Lists multiple specific, concrete actions: lists effective policy assignments at subscription scope, pulls definitions and exemptions, classifies effects, filters Defender auto-assignments, and emits a governance-constraints JSON envelope via a Python script.

Completeness (3/3): Clearly answers both 'what' (lists policy assignments, pulls definitions/exemptions, classifies effects, filters Defender auto-assignments, emits JSON) and 'when' (USE FOR: 04g-Governance Phase 1 discovery, refreshing governance-constraints.json). Also includes explicit DO NOT USE FOR guidance, which further clarifies boundaries.

Trigger Term Quality (2/3): Includes relevant domain terms like 'Azure Policy', 'policy assignments', 'subscription scope', 'Defender', and 'governance-constraints JSON', but these are fairly technical and jargon-heavy. Missing more natural user-facing terms a user might say (e.g., 'compliance', 'policy audit', 'Azure governance'). The trigger terms are more implementation-specific than user-query-oriented.

Distinctiveness / Conflict Risk (3/3): Highly distinctive, with a clear niche (Azure Policy discovery at subscription scope) and explicit boundary-setting via the DO NOT USE FOR clause that distinguishes it from artifact writing, architecture mapping, traffic-light rendering, and challenger orchestration. Very unlikely to conflict with other skills.

Total: 11 / 12 (Passed)

Implementation: 72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured skill with strong actionability: concrete CLI usage, exit codes, output format, and test commands are all clearly documented. Progressive disclosure is handled well, with appropriate references to deeper documentation. The main weaknesses are verbosity in the output contract section (which could live in the referenced schema.md) and the absence of explicit error-recovery steps for partial or failed discovery runs.
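To make the "output contract" concrete, here is a hedged sketch of what an envelope builder could look like. The field names (`kind`, `findings`, `warnings`, `status`) are assumptions for illustration only; the authoritative contract lives in the skill's references/schema.md.

```python
import json

def build_envelope(findings, warnings=None):
    # Hypothetical envelope shape; the real field list is defined in
    # references/schema.md, which this sketch does not claim to follow.
    envelope = {
        "kind": "governance-constraints",
        "findings": findings,
        "warnings": warnings or [],
        "status": "partial" if warnings else "ok",
    }
    return json.dumps(envelope, indent=2)

print(build_envelope([{"policy": "example-assignment", "effect": "deny"}]))
```

Keeping the envelope a single JSON document on stdout is what lets the parent agent consume it deterministically.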

Suggestions

Add an explicit workflow sequence showing what the parent agent should do for each exit code (e.g., 'If exit 1: read status JSON, surface warnings to user, decide whether to proceed')

Move the detailed Output Contract section (findings fields, envelope structure) to references/schema.md and keep only a brief summary in the main skill to improve conciseness
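The first suggestion could take a shape like the following sketch. The exit-code meanings (0 success, 1 partial discovery, 2 error) come from the review text itself; the function name and the returned action strings are hypothetical.

```python
def next_step(exit_code: int) -> str:
    # Hypothetical dispatcher for the parent agent; exit-code semantics
    # (0 = success, 1 = partial discovery, 2 = error) follow the review.
    if exit_code == 0:
        return "proceed: consume the emitted governance-constraints JSON"
    if exit_code == 1:
        return "read status JSON, surface warnings, ask whether to proceed"
    return "abort: report the error and do not refresh the artifact"

print(next_step(1))
```

Spelling this out as an explicit table or function removes the ambiguity the Workflow Clarity dimension flags below.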

Dimension scores:

Conciseness (2/3): The content is mostly efficient and avoids explaining basic concepts, but some sections, such as Design Notes and the detailed Output Contract, could be trimmed or moved to reference files. The explanation of what the parent agent does is repeated across multiple sections.

Actionability (3/3): Provides a fully executable CLI command with clear flag documentation, exit codes, expected stdout JSON output, and test commands. Everything is copy-paste ready, with specific paths and concrete examples.

Workflow Clarity (2/3): The skill describes a single-command invocation clearly, but the workflow for handling partial failures (exit code 1) or errors (exit code 2) lacks explicit recovery steps or feedback loops. The relationship between running the script, checking status, and proceeding is implied but not sequenced with validation checkpoints.

Progressive Disclosure (3/3): Clear overview in the main file with well-signaled, one-level-deep references to `references/effect-classification.md`, `references/schema.md`, and the schema JSON file. Content is appropriately split between the main skill and reference documents.

Total: 10 / 12 (Passed)

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: jonathan-vella/azure-agentic-infraops (Reviewed)
