
agent-framework-azure-ai-py

Build persistent agents on Azure AI Foundry using the Microsoft Agent Framework Python SDK.

- Quality: 56% (Does it follow best practices?)
- Impact: Pending (No eval scenarios have been run)
- Security (by Snyk): Advisory (Suggest reviewing before use)

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/antigravity-agent-framework-azure-ai-py/SKILL.md

Quality

Discovery: 40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a clear and distinctive niche (Azure AI Foundry + Microsoft Agent Framework Python SDK), which minimizes conflict risk. However, it lacks a 'Use when...' clause entirely and provides only a high-level action ('Build persistent agents') without enumerating specific capabilities, significantly weakening its completeness and specificity.

Suggestions

Add an explicit 'Use when...' clause with trigger terms like 'Use when the user asks about building agents on Azure AI Foundry, creating persistent AI agents with Microsoft Agent Framework, or using the azure-ai-projects Python SDK.'

List specific concrete actions the skill covers, such as 'create agent threads, configure tools (code interpreter, file search, Bing grounding), manage agent runs, and handle streaming responses.'

Include common user-facing keyword variations like 'Azure agents', 'AI agent SDK', 'azure-ai-projects', and 'foundry agents' to improve trigger term coverage.
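Folding these suggestions together, the revised frontmatter might look like the following sketch (it assumes the standard `name`/`description` frontmatter fields; the exact wording is illustrative):

```yaml
---
name: agent-framework-azure-ai-py
description: >
  Build persistent agents on Azure AI Foundry using the Microsoft Agent
  Framework Python SDK. Covers creating agent threads, configuring tools
  (code interpreter, file search, Bing grounding), managing agent runs,
  and handling streaming responses. Use when the user asks about building
  agents on Azure AI Foundry, creating persistent AI agents with Microsoft
  Agent Framework, or using the azure-ai-projects Python SDK.
---
```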

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (Azure AI Foundry, Microsoft Agent Framework Python SDK) and a general action ('Build persistent agents'), but does not list multiple specific concrete actions like configuring tools, managing agent state, or deploying endpoints. | 2 / 3 |
| Completeness | Describes what the skill does ('Build persistent agents on Azure AI Foundry') but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and since the 'what' is also only partially described, this scores a 1. | 1 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'Azure AI Foundry', 'Microsoft Agent Framework', 'Python SDK', and 'persistent agents', but misses common user variations such as 'Azure agents', 'AI agent', 'foundry agent', and 'azure-ai-agent', and does not mention specific capabilities like tool use, file search, or code interpreter. | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of 'Azure AI Foundry', 'Microsoft Agent Framework', and 'Python SDK' creates a very specific niche that is unlikely to conflict with other skills, and is clearly distinguishable from general coding skills or other cloud-platform skills. | 3 / 3 |
| Total | | 8 / 12 (Passed) |

Implementation: 72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured, highly actionable skill with excellent executable code examples covering the full range of Azure AI Agent Framework capabilities. Its main weaknesses are redundancy (the complete example repeats patterns demonstrated earlier) and the absence of error-handling and validation guidance for cloud-service operations. The boilerplate Limitations and When to Use sections spend tokens without adding value.

Suggestions

Add error handling guidance for common failure modes (authentication errors, agent creation failures, network issues) with explicit validation checkpoints.

Remove or significantly trim the 'Complete Example' section since it largely repeats patterns already demonstrated, or replace it with a genuinely novel integration pattern.

Remove the boilerplate 'When to Use' and 'Limitations' sections — they add no actionable information and waste tokens.
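One way the skill could act on the error-handling suggestion is sketched below. This is not Agent Framework API: `create_agent`, `TransientServiceError`, and the retry parameters are all placeholders; only the retry-with-backoff and explicit-validation pattern is the point.

```python
import asyncio

class TransientServiceError(Exception):
    """Stand-in for a retryable cloud-service error (throttling, network blip)."""

async def create_agent_with_retry(create_agent, attempts=3, base_delay=0.01):
    """Retry an agent-creation coroutine with exponential backoff.

    Validates the result at each attempt and re-raises once retries
    are exhausted, so failures surface instead of passing silently.
    """
    for attempt in range(1, attempts + 1):
        try:
            agent = await create_agent()
            if agent is None:
                # Explicit validation checkpoint: fail loudly, not silently.
                raise RuntimeError("agent creation returned no agent")
            return agent
        except TransientServiceError:
            if attempt == attempts:
                raise  # out of retries: surface the error to the caller
            await asyncio.sleep(base_delay * 2 ** (attempt - 1))
```

Authentication errors, by contrast, are usually not retryable and should be caught separately and reported with a pointer at credential setup.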

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Mostly efficient with good code examples, but somewhat verbose: the complete example at the end largely repeats patterns already shown in earlier sections, and the Limitations/When to Use sections are boilerplate that add no value. Some redundancy could be trimmed. | 2 / 3 |
| Actionability | Every section provides fully executable, copy-paste-ready Python code with correct imports, async patterns, and concrete examples covering basic agents, function tools, hosted tools, streaming, threads, and structured outputs. | 3 / 3 |
| Workflow Clarity | Individual patterns are shown clearly, but there are no explicit validation checkpoints: no guidance on verifying that agent creation succeeded, handling authentication failures, or error recovery. For a workflow involving cloud-service interactions, missing error handling/validation caps this at 2. | 2 / 3 |
| Progressive Disclosure | Provides a clear overview with well-organized sections progressing from basic to advanced, and references four specific files (tools.md, mcp.md, threads.md, advanced.md) for deeper content, all one level deep and clearly signaled. | 3 / 3 |
| Total | | 10 / 12 (Passed) |

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 10 / 11 passed

Validation for skill structure:

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing them or moving them to metadata | Warning |
| Total | | 10 / 11 (Passed) |
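A sketch of the fix the warning asks for, assuming the validator permits arbitrary keys only under a `metadata` map; the `license` key is purely illustrative, since the report does not name the offending keys:

```yaml
---
name: agent-framework-azure-ai-py
description: Build persistent agents on Azure AI Foundry using the Microsoft Agent Framework Python SDK.
metadata:
  license: MIT  # illustrative: non-standard keys move under `metadata`
---
```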

Repository: boisenoise/skills-collections (Reviewed)
