Build persistent agents on Azure AI Foundry using the Microsoft Agent Framework Python SDK.
56% — Does it follow best practices?

Impact: Pending. No eval scenarios have been run.
Advisory: Suggest reviewing before use.
Optimize this skill with Tessl:

    npx tessl skill review --optimize ./skills/antigravity-agent-framework-azure-ai-py/SKILL.md

Quality

Discovery: 40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear and distinctive niche (Azure AI Foundry + Microsoft Agent Framework Python SDK), which minimizes conflict risk. However, it lacks a 'Use when...' clause entirely and provides only a single high-level action rather than listing specific concrete capabilities, making it insufficient for Claude to reliably select this skill in a large skill library.
Suggestions:

- Add an explicit 'Use when...' clause with trigger scenarios, e.g., 'Use when the user asks about building AI agents on Azure, using the Microsoft Agent Framework, or working with the azure-ai-projects SDK.'
- List specific concrete actions the skill covers, such as 'create agents with tool integrations (code interpreter, file search, Bing grounding, Azure Functions), manage threads and messages, configure vector stores, and handle streaming responses.'
- Include common user-facing keyword variations like 'Azure agents', 'AI agent SDK', 'azure-ai-projects', and 'azure-ai-agents' to improve trigger term coverage.
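Taken together, the suggestions above might yield a frontmatter description along the lines of the sketch below. This is illustrative only: the key names follow common SKILL.md conventions, and the wording is an assumed rewrite, not the skill's actual metadata.

```yaml
name: antigravity-agent-framework-azure-ai-py
description: >
  Build persistent agents on Azure AI Foundry using the Microsoft Agent
  Framework Python SDK. Create agents with tool integrations (code
  interpreter, file search, Bing grounding, Azure Functions), manage
  threads and messages, configure vector stores, and handle streaming
  responses. Use when the user asks about building AI agents on Azure,
  the Microsoft Agent Framework, or the azure-ai-projects or
  azure-ai-agents SDKs.
```

Note how the single high-level action is expanded into concrete capabilities and an explicit 'Use when...' clause with trigger terms.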
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Azure AI Foundry, Microsoft Agent Framework Python SDK) and a general action ('Build persistent agents'), but does not list multiple specific concrete actions like configuring tools, managing agent state, deploying endpoints, etc. | 2 / 3 |
| Completeness | Describes what it does ('Build persistent agents on Azure AI Foundry') but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, a missing 'Use when...' clause caps completeness at 2, and since the 'what' is also thin, this scores a 1. | 1 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'Azure AI Foundry', 'Microsoft Agent Framework', 'Python SDK', and 'persistent agents', but misses common user variations such as 'Azure agents', 'AI agent', 'foundry agent', 'azure-ai-agent', or mentioning specific capabilities users might ask about. | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of 'Azure AI Foundry', 'Microsoft Agent Framework', and 'Python SDK' creates a very specific niche that is unlikely to conflict with other skills. This is clearly distinguishable from general coding, other cloud platforms, or other agent frameworks. | 3 / 3 |
| Total | | 8 / 12 (Passed) |
Implementation: 72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid, actionable skill with excellent executable code examples covering the full range of Azure AI Agent Framework capabilities. Its main weaknesses are redundancy between individual examples and the complete example section, and the lack of error handling/validation guidance for cloud service operations. The progressive disclosure structure is well done with clear references to supplementary files.
Suggestions:

- Remove or significantly trim the 'Complete Example' section, since it mostly repeats patterns already demonstrated in individual sections, or replace it with a genuinely novel scenario that combines tools in a way not shown above.
- Add error handling guidance and validation checkpoints (e.g., what to do when agent creation fails, how to verify the agent was created, how to handle credential/connection errors) to improve workflow clarity for these cloud-dependent operations.
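As a hedged illustration of the second suggestion, a validation checkpoint the skill could document might look like the sketch below. The helper name is hypothetical; `create_fn` stands in for a real SDK call such as `client.agents.create_agent(...)`, and `RuntimeError` stands in for the SDK's transient `azure.core.exceptions.HttpResponseError`.

```python
import time


def create_agent_checked(create_fn, retries=3, base_delay=1.0):
    """Call create_fn with retries on transient errors, then verify the result.

    create_fn is a stand-in for a real SDK call such as
    client.agents.create_agent(model=..., name=..., instructions=...).
    RuntimeError stands in for azure.core.exceptions.HttpResponseError.
    """
    last_err = None
    for attempt in range(retries):
        try:
            agent = create_fn()
        except RuntimeError as err:
            last_err = err
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
            continue
        # Validation checkpoint: confirm the service returned a usable agent.
        if getattr(agent, "id", None):
            return agent
        last_err = RuntimeError("agent creation returned no id")
    raise last_err
```

Documenting a pattern like this (retry, then verify the returned object before using it) would close the gap noted under Workflow Clarity without lengthening the individual examples much.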
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is mostly efficient with good code examples, but there's significant redundancy: the complete example at the end largely repeats patterns already shown in the individual sections (basic agent, function tools, streaming, threads, structured outputs). The architecture diagram and conventions section add value, but the overall length could be reduced by ~30% without losing information. | 2 / 3 |
| Actionability | Every section provides fully executable, copy-paste ready Python code with correct imports, async patterns, and concrete examples. The code covers installation, authentication, basic usage, function tools, hosted tools, streaming, threads, and structured outputs, all with specific, runnable examples. | 3 / 3 |
| Workflow Clarity | The individual code examples are clear and well-sequenced (install → configure → authenticate → create agent → run), but there are no explicit validation checkpoints or error handling guidance. For operations involving persistent cloud agents and external service calls, missing error recovery patterns and validation steps is a notable gap. | 2 / 3 |
| Progressive Disclosure | The skill has a clear overview structure with quick-start patterns inline and well-signaled one-level-deep references to detailed files (references/tools.md, references/mcp.md, references/threads.md, references/advanced.md). Content is appropriately split between the main skill and reference files. | 3 / 3 |
| Total | | 10 / 12 (Passed) |
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 10 / 11 Passed | |
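To clear the one warning, unknown top-level frontmatter keys can be moved under a `metadata` block, roughly as in the sketch below. The key names shown are illustrative assumptions, not the skill's actual keys; the spec's list of allowed top-level keys should be checked before applying this.

```yaml
name: antigravity-agent-framework-azure-ai-py
description: Build persistent agents on Azure AI Foundry ...
metadata:
  # illustrative: custom keys that were previously top-level go here
  maintainer: your-team
```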