Build AI applications on Microsoft Foundry using the azure-ai-projects SDK.
Score: 61
Does it follow best practices? 52%
Impact: Pending — no eval scenarios have been run
Advisory: suggest reviewing before use
Optimize this skill with Tessl: `npx tessl skill review --optimize ./skills/azure-ai-projects-py/SKILL.md`

Quality
Discovery — 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a specific platform and SDK but is too terse and lacks concrete actions and explicit trigger guidance. It would benefit greatly from listing specific capabilities (e.g., creating agents, managing connections, deploying models) and adding a 'Use when...' clause with natural trigger terms users would say.
Suggestions:

- Add a 'Use when...' clause with trigger terms like 'Azure AI Foundry', 'azure-ai-projects', 'AI agents on Azure', 'Azure AI Studio', 'Foundry SDK'.
- List specific concrete actions the skill covers, such as 'create AI agents, manage connections, configure deployments, set up evaluations, work with Azure OpenAI models'.
- Include common user-facing synonyms and variations, such as 'Azure AI Foundry', 'Azure AI Studio', and 'azure.ai.projects', to improve trigger-term coverage.
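A sketch of how the suggestions above could combine into a revised frontmatter description. The wording is illustrative only, not the skill's actual frontmatter:

```yaml
---
name: azure-ai-projects-py
description: >
  Build AI applications on Microsoft Foundry with the azure-ai-projects
  Python SDK: create AI agents, manage connections, configure model
  deployments, and set up evaluations. Use when working with Azure AI
  Foundry, Azure AI Studio, azure.ai.projects, or AI agents on Azure.
---
```

A description in this shape covers the 'what' (concrete actions), the 'when' (an explicit 'Use when...' clause), and the synonym variants users are likely to say.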
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (AI applications on Microsoft Foundry) and mentions a specific SDK (azure-ai-projects), but does not list concrete actions like 'create agents', 'deploy models', or 'manage datasets'. | 2 / 3 |
| Completeness | Provides a brief 'what' (build AI applications using the SDK) but lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when' caps completeness at 2, and the 'what' is also weak, so this scores a 1. | 1 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'Microsoft Foundry', 'azure-ai-projects SDK', and 'AI applications', but misses common variations users might say, such as 'Azure AI Foundry', 'Azure AI Studio', or 'foundry agent'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The mention of 'Microsoft Foundry' and the 'azure-ai-projects SDK' provides some distinctiveness, but 'Build AI applications' is broad enough to overlap with other Azure or AI development skills. | 2 / 3 |
| Total | | 7 / 12 — Passed |
Implementation — 72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured SDK skill with strong actionability through executable code examples and excellent progressive disclosure via clearly organized reference files. The main weaknesses are missing error handling/validation checkpoints in multi-step workflows (especially the thread/message flow and agent creation) and some unnecessary boilerplate content that could be trimmed.
Suggestions:

- Add error handling and validation to the Thread and Message Flow section — check for failed/expired run statuses and show recovery patterns (e.g., `if run.status == 'failed': print(run.last_error)`).
- Remove the generic 'When to Use' and 'Limitations' boilerplate sections at the bottom, which add no SDK-specific value and waste tokens.
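The run-status suggestion above can be sketched as a small helper. The status strings mirror those reported by azure-ai-projects agent runs ("completed", "failed", "expired", and so on), but the helper itself is a hypothetical illustration for the skill, not part of the SDK:

```python
# Statuses after which polling should stop with an error; assumed set,
# check the SDK's run model for the authoritative list.
TERMINAL_FAILURES = {"failed", "expired", "cancelled"}

def classify_run(status, last_error=None):
    """Map a run status to the next action for the calling workflow."""
    if status == "completed":
        # Safe to read the thread's messages.
        return "read_messages"
    if status in TERMINAL_FAILURES:
        # Surface the run's last_error so the failure is actionable.
        return f"abort: {last_error or 'unknown error'}"
    # queued / in_progress / requires_action: keep polling.
    return "keep_polling"
```

In the skill's polling loop, this would be called with `run.status` and `run.last_error` after each poll, replacing the current pattern of assuming the run always completes.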
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is mostly efficient with good code examples, but includes some unnecessary sections, like the boilerplate 'When to Use' and 'Limitations' sections that add no value, and the SDK Comparison table explains things Claude could infer. The Best Practices section is somewhat generic. However, most content is lean and code-focused. | 2 / 3 |
| Actionability | The skill provides fully executable, copy-paste-ready code examples throughout — authentication, agent creation, thread/message flow, connections, deployments, evaluation, async patterns, and memory stores. Environment variables and installation commands are concrete and specific. | 3 / 3 |
| Workflow Clarity | The Thread and Message Flow section provides a clear numbered sequence but lacks validation checkpoints — there is no error handling for failed runs, no guidance on what to do if `run.status` is not 'completed', and no cleanup/rollback steps for agent creation failures. For operations involving resource creation and external API calls, this is a gap. | 2 / 3 |
| Progressive Disclosure | Excellent progressive disclosure, with a clear overview in the main file and well-signaled, one-level-deep references to 11 specific reference files covering agents, tools, evaluation, connections, deployments, datasets, async patterns, and the API reference. The main file provides enough context to get started while pointing to detailed materials. | 3 / 3 |
| Total | | 10 / 12 — Passed |
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure:

| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing them or moving them to metadata | Warning |
| Total | | 10 / 11 — Passed |
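A minimal sketch of the fix for the `frontmatter_unknown_keys` warning, assuming the unknown key is a custom field such as `owner` (the key name here is hypothetical):

```yaml
---
name: azure-ai-projects-py
description: Build AI applications on Microsoft Foundry using the azure-ai-projects SDK.
# Unrecognized top-level keys trigger the warning; nest them under metadata instead.
metadata:
  owner: platform-team   # hypothetical custom key moved from the top level
---
```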