Azure AI Projects SDK for .NET. High-level client for Azure AI Foundry projects including agents, connections, datasets, deployments, evaluations, and indexes.
Overall quality: 66%. Does it follow best practices?
Impact: 100% (1.81x average score across 3 eval scenarios)
Advisory: Suggest reviewing before use.

Optimize this skill with Tessl:
npx tessl skill review --optimize ./skills/antigravity-azure-ai-projects-dotnet/SKILL.md

Quality
Discovery
67%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is strong in specificity and distinctiveness, clearly identifying the SDK, platform, and concrete capabilities. However, it lacks an explicit 'Use when...' clause, which is critical for Claude to know when to select this skill. Adding natural trigger terms and user-facing language would improve selection accuracy.
Suggestions
Add a 'Use when...' clause, e.g., 'Use when the user needs to interact with Azure AI Foundry projects in .NET, including creating agents, managing deployments, or running evaluations.'
Include common user-facing trigger terms like 'C#', 'Azure.AI.Projects', 'NuGet package', or 'AI Foundry SDK' that users might naturally mention.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete capabilities: agents, connections, datasets, deployments, evaluations, and indexes. Also specifies the SDK name, the platform (.NET), and that it is a high-level client for Azure AI Foundry projects. | 3 / 3 |
| Completeness | Clearly answers "what does this do" (high-level client for Azure AI Foundry projects with specific sub-capabilities), but lacks an explicit "Use when..." clause or equivalent trigger guidance, which caps this at 2 per the rubric. | 2 / 3 |
| Trigger Term Quality | Includes relevant keywords like "Azure AI", ".NET", "agents", "deployments", and "evaluations", but misses common user variations like "C#", "NuGet", "Azure.AI.Projects", or specific use cases users might describe naturally. | 2 / 3 |
| Distinctiveness / Conflict Risk | Very specific niche: Azure AI Projects SDK for .NET. The combination of Azure AI Foundry, .NET, and the specific feature list (agents, connections, datasets, deployments, evaluations, indexes) makes it clearly distinguishable from other skills. | 3 / 3 |
| Total | | 10 / 12 (Passed) |
Implementation
64%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid SDK reference skill with excellent actionability — every workflow has complete, executable C# code examples covering the full operation lifecycle. However, it's overly monolithic for its breadth (8 distinct workflows), lacks integrated validation/error recovery in multi-step processes, and includes some redundant content. Trimming unnecessary sections and splitting detailed workflows into referenced files would significantly improve it.
Suggestions
Integrate error handling and validation directly into multi-step workflows (especially the agent polling loop) rather than having a separate generic error handling section.
Split the 8 detailed workflow sections into a separate WORKFLOWS.md or individual files, keeping only the most common 1-2 workflows inline with links to the rest.
Remove the 'When to Use' section (adds no value) and the 'Related SDKs' table (duplicates installation section info).
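The first suggestion, integrating error recovery directly into the agent polling loop, could look roughly like the sketch below. Note that the run states and the status callback here are illustrative stand-ins, not verified Azure.AI.Projects types; the point is the shape of a polling loop that retries transient failures and enforces a timeout inline, rather than deferring to a separate generic error handling section.

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical run states, standing in for the SDK's agent run status enum.
enum RunStatus { Queued, InProgress, Completed, Failed }

static class PollingExample
{
    // Poll until the run reaches a terminal state, retrying transient
    // failures inline and failing fast once the deadline passes.
    static async Task<RunStatus> PollRunAsync(
        Func<Task<RunStatus>> getStatus,
        TimeSpan timeout,
        TimeSpan interval)
    {
        var deadline = DateTime.UtcNow + timeout;
        while (DateTime.UtcNow < deadline)
        {
            RunStatus status;
            try
            {
                status = await getStatus();
            }
            catch (Exception ex) // transient failure: log, back off, retry
            {
                Console.Error.WriteLine($"Poll failed, retrying: {ex.Message}");
                await Task.Delay(interval);
                continue;
            }

            if (status is RunStatus.Completed or RunStatus.Failed)
                return status;

            await Task.Delay(interval);
        }
        throw new TimeoutException("Agent run did not finish in time.");
    }

    static async Task Main()
    {
        int calls = 0;
        // Simulated status source: "in progress" twice, then completed.
        var result = await PollRunAsync(
            () => Task.FromResult(++calls < 3 ? RunStatus.InProgress : RunStatus.Completed),
            timeout: TimeSpan.FromSeconds(5),
            interval: TimeSpan.FromMilliseconds(10));
        Console.WriteLine(result); // Completed
    }
}
```

A loop like this keeps the validation checkpoint (terminal-state check) and the recovery path (catch-and-retry) at the exact step where failures occur, which is what the workflow clarity critique below is asking for.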
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is fairly comprehensive but includes some unnecessary sections, like "When to Use" (a meaningless tautology) and the "Related SDKs" table that duplicates installation info, and the Best Practices section contains some advice Claude already knows (like using async for I/O). The reference tables and client hierarchy are efficient, but overall it could be tightened. | 2 / 3 |
| Actionability | All code examples are fully executable C# with proper using statements, concrete method calls, and complete workflows from client creation through cleanup. Each workflow section provides copy-paste-ready code covering the full lifecycle of operations. | 3 / 3 |
| Workflow Clarity | The agent workflow includes polling and cleanup steps, which is good, but there are no explicit validation checkpoints or error recovery loops in the multi-step workflows. The error handling section is separate and generic rather than integrated into workflows where failures are likely (e.g., after creating resources or polling for completion). | 2 / 3 |
| Progressive Disclosure | The content is well structured with clear sections and a logical hierarchy, but it is quite long (~300 lines) with all content inline. The reference tables at the bottom and the detailed code for 8 different workflows could benefit from being split into separate files, with the SKILL.md serving as an overview that links to detailed workflow files. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
Validation
90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure (10 / 11 passed)
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 (Passed) |