Azure AI Projects SDK for .NET. High-level client for Azure AI Foundry projects including agents, connections, datasets, deployments, evaluations, and indexes.
Overall score: 66%

Evals: Pending (no eval scenarios have been run). Advisory: suggest reviewing before use.

Optimize this skill with Tessl: `npx tessl skill review --optimize ./skills/antigravity-azure-ai-projects-dotnet/SKILL.md`

Quality
Discovery: 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is strong in specificity and distinctiveness, clearly identifying the SDK, platform, and concrete capabilities. Its main weakness is the lack of an explicit 'Use when...' clause, which would help Claude know exactly when to select this skill. The trigger terms are adequate but could benefit from common user-facing variations like 'C#' or package names.
Suggestions
- Add a 'Use when...' clause such as 'Use when the user is working with Azure AI Foundry projects in .NET/C#, or asks about Azure AI agents, connections, datasets, deployments, evaluations, or indexes.'
- Include common user-facing trigger variations like 'C#', 'Azure.AI.Projects', 'NuGet package', or 'dotnet' to improve keyword coverage.
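As a sketch of the first suggestion, the skill's frontmatter description could be extended along these lines. The exact field names and wording are illustrative, not the skill's actual frontmatter:

```yaml
---
name: antigravity-azure-ai-projects-dotnet
description: >
  Azure AI Projects SDK for .NET. High-level client for Azure AI Foundry
  projects including agents, connections, datasets, deployments, evaluations,
  and indexes. Use when the user is working with Azure AI Foundry projects
  in .NET/C#, or asks about the Azure.AI.Projects NuGet package, Azure AI
  agents, connections, datasets, deployments, evaluations, or indexes.
---
```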
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete capabilities: agents, connections, datasets, deployments, evaluations, and indexes. Also specifies the SDK name, platform (.NET), and that it's a high-level client for Azure AI Foundry projects. | 3 / 3 |
| Completeness | Clearly answers 'what does this do' (high-level client for Azure AI Foundry projects with specific sub-capabilities), but lacks an explicit 'Use when...' clause or equivalent trigger guidance, which caps this at 2 per the rubric. | 2 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'Azure AI', '.NET', 'agents', 'deployments', 'evaluations', but misses common user variations like 'C#', 'NuGet', 'Azure.AI.Projects', or file extensions. The terms are somewhat technical but appropriate for the domain. | 2 / 3 |
| Distinctiveness / Conflict Risk | Very specific niche: Azure AI Foundry projects SDK for .NET. The combination of 'Azure AI Projects', '.NET', and the specific sub-domains (agents, connections, datasets, etc.) makes it clearly distinguishable from other skills and unlikely to conflict. | 3 / 3 |
| Total | | 10 / 12 (Passed) |
Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid SDK reference skill with excellent actionability — nearly every section has executable C# code examples covering the full breadth of the Azure.AI.Projects SDK. The main weaknesses are moderate verbosity (boilerplate sections, some obvious best practices) and missing validation checkpoints in several workflows, particularly for dataset uploads, index creation, and evaluation runs. The document would benefit from splitting detailed reference content into separate files given its length.
Suggestions
- Add validation/verification steps after dataset uploads, index creation, and evaluation runs (e.g., check status, confirm the resource exists) to improve workflow reliability.
- Remove the boilerplate 'When to Use' and 'Limitations' sections, which add no SDK-specific value, and trim obvious best practices like 'use async for I/O'.
- Consider splitting the Key Types Reference table and Agent Tools table into a separate REFERENCE.md file, keeping SKILL.md as a concise overview with links to details.
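The validation-checkpoint pattern the first suggestion asks for is SDK-agnostic: after a create/upload call, re-fetch or poll the resource until it reports a terminal state before using it downstream. A minimal generic sketch in C# (no Azure types are used; `fetchStatus` is a caller-supplied delegate, and the status strings are illustrative, not confirmed Azure.AI.Projects values):

```csharp
using System;
using System.Threading.Tasks;

static class WorkflowValidation
{
    // Polls a caller-supplied status check until the resource reports a
    // terminal state, throwing if it never does. This is the kind of
    // checkpoint the review suggests adding after dataset uploads, index
    // creation, and evaluation runs.
    public static async Task<string> WaitForTerminalStatusAsync(
        Func<Task<string>> fetchStatus,
        TimeSpan pollInterval,
        int maxAttempts = 30)
    {
        for (int attempt = 0; attempt < maxAttempts; attempt++)
        {
            string status = await fetchStatus();
            if (status is "succeeded" or "failed" or "canceled")
                return status;
            await Task.Delay(pollInterval);
        }
        throw new TimeoutException("Resource did not reach a terminal status.");
    }
}
```

In a skill workflow, the delegate would wrap whatever get-status call the SDK exposes for the resource just created, so the next step (e.g., reading evaluation results) only runs once the state is terminal.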
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is fairly comprehensive but includes some unnecessary sections like 'When to Use' and 'Limitations' which are boilerplate and add no value. The Best Practices section contains some obvious advice (use async for I/O). The client hierarchy diagram and reference tables are efficient, but overall the document is longer than needed for an SDK reference skill. | 2 / 3 |
| Actionability | Every workflow section contains fully executable, copy-paste ready C# code with proper using statements, async patterns, and complete examples including cleanup. The installation commands, environment variables, and authentication setup are all concrete and specific. | 3 / 3 |
| Workflow Clarity | The agent workflow (Section 1) includes a clear polling loop and cleanup steps, which is good. However, most other workflows (datasets, indexes, evaluations) lack validation checkpoints; for example, after uploading a dataset or creating an index, there's no verification step to confirm success. The evaluation workflow doesn't check for completion status before reading results. | 2 / 3 |
| Progressive Disclosure | The content is well-structured with clear sections and a logical hierarchy, but it's essentially a monolithic document. The Reference Links section points to external resources, but there's no splitting of detailed content (like the full agent tools table or key types reference) into separate files. For a skill this long (~300 lines), some content could be offloaded. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 10 / 11 Passed
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |