Azure AI Projects SDK for .NET. High-level client for Azure AI Foundry projects including agents, connections, datasets, deployments, evaluations, and indexes.
Install with Tessl CLI
npx tessl i github:boisenoise/skills-collections --skill azure-ai-projects-dotnet73
Quality: 60% — Does it follow best practices?
Impact: 100% — 1.81x average score across 3 eval scenarios
Optimize this skill with Tessl
npx tessl skill review --optimize ./skills/antigravity-azure-ai-projects-dotnet/SKILL.md

Discovery
32% — Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies the technology domain and lists feature areas, but it reads more like a product tagline than actionable skill guidance. It lacks concrete actions (verbs) and omits trigger conditions entirely, making it difficult for Claude to know when to select this skill over others in a large skill library.
Suggestions
Add a 'Use when...' clause specifying trigger scenarios, e.g., 'Use when the user asks about Azure AI Foundry projects, creating AI agents in .NET, or managing Azure AI deployments with C#'
Convert the noun list to concrete actions: 'Create and manage AI agents, configure project connections, deploy models, run evaluations' instead of just listing 'agents, connections, datasets...'
Include common user-facing terms like 'C#', 'dotnet', 'Azure OpenAI', or 'AI Foundry portal' that users would naturally mention
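Pulling these suggestions together, a revised frontmatter description might look like the following sketch (the wording is illustrative, not taken from the skill itself):

```yaml
description: >
  Create and manage AI agents, configure project connections, upload datasets,
  deploy models, and run evaluations with the Azure AI Projects SDK for .NET.
  Use when the user asks about Azure AI Foundry projects, building AI agents
  in C# / dotnet, or managing Azure AI deployments and indexes from .NET code.
```

This version leads with concrete actions, adds an explicit 'Use when...' clause, and works in the user-facing terms ('C#', 'dotnet', 'AI Foundry') flagged above.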
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Azure AI Projects SDK for .NET) and lists several capabilities (agents, connections, datasets, deployments, evaluations, indexes), but these are category names rather than concrete actions like 'create agents' or 'manage deployments'. | 2 / 3 |
| Completeness | Describes what the SDK covers but lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. The rubric caps completeness at 2 for this omission, and the 'what' is also weak (listing nouns, not actions). | 1 / 3 |
| Trigger Term Quality | Includes relevant technical terms like 'Azure AI Foundry', '.NET', 'agents', and 'deployments' that users might mention, but is missing common variations like 'C#', 'dotnet', or action-oriented phrases users would naturally say. | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of 'Azure AI Projects SDK' and '.NET' provides some distinctiveness, but could overlap with other Azure-related skills or general .NET SDK skills without clearer boundaries. | 2 / 3 |
| Total | | 7 / 12 — Passed |
Implementation
87% — Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a high-quality SDK reference skill with excellent actionability and conciseness. The code examples are complete and executable, covering all major client operations. The main weakness is the lack of explicit validation checkpoints in multi-step workflows, particularly around resource creation and cleanup operations where errors could leave orphaned resources.
Suggestions
Add explicit validation steps after agent/thread creation (e.g., check run.Status for failure states before proceeding)
Include error recovery guidance in the polling loop showing how to handle RunStatus.Failed or RunStatus.Cancelled states
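As a sketch of what that guidance could look like, the polling loop below adds an explicit validation checkpoint on terminal run states. It assumes the `Azure.AI.Agents.Persistent` client types (`PersistentAgentsClient`, `RunStatus`) covered by the skill; the variable names and recovery actions are illustrative, not prescribed by the skill itself:

```csharp
// Illustrative sketch — assumes an existing PersistentAgentsClient `client`
// and `thread`/`run` objects already created earlier in the workflow.
do
{
    await Task.Delay(TimeSpan.FromMilliseconds(500));
    run = await client.Runs.GetRunAsync(thread.Id, run.Id);
}
while (run.Status == RunStatus.Queued || run.Status == RunStatus.InProgress);

// Explicit validation checkpoint before reading messages or cleaning up.
if (run.Status == RunStatus.Failed)
{
    Console.Error.WriteLine($"Run failed: {run.LastError?.Message}");
    // Recovery: surface the error, optionally retry, then delete the
    // thread and agent so no orphaned resources are left behind.
}
else if (run.Status == RunStatus.Cancelled || run.Status == RunStatus.Expired)
{
    Console.Error.WriteLine($"Run did not complete: {run.Status}");
}
```

Making the terminal-state branches explicit like this is exactly the kind of checkpoint the Workflow Clarity dimension below marks as missing.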
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is lean and efficient, providing direct code examples without explaining basic concepts Claude already knows. Every section serves a purpose, with no padding or unnecessary explanation. | 3 / 3 |
| Actionability | All code examples are fully executable and copy-paste ready, with proper imports, complete method calls, and realistic usage patterns. The workflows show complete end-to-end implementations. | 3 / 3 |
| Workflow Clarity | Multi-step processes like agent creation include polling loops, but validation checkpoints are implicit rather than explicit. The cleanup steps are shown, but there is no explicit error recovery or validation before proceeding to the next step. | 2 / 3 |
| Progressive Disclosure | Content is well organized with clear sections, reference tables for tools and types, and external links to detailed documentation. The structure allows quick scanning while providing depth where needed. | 3 / 3 |
| Total | | 11 / 12 — Passed |
Validation
90% — Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing them or moving them under `metadata` | Warning |
| Total | | 10 / 11 — Passed |
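To clear the `frontmatter_unknown_keys` warning, unrecognized top-level keys can be moved under `metadata`. The key name below is hypothetical — substitute whichever key the validator actually flagged:

```yaml
# Before: an unrecognized top-level key triggers the warning
# some_custom_key: value

# After: custom keys live under metadata
metadata:
  some_custom_key: value
```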