
azure-ai-projects-ts

High-level SDK for Azure AI Foundry projects with agents, connections, deployments, and evaluations.


Quality

48%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Advisory

Suggest reviewing before use

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/azure-ai-projects-ts/SKILL.md

Quality

Discovery

32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies the domain (Azure AI Foundry) and lists broad capability areas but lacks concrete action verbs, explicit trigger guidance, and a 'Use when...' clause. It reads more like a tagline than a functional description that would help Claude reliably select this skill from a large pool of options.

Suggestions

Add a 'Use when...' clause with explicit triggers, e.g., 'Use when the user asks about Azure AI Foundry projects, creating AI agents, managing Azure AI connections, deploying models, or running evaluations.'

Replace the high-level category nouns with specific action verbs, e.g., 'Creates and manages AI agents, configures project connections, deploys models, and runs evaluation pipelines in Azure AI Foundry.'

Include natural keyword variations users might say, such as 'Azure AI project', 'AI Foundry SDK', 'azure-ai-projects', 'model evaluation', or 'agent orchestration'.
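Applied together, these suggestions might yield frontmatter along the following lines. This is a hypothetical sketch assembled from the examples above, not the skill's actual metadata:

```yaml
---
name: azure-ai-projects-ts
description: >
  Creates and manages AI agents, configures project connections, deploys
  models, and runs evaluation pipelines in Azure AI Foundry. Use when the
  user asks about Azure AI Foundry projects, the azure-ai-projects SDK,
  creating AI agents, managing Azure AI connections, deploying models, or
  running evaluations.
---
```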

Dimension / Reasoning / Score

Specificity

Names the domain (Azure AI Foundry) and lists some capabilities (agents, connections, deployments, evaluations), but these are high-level categories rather than concrete actions. No verbs describing what specific operations can be performed.

2 / 3

Completeness

Describes what at a high level (SDK for Azure AI Foundry projects) but completely lacks any 'Use when...' clause or explicit trigger guidance. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and the 'what' is also weak, so this scores a 1.

1 / 3

Trigger Term Quality

Includes relevant keywords like 'Azure AI Foundry', 'agents', 'deployments', and 'evaluations' that users might mention, but misses common variations like 'Azure AI', 'AI project', 'model deployment', or SDK-specific terms users might naturally use.

2 / 3

Distinctiveness / Conflict Risk

The mention of 'Azure AI Foundry' provides some distinctiveness, but 'agents', 'deployments', and 'evaluations' are generic terms that could overlap with other Azure or cloud-related skills. Could conflict with general Azure SDK or AI deployment skills.

2 / 3

Total: 7 / 12

Passed

Implementation

64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid API reference skill with excellent actionability—nearly every operation has executable TypeScript code. However, it's somewhat long and monolithic, with the agent tools section alone taking significant space that could be offloaded to a reference file. The skill lacks validation/error handling guidance for multi-step operations like agent creation and dataset uploads.

Suggestions

Add validation checkpoints for multi-step workflows (e.g., verify dataset upload succeeded before proceeding, check agent creation response for errors)

Extract the extensive agent tools examples into a separate AGENT_TOOLS.md reference file, keeping only one or two examples inline

Remove the tautological 'When to Use' section and trim obvious best practices like 'don't hardcode credentials'
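The first suggestion can be sketched as a validation-checkpoint pattern. The client below is a stand-in so the example is self-contained and runnable; the real @azure/ai-projects method names and response shapes would differ, so treat every identifier here as an assumption rather than the SDK's actual surface:

```typescript
interface UploadResult { id?: string; error?: string }
interface AgentResult { id?: string; error?: string }

// Hypothetical client standing in for the real AIProjectClient.
const client = {
  async uploadDataset(name: string): Promise<UploadResult> {
    return { id: `dataset-${name}` };
  },
  async createAgent(datasetId: string): Promise<AgentResult> {
    return datasetId ? { id: "agent-1" } : { error: "missing dataset" };
  },
};

async function provision(name: string): Promise<string> {
  // Checkpoint 1: verify the dataset upload succeeded before proceeding.
  const upload = await client.uploadDataset(name);
  if (!upload.id) throw new Error(`Dataset upload failed: ${upload.error}`);

  // Checkpoint 2: check the agent creation response for errors.
  const agent = await client.createAgent(upload.id);
  if (!agent.id) throw new Error(`Agent creation failed: ${agent.error}`);

  return agent.id;
}

provision("eval-data").then((id) => console.log(id)); // logs "agent-1"
```

The point is that each step's result is inspected before the next step runs, so a silent upload failure cannot cascade into a confusing agent-creation error.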

Dimension / Reasoning / Score

Conciseness

Generally efficient with good code examples, but includes some unnecessary content like the 'When to Use' section which is a tautology, and the 'Best Practices' section contains some obvious advice (e.g., 'don't hardcode' credentials). The operation groups table is useful but some sections could be tighter.

2 / 3

Actionability

Provides fully executable, copy-paste ready TypeScript code for every operation group—authentication, agents with multiple tool types, connections, deployments, datasets, and indexes. Includes concrete import statements, environment variables, and specific API calls.

3 / 3

Workflow Clarity

The 'Run Agent' section shows a clear multi-step workflow (create conversation → generate response → cleanup), but there are no validation checkpoints or error handling guidance. For operations like dataset uploads and index creation, there's no verification step to confirm success or handle failures.

2 / 3

Progressive Disclosure

Content is well-structured with clear headers and sections, but it's a monolithic file with no references to external documentation for advanced topics. The extensive agent tools section (6 different tool types) could be split into a separate reference file, keeping SKILL.md as a concise overview.

2 / 3

Total: 9 / 12

Passed
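The create conversation → generate response → cleanup workflow flagged under Workflow Clarity could gain the missing error handling with a try/finally, which guarantees cleanup even when the model call throws. The agents object below is a mock so the sketch runs standalone; the actual SDK calls are assumptions:

```typescript
interface Conversation { id: string; closed: boolean }

// Stand-in for the SDK's agent operations.
const agents = {
  async createConversation(): Promise<Conversation> {
    return { id: "conv-1", closed: false };
  },
  async generateResponse(conv: Conversation, prompt: string): Promise<string> {
    if (conv.closed) throw new Error("conversation already closed");
    return `echo: ${prompt}`;
  },
  async deleteConversation(conv: Conversation): Promise<void> {
    conv.closed = true;
  },
};

async function runAgent(prompt: string): Promise<string> {
  const conv = await agents.createConversation();
  try {
    // Failures here propagate to the caller instead of being swallowed.
    return await agents.generateResponse(conv, prompt);
  } finally {
    // Cleanup runs whether generateResponse succeeded or threw.
    await agents.deleteConversation(conv);
  }
}

runAgent("ping").then((r) => console.log(r)); // logs "echo: ping"
```

This keeps the three-step workflow intact while closing the gap the review notes: no verification or failure path around the middle step.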

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 10 / 11

Passed

Repository: sickn33/antigravity-awesome-skills (Reviewed)

