Azure AI Projects SDK for Java. High-level SDK for Azure AI Foundry project management including connections, datasets, indexes, and evaluations.
Quality score: 48%. Does it follow best practices?
Impact: Pending (no eval scenarios have been run).
Issues: Passed (no known issues).
Optimize this skill with Tessl:
npx tessl skill review --optimize ./skills/antigravity-azure-ai-projects-java/SKILL.md

Quality

Discovery
32%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies the domain and lists high-level capability areas but lacks concrete action verbs and, critically, has no 'Use when...' clause to guide skill selection. The terms used are somewhat specific to Azure AI but read more like a library summary than actionable skill-selection guidance.
Suggestions
- Add an explicit 'Use when...' clause, e.g., 'Use when the user needs to work with Azure AI Foundry projects in Java, including creating or managing connections, datasets, indexes, or running evaluations.'
- Replace category nouns with concrete action phrases, e.g., 'Create and manage Azure AI Foundry project connections, upload and query datasets, build search indexes, and run model evaluations.'
- Include natural trigger terms users might say, such as 'azure-ai-projects', 'AI Foundry Java client', 'AzureAIProjectClient', or 'com.azure:azure-ai-projects'.
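Putting the three suggestions together, a revised frontmatter description might read as follows. This is an illustrative sketch, not the skill's actual metadata:

```yaml
# Hypothetical revised SKILL.md frontmatter; wording is illustrative
name: antigravity-azure-ai-projects-java
description: >
  Create and manage Azure AI Foundry project connections, upload and query
  datasets, build search indexes, and run model evaluations with the Java SDK
  (com.azure:azure-ai-projects, AzureAIProjectClient). Use when the user needs
  to work with Azure AI Foundry projects in Java, including creating or
  managing connections, datasets, indexes, or running evaluations.
```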
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Azure AI Projects SDK for Java) and lists some actions/areas (project management, connections, datasets, indexes, evaluations), but these are more like categories than concrete actions (e.g., 'create connections', 'manage datasets', 'run evaluations' would be more specific). | 2 / 3 |
| Completeness | Describes what the skill covers (Azure AI Projects SDK capabilities) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, a missing 'Use when...' clause caps completeness at 2, and since the 'what' is also only moderately detailed, this scores a 1. | 1 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'Azure AI', 'SDK', 'Java', 'Azure AI Foundry', 'connections', 'datasets', 'indexes', 'evaluations', but misses common variations users might say such as 'azure-ai-projects', Maven artifact names, or phrases like 'AI project client'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of 'Azure AI Projects SDK' and 'Java' provides reasonable distinctiveness, but could overlap with other Azure AI skills or general Java SDK skills. The mention of 'Azure AI Foundry' helps narrow it, but without explicit trigger boundaries it could still conflict with related Azure AI skills. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation
64%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid SDK reference skill with good executable code examples and clear authentication setup. Its main weaknesses are the lack of connected workflow sequences showing how operations chain together, some boilerplate/generic content that wastes tokens, and best practices that state things Claude already knows. The skill would benefit from trimming generic advice and adding a workflow showing a typical end-to-end project setup.
Suggestions
- Remove the generic 'When to Use' and 'Limitations' boilerplate sections, and trim 'Best Practices' to only non-obvious SDK-specific guidance.
- Add a brief end-to-end workflow showing a common sequence (e.g., authenticate → create dataset → create index → run evaluation) with validation checkpoints between steps.
- Consider splitting detailed operations (evaluations, datasets) into separate referenced files to improve progressive disclosure.
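The suggested end-to-end workflow could be sketched as below. The step bodies are placeholders standing in for the real azure-ai-projects client calls (whose actual API names may differ); the point is the shape of the sequence and the validation checkpoint after every step:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the suggested workflow: authenticate -> create dataset ->
// create index -> run evaluation. Step bodies are placeholders; real code
// would call the azure-ai-projects sub-clients at each step.
public class ProjectWorkflowSketch {
    private final List<String> log = new ArrayList<>();

    // Each step returns false on failure so the chain can stop early.
    boolean authenticate()  { log.add("authenticate");   return true; } // e.g. DefaultAzureCredential
    boolean createDataset() { log.add("create-dataset"); return true; }
    boolean createIndex()   { log.add("create-index");   return true; }
    boolean runEvaluation() { log.add("run-evaluation"); return true; }

    List<String> run() {
        // Validation checkpoint between steps: abort on the first failure.
        if (!authenticate())  return log;
        if (!createDataset()) return log;
        if (!createIndex())   return log;
        if (!runEvaluation()) return log;
        log.add("done");
        return log;
    }

    public static void main(String[] args) {
        // prints: authenticate -> create-dataset -> create-index -> run-evaluation -> done
        System.out.println(String.join(" -> ", new ProjectWorkflowSketch().run()));
    }
}
```

Returning early on failure (rather than throwing from inside each step) keeps the checkpoint logic visible in one place, which is what the suggestion asks the skill to demonstrate.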
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Generally efficient with good code examples, but includes some unnecessary content like the generic 'When to Use' and 'Limitations' boilerplate sections that add no value. The 'Best Practices' section contains advice Claude already knows (e.g., 'use environment variables', 'handle pagination'). | 2 / 3 |
| Actionability | Provides fully executable Java code examples for authentication, listing connections, listing indexes, creating indexes, and error handling. Code is copy-paste ready with proper imports and realistic usage patterns. | 3 / 3 |
| Workflow Clarity | The client hierarchy and sub-client creation pattern is clear, but there's no explicit workflow sequence for common multi-step operations (e.g., authenticate → create dataset → create index → run evaluation). The operations are presented as isolated examples without showing how they connect. | 2 / 3 |
| Progressive Disclosure | Reference links to external docs, samples, and source code are provided, which is good. However, all content is inline in a single file with no signaling toward separate detailed guides for complex topics like evaluations or dataset management. The content is moderately long and could benefit from splitting advanced topics. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
Validation
90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure: 10 / 11 Passed

| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 (Passed) |