Build content moderation applications using the Azure AI Content Safety SDK for Java.
Overall quality score: 52%

Evals: Pending - no eval scenarios have been run.
Issues: Passed - no known issues.

Optimize this skill with Tessl:

npx tessl skill review --optimize ./skills/antigravity-azure-ai-contentsafety-java/SKILL.md

Quality

Discovery
40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear and distinct technology niche (Azure AI Content Safety SDK for Java) but is too terse. It lacks specific concrete actions the skill enables and completely omits a 'Use when...' clause, making it harder for Claude to know when to select this skill from a large pool.
Suggestions
- Add a 'Use when...' clause such as 'Use when the user needs to build Java applications for content moderation, text analysis for harmful content, image safety classification, or working with the Azure AI Content Safety API.'
- List specific concrete actions like 'analyze text for harmful content, classify image safety levels, manage custom blocklists, detect protected material' to improve specificity.
- Include natural trigger term variations such as 'harmful content detection', 'text moderation', 'image moderation', 'content filtering', 'safety API' to improve keyword coverage.
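Taken together, these suggestions might yield frontmatter along these lines (a hypothetical sketch; the exact wording is an illustration, not the skill's actual frontmatter):

```yaml
---
name: antigravity-azure-ai-contentsafety-java
description: >
  Build Java applications with the Azure AI Content Safety SDK: analyze text
  for harmful content, classify image safety levels, manage custom blocklists,
  and detect protected material. Use when the user needs content moderation,
  harmful content detection, text or image moderation, content filtering, or
  the Azure AI Content Safety API from Java.
---
```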
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (content moderation) and the technology (Azure AI Content Safety SDK for Java), but doesn't list specific concrete actions like 'detect harmful text', 'classify images', 'manage blocklists', etc. | 2 / 3 |
| Completeness | Describes what (build content moderation apps with Azure AI Content Safety SDK for Java) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, missing 'Use when' caps completeness at 2, and the 'what' is also not very detailed, warranting a 1. | 1 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'content moderation', 'Azure AI Content Safety', 'SDK', and 'Java', but misses common variations users might say such as 'harmful content detection', 'text moderation', 'image moderation', 'safety API', or 'content filtering'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of 'Azure AI Content Safety SDK' and 'Java' creates a very specific niche that is unlikely to conflict with other skills. This is a clearly distinct technology and language combination. | 3 / 3 |
| Total | | 8 / 12 Passed |
Implementation
64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid API reference skill with excellent actionability—every operation has complete, executable Java code with proper imports. However, it's somewhat verbose for a skill file (could trim the Key Concepts table, Trigger Phrases, and When to Use sections), and it lacks workflow sequencing that would connect individual operations into validated multi-step processes. The blocklist management section in particular would benefit from being presented as a workflow with explicit validation steps.
Suggestions
- Remove the 'Trigger Phrases', 'When to Use', and 'Key Concepts' table sections—these waste tokens on content Claude already knows or that belongs in frontmatter.
- Add a connected workflow for blocklist operations showing the full create → add items → wait for propagation → verify → analyze sequence with explicit validation checkpoints.
- Consider splitting blocklist management into a separate BLOCKLIST.md file referenced from the main skill to improve progressive disclosure.
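As an illustration of making 'wait for propagation' an explicit, verified checkpoint rather than a best-practice bullet, the skill could include a small polling helper along these lines (a plain-Java sketch with hypothetical names; it assumes the caller supplies a verification probe, such as a test analyzeText call that should now hit the blocklist):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.BooleanSupplier;

/** Polls a verification probe until it succeeds or a deadline passes. */
final class PropagationWait {

    /**
     * Repeatedly invokes the probe (e.g. "does a known blocked term now
     * produce a blocklist match?") until it returns true or the timeout
     * elapses. Returns true if the checkpoint passed.
     */
    static boolean awaitPropagation(BooleanSupplier verified,
                                    Duration timeout,
                                    Duration pollInterval) throws InterruptedException {
        Instant deadline = Instant.now().plus(timeout);
        while (Instant.now().isBefore(deadline)) {
            if (verified.getAsBoolean()) {
                return true;
            }
            Thread.sleep(pollInterval.toMillis());
        }
        // One final check at the deadline before giving up.
        return verified.getAsBoolean();
    }
}
```

The blocklist workflow then reads create → add items → awaitPropagation(probe, …) → analyze, with the delay surfaced as a validation step instead of a footnote.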
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is mostly efficient with executable code examples, but includes some unnecessary content like the 'Key Concepts' table explaining harm categories (Claude knows what hate speech and violence are), the 'Trigger Phrases' and 'When to Use' sections add no value, and the 'Best Practices' section has some filler. The Harm Categories table describes rather than instructs. | 2 / 3 |
| Actionability | All code examples are fully executable Java with proper imports, concrete method calls, and copy-paste ready patterns. The examples cover the full API surface including text analysis, image analysis, blocklist CRUD operations, and error handling with specific exception types and status codes. | 3 / 3 |
| Workflow Clarity | The skill presents individual API operations clearly but lacks workflow sequencing for multi-step processes. For example, the blocklist workflow (create → add items → wait 5 minutes → analyze with blocklist) is not presented as a connected sequence with validation checkpoints. The 5-minute delay is mentioned only as a best practice bullet rather than as a critical validation step in the workflow. | 2 / 3 |
| Progressive Disclosure | The content is well-organized with clear section headers, but it's a monolithic document (~200 lines of code examples) that could benefit from splitting blocklist management into a separate reference file. There are no references to external files for advanced topics, though the content is structured with logical groupings. | 2 / 3 |
| Total | | 9 / 12 Passed |
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
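The frontmatter_unknown_keys warning can typically be cleared by moving non-spec keys under metadata, as the check itself suggests. A hypothetical before/after (the specific key shown is an assumption for illustration):

```yaml
# Before: an unknown top-level key triggers the warning
name: antigravity-azure-ai-contentsafety-java
description: Build content moderation applications using the Azure AI Content Safety SDK for Java.
version: 1.0.0        # unknown key

# After: non-spec keys moved under metadata
name: antigravity-azure-ai-contentsafety-java
description: Build content moderation applications using the Azure AI Content Safety SDK for Java.
metadata:
  version: 1.0.0
```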