Build content moderation applications using the Azure AI Content Safety SDK for Java.
57

Does it follow best practices? 48%

Impact: Pending (no eval scenarios have been run)

Passed: no known issues
Optimize this skill with Tessl
npx tessl skill review --optimize ./skills/azure-ai-contentsafety-java/SKILL.md

Quality
Discovery: 40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear and distinct technology niche (Azure AI Content Safety SDK for Java) which helps with distinctiveness, but it lacks specific concrete actions and completely omits a 'Use when...' clause. Adding explicit trigger conditions and listing specific capabilities (e.g., text analysis, image moderation, blocklist management) would significantly improve its effectiveness for skill selection.
Suggestions
Add a 'Use when...' clause with trigger terms like 'content moderation', 'harmful content', 'Azure Content Safety', 'text analysis for safety', 'image moderation Java'.
List specific concrete actions such as 'analyze text for harmful content, moderate images, manage custom blocklists, detect protected material' to improve specificity.
Include common user-facing variations like 'content filtering', 'safety API', 'harmful content detection', 'toxicity detection' to improve trigger term coverage.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (content moderation) and the technology (Azure AI Content Safety SDK for Java), but doesn't list specific concrete actions like 'detect harmful text', 'classify images', 'manage blocklists', etc. | 2 / 3 |
| Completeness | Describes what (build content moderation apps with Azure AI Content Safety SDK for Java) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, missing 'Use when' caps completeness at 2, and the 'what' is also not very detailed, warranting a 1. | 1 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'content moderation', 'Azure AI Content Safety', 'SDK', and 'Java', but misses common variations users might say such as 'harmful content detection', 'text moderation', 'image moderation', 'blocklist', or 'safety API'. | 2 / 3 |
| Distinctiveness Conflict Risk | The combination of 'Azure AI Content Safety SDK' and 'Java' creates a very specific niche that is unlikely to conflict with other skills. This is a clearly distinct technology and language pairing. | 3 / 3 |
| Total | | 8 / 12 Passed |
Implementation: 57%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides comprehensive, executable Java code examples for the Azure AI Content Safety SDK, making it highly actionable. However, it suffers from being a monolithic reference document with no progressive disclosure, includes some unnecessary explanatory content and boilerplate sections, and lacks clear multi-step workflow guidance with validation checkpoints for operations like blocklist setup and usage.
Suggestions
Split blocklist management operations into a separate BLOCKLIST.md reference file and link to it from the main skill, keeping only a quick-start example inline.
Add an explicit end-to-end workflow for the blocklist use case: create → add items → verify propagation (wait/retry) → analyze text with blocklist → validate results. A sketch of this sequence follows the list below.
Remove the 'Trigger Phrases', generic 'When to Use', and 'Limitations' boilerplate sections, and trim the Key Concepts harm category descriptions — Claude already understands these concepts.
Add a validation step after blocklist item addition (e.g., list items to confirm they were added) before proceeding to use the blocklist in analysis.
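To make the suggested workflow concrete, here is a minimal sketch of the blocklist sequence in Java. It assumes the azure-ai-contentsafety client surface (BlocklistClient, addOrUpdateBlocklistItems, listTextBlocklistItems, AnalyzeTextOptions.setBlocklistNames); the blocklist name, sample text, and CONTENT_SAFETY_* environment variables are illustrative, and all names should be checked against the SDK version the skill targets.

```java
import com.azure.ai.contentsafety.BlocklistClient;
import com.azure.ai.contentsafety.BlocklistClientBuilder;
import com.azure.ai.contentsafety.ContentSafetyClient;
import com.azure.ai.contentsafety.ContentSafetyClientBuilder;
import com.azure.ai.contentsafety.models.AddOrUpdateTextBlocklistItemsOptions;
import com.azure.ai.contentsafety.models.AnalyzeTextOptions;
import com.azure.ai.contentsafety.models.AnalyzeTextResult;
import com.azure.ai.contentsafety.models.TextBlocklistItem;
import com.azure.ai.contentsafety.models.TextBlocklistMatch;
import com.azure.core.credential.KeyCredential;
import com.azure.core.http.rest.RequestOptions;
import com.azure.core.util.BinaryData;

import java.util.Arrays;
import java.util.Collections;
import java.util.Map;

public class BlocklistWorkflowSketch {
    public static void main(String[] args) throws InterruptedException {
        String endpoint = System.getenv("CONTENT_SAFETY_ENDPOINT");
        KeyCredential credential = new KeyCredential(System.getenv("CONTENT_SAFETY_KEY"));

        BlocklistClient blocklistClient = new BlocklistClientBuilder()
            .endpoint(endpoint).credential(credential).buildClient();
        ContentSafetyClient contentSafetyClient = new ContentSafetyClientBuilder()
            .endpoint(endpoint).credential(credential).buildClient();

        String blocklistName = "ProductBanList";

        // 1. Create (or update) the blocklist.
        blocklistClient.createOrUpdateTextBlocklistWithResponse(
            blocklistName,
            BinaryData.fromObject(Map.of("description", "Terms banned in product reviews")),
            new RequestOptions());

        // 2. Add items to the blocklist.
        blocklistClient.addOrUpdateBlocklistItems(
            blocklistName,
            new AddOrUpdateTextBlocklistItemsOptions(
                Arrays.asList(new TextBlocklistItem("bannedterm"))));

        // 3. Validate: list the items to confirm they were stored, then allow time to
        //    propagate (the service documents a delay of roughly five minutes).
        blocklistClient.listTextBlocklistItems(blocklistName)
            .forEach(item -> System.out.println("stored item: " + item.getText()));
        Thread.sleep(5 * 60 * 1000L); // simple wait; a retry loop is preferable in real code

        // 4. Analyze text with the blocklist attached.
        AnalyzeTextOptions options = new AnalyzeTextOptions("this review contains bannedterm")
            .setBlocklistNames(Collections.singletonList(blocklistName))
            .setHaltOnBlocklistHit(true);
        AnalyzeTextResult result = contentSafetyClient.analyzeText(options);

        // 5. Validate results: check for blocklist matches before trusting the outcome.
        if (result.getBlocklistsMatch() != null) {
            for (TextBlocklistMatch match : result.getBlocklistsMatch()) {
                System.out.println("matched '" + match.getBlocklistItemText()
                    + "' in blocklist " + match.getBlocklistName());
            }
        }
    }
}
```

A fixed sleep is the simplest way to honour the propagation delay; a retry loop that re-analyzes a known blocked phrase until a match appears is more reliable in practice.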
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is mostly efficient with executable code examples, but includes some unnecessary content: the 'Key Concepts' table explaining harm categories (Claude knows what hate/violence/sexual content means), the 'Trigger Phrases' and 'When to Use' boilerplate sections add no value, and the 'Best Practices' section has some filler. The Limitations section is generic boilerplate. | 2 / 3 |
| Actionability | All code examples are fully executable Java with proper imports, concrete method calls, and realistic usage patterns. The examples cover the full API surface including text analysis, image analysis, blocklist CRUD operations, and error handling — all copy-paste ready. | 3 / 3 |
| Workflow Clarity | The skill presents individual API operations clearly but lacks workflow sequencing for multi-step processes. For example, the blocklist workflow (create → add items → wait 5 min → analyze with blocklist) is never presented as a connected sequence with validation checkpoints. The '~5 minutes delay' is buried in best practices rather than being an explicit checkpoint in a workflow. | 2 / 3 |
| Progressive Disclosure | The content is a monolithic wall of text with all API operations inlined — over 200 lines of code examples that could be split into separate reference files (e.g., blocklist management in its own file). There are no references to external files or any attempt to organize content hierarchically beyond flat H2/H3 headings. | 1 / 3 |
| Total | | 8 / 12 Passed |
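For comparison with the actionability notes above, the core text-analysis call the skill documents is compact. The sketch below is hedged: it assumes the same client surface and that TextCategoriesAnalysis exposes getCategory() and getSeverity(), and the blocking threshold is an application choice rather than an SDK default.

```java
import com.azure.ai.contentsafety.ContentSafetyClient;
import com.azure.ai.contentsafety.ContentSafetyClientBuilder;
import com.azure.ai.contentsafety.models.AnalyzeTextOptions;
import com.azure.ai.contentsafety.models.AnalyzeTextResult;
import com.azure.ai.contentsafety.models.TextCategoriesAnalysis;
import com.azure.core.credential.KeyCredential;

public class AnalyzeTextSketch {
    public static void main(String[] args) {
        ContentSafetyClient client = new ContentSafetyClientBuilder()
            .endpoint(System.getenv("CONTENT_SAFETY_ENDPOINT"))
            .credential(new KeyCredential(System.getenv("CONTENT_SAFETY_KEY")))
            .buildClient();

        AnalyzeTextResult result = client.analyzeText(new AnalyzeTextOptions("text to moderate"));

        // Each category (Hate, SelfHarm, Sexual, Violence) comes back with a severity level;
        // the calling application decides which threshold counts as a block.
        int blockThreshold = 4; // illustrative threshold, not an SDK default
        for (TextCategoriesAnalysis analysis : result.getCategoriesAnalysis()) {
            boolean blocked = analysis.getSeverity() != null && analysis.getSeverity() >= blockThreshold;
            System.out.println(analysis.getCategory() + ": severity " + analysis.getSeverity()
                + (blocked ? " (block)" : " (allow)"));
        }
    }
}
```

Keeping a quick-start like this inline while moving blocklist management to a separate reference file would also address the progressive-disclosure score.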
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |