Azure AI Content Safety SDK for Python. Use for detecting harmful content in text and images with multi-severity classification.
Score: 68

Quality: 61% (Does it follow best practices?)
Impact: Pending (no eval scenarios have been run)
Advisory: Suggest reviewing before use
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./skills/antigravity-azure-ai-contentsafety-py/SKILL.md`

Quality
Discovery: 57%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear niche (Azure AI Content Safety SDK) and provides some useful context about its purpose, making it distinctive. However, it lacks comprehensive action listing, misses common user trigger terms like 'content moderation' or 'toxicity', and the 'Use for' clause is more of a capability restatement than explicit trigger guidance.
Suggestions
- Expand trigger terms to include natural user phrases like 'content moderation', 'toxicity detection', 'NSFW filtering', 'hate speech detection', or 'content safety API'.
- Add an explicit 'Use when...' clause with situational triggers, e.g., 'Use when the user needs to integrate Azure content moderation, detect toxic or harmful text/images, or configure content safety policies in Python projects'.
- List more specific concrete actions such as 'analyze text for hate, violence, sexual, and self-harm categories, configure custom blocklists, interpret severity scores, and handle async content analysis' (see the sketch below for what these calls look like).
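For reference, the kind of concrete action that last suggestion describes looks roughly like this in the SDK. This is a minimal sketch, assuming azure-ai-contentsafety 1.x; the endpoint and key are placeholders, not values from the skill:

```python
# Minimal sketch: analyze one text string and read the per-category
# severity scores (assumes azure-ai-contentsafety >= 1.0; endpoint and
# key are placeholders).
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

response = client.analyze_text(AnalyzeTextOptions(text="Text to moderate"))

# categories_analysis holds one entry per harm category
# (Hate, SelfHarm, Sexual, Violence), each with a severity score.
for item in response.categories_analysis:
    print(f"{item.category}: severity {item.severity}")
```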
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Azure AI Content Safety SDK) and some actions ('detecting harmful content in text and images with multi-severity classification'), but doesn't list multiple specific concrete actions like configuring blocklists, analyzing severity levels, or handling specific content categories. | 2 / 3 |
| Completeness | The 'what' is partially addressed (detecting harmful content with multi-severity classification) and 'Use for' provides a weak trigger clause, but it lacks explicit 'when' guidance such as 'Use when the user asks about content moderation, safety filtering, or harmful content detection in Azure projects'. | 2 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'Azure AI Content Safety', 'harmful content', 'text and images', and 'multi-severity classification', but misses common user variations like 'content moderation', 'toxicity detection', 'NSFW', 'hate speech', or 'content filtering'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The description is clearly scoped to the Azure AI Content Safety SDK for Python, which is a very specific niche. It's unlikely to conflict with other skills due to the explicit mention of the SDK name and platform. | 3 / 3 |
| Total | | 9 / 12 (Passed) |
Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid API reference skill with excellent executable code examples covering the main Azure AI Content Safety SDK operations. Its main weaknesses are the lack of error handling/validation patterns for a safety-critical SDK, some unnecessary boilerplate sections (When to Use, Limitations, generic Best Practices), and a monolithic structure that could benefit from splitting reference material into separate files.
Suggestions
- Add error handling examples (try/except for HttpResponseError, authentication failures, rate limiting) since content safety operations are critical and failures need graceful handling; see the sketch after this list.
- Remove the boilerplate 'When to Use' and 'Limitations' sections and trim the 'Best Practices' to only non-obvious, SDK-specific guidance.
- Move the reference tables (Harm Categories, Severity Scale, Client Types) to a separate REFERENCE.md file and link to it from the main skill.
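A sketch of the error handling the first suggestion asks for. The status-code branching and the decision to fail fast on auth errors are illustrative assumptions, not content taken from the skill:

```python
# Illustrative error handling sketch; endpoint and key are placeholders.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import ClientAuthenticationError, HttpResponseError

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

try:
    response = client.analyze_text(AnalyzeTextOptions(text="Text to moderate"))
except ClientAuthenticationError:
    # Wrong key or endpoint: fail fast rather than retrying.
    raise
except HttpResponseError as e:
    if e.status_code == 429:
        # Rate limited: back off and retry, or queue the request.
        pass
    elif e.error is not None:
        # The service error payload, when present, carries a code and message.
        print(f"Analysis failed: {e.error.code}: {e.error.message}")
    raise
```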
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is mostly efficient with executable code examples, but includes some unnecessary elements: the Harm Categories table descriptions are things Claude already knows, the 'Best Practices' section is generic advice, and the 'When to Use' and 'Limitations' sections are boilerplate filler that add no value. | 2 / 3 |
| Actionability | All code examples are fully executable, copy-paste ready with correct imports, and cover the main SDK operations (text analysis, image analysis, blocklist management, severity configuration; blocklist management is sketched below). The examples use real SDK classes and methods with proper patterns. | 3 / 3 |
| Workflow Clarity | The skill presents individual operations clearly but lacks error handling patterns (e.g., what happens when API calls fail, rate limiting, invalid credentials). For a content safety SDK where misclassification has consequences, there's no validation/verification guidance for checking results or handling edge cases. | 2 / 3 |
| Progressive Disclosure | The content is a long monolithic file (~180 lines) with good section headers but no references to external files. The blocklist management section and reference tables could be split out, and the inline tables add bulk that could be referenced separately. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
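Since blocklist management is one of the operations the table credits, here is a rough sketch of what it involves, assuming azure-ai-contentsafety 1.x; the blocklist name and item text are hypothetical:

```python
# Sketch of blocklist management (assumes azure-ai-contentsafety >= 1.0);
# the list name and item are hypothetical, credentials are placeholders.
from azure.ai.contentsafety import BlocklistClient
from azure.ai.contentsafety.models import (
    AddOrUpdateTextBlocklistItemsOptions,
    TextBlocklist,
    TextBlocklistItem,
)
from azure.core.credentials import AzureKeyCredential

client = BlocklistClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# Create (or update) a named blocklist, then add items to it.
client.create_or_update_text_blocklist(
    blocklist_name="banned-terms",
    options=TextBlocklist(blocklist_name="banned-terms", description="Demo list"),
)
client.add_or_update_blocklist_items(
    blocklist_name="banned-terms",
    options=AddOrUpdateTextBlocklistItemsOptions(
        blocklist_items=[TextBlocklistItem(text="some banned phrase")]
    ),
)
```

In the 1.x API, analysis calls can then enforce the list through AnalyzeTextOptions's blocklist_names parameter (with halt_on_blocklist_hit to stop on a match).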
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |