Azure AI Content Safety SDK for Python. Use for detecting harmful content in text and images with multi-severity classification.
Install with Tessl CLI
npx tessl i github:boisenoise/skills-collections --skill azure-ai-contentsafety-py79
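The SDK this skill documents is used roughly as sketched below (requires `pip install azure-ai-contentsafety`; the environment-variable names and severity thresholds are illustrative assumptions, not part of the skill):

```python
import os

# Illustrative policy for the service's 0-7 severity scale; thresholds are assumptions.
def action_for(severity: int) -> str:
    if severity >= 6:
        return "block"
    if severity >= 4:
        return "review"
    return "allow"

def analyze_text(text: str) -> dict:
    # SDK imports are kept local so action_for stays importable without the package.
    from azure.ai.contentsafety import ContentSafetyClient
    from azure.ai.contentsafety.models import AnalyzeTextOptions
    from azure.core.credentials import AzureKeyCredential

    client = ContentSafetyClient(
        os.environ["CONTENT_SAFETY_ENDPOINT"],  # assumed env var names
        AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
    )
    result = client.analyze_text(AnalyzeTextOptions(text=text))
    # Each entry pairs a category (Hate, SelfHarm, Sexual, Violence) with a severity.
    return {c.category: action_for(c.severity or 0) for c in result.categories_analysis}
```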
Quality
68%
Does it follow best practices?
Impact
100%
1.07x
Average score across 3 eval scenarios
Optimize this skill with Tessl
npx tessl skill review --optimize ./skills/antigravity-azure-ai-contentsafety-py/SKILL.md
Discovery
57%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description adequately identifies the specific SDK and its core purpose but lacks comprehensive action verbs and natural trigger terms users would employ. It has good distinctiveness due to the specific Azure platform reference, but would benefit from expanded trigger terms and more concrete capability examples.
Suggestions
Add more natural trigger terms users would say, such as 'content moderation', 'NSFW detection', 'toxicity', 'inappropriate content', or 'content filtering'.
Expand the 'Use for...' clause to include explicit scenarios like 'Use when moderating user-generated content, checking uploads for policy violations, or implementing content safety pipelines'.
List more specific concrete actions such as 'analyze text for hate speech, detect violent or sexual imagery, return severity scores for content categories'.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Azure AI Content Safety SDK) and mentions 'detecting harmful content' and 'multi-severity classification', but doesn't list specific concrete actions like 'analyze text for hate speech, detect violent imagery, classify content severity levels'. | 2 / 3 |
| Completeness | Has a 'Use for...' clause which addresses when to use it, but the trigger guidance is limited. It answers what (detecting harmful content) and partially when (for harmful content detection), but lacks explicit trigger terms like 'Use when the user mentions content moderation, safety checks, or harmful content filtering'. | 2 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'harmful content', 'text and images', 'content safety', but misses common variations users might say, like 'moderation', 'inappropriate content', 'NSFW', 'content filtering', or 'toxicity detection'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Clearly specific to Azure AI Content Safety SDK, with a distinct focus on harmful content detection and multi-severity classification. Unlikely to conflict with other skills due to the specific platform (Azure) and narrow use case. | 3 / 3 |
| Total | | 9 / 12 (Passed) |
Implementation
79%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-crafted SDK reference skill with excellent conciseness and actionability. The code examples are complete and immediately usable. The main weaknesses are the lack of explicit workflow sequencing with validation steps and the monolithic structure that could benefit from splitting advanced topics into separate files.
Suggestions
Add a workflow section showing the complete flow: authenticate -> analyze -> handle results -> take action, with explicit validation/error handling checkpoints
Split blocklist management into a separate BLOCKLIST.md file and reference it from the main skill
Add error handling examples showing how to catch and respond to API errors or rate limits
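The suggested authenticate -> analyze -> handle results -> take action flow, with an explicit error-handling checkpoint, could be sketched as follows. This is a sketch under assumptions: the env var names, the `BLOCK_THRESHOLD` policy, and the retry behavior are illustrative; `HttpResponseError` is the standard `azure-core` exception type.

```python
import os

BLOCK_THRESHOLD = 4  # assumption: block at severity >= 4 on the 0-7 scale

def decide(worst_severity: int) -> str:
    # Pure decision step, testable without calling the service.
    return "block" if worst_severity >= BLOCK_THRESHOLD else "allow"

def moderate(text: str) -> str:
    from azure.ai.contentsafety import ContentSafetyClient
    from azure.ai.contentsafety.models import AnalyzeTextOptions
    from azure.core.credentials import AzureKeyCredential
    from azure.core.exceptions import HttpResponseError

    # 1. Authenticate
    client = ContentSafetyClient(
        os.environ["CONTENT_SAFETY_ENDPOINT"],
        AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
    )
    # 2. Analyze, with an error-handling checkpoint
    try:
        result = client.analyze_text(AnalyzeTextOptions(text=text))
    except HttpResponseError as err:
        if err.status_code == 429:  # throttled: caller should back off and retry
            return "retry"
        raise
    # 3. Handle results -> 4. Take action
    worst = max((c.severity or 0) for c in result.categories_analysis)
    return decide(worst)
```

Separating the pure `decide` step keeps the moderation policy unit-testable without network access.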
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient, providing executable code examples without explaining basic concepts Claude already knows. Every section serves a purpose with no padding or unnecessary context. | 3 / 3 |
| Actionability | All code examples are complete, executable, and copy-paste ready. Includes proper imports, authentication patterns, and real API usage patterns with concrete method calls and response handling. | 3 / 3 |
| Workflow Clarity | The skill presents clear individual operations but lacks explicit workflow sequencing for multi-step processes like blocklist creation -> adding items -> using in analysis. No validation checkpoints or error handling patterns are shown. | 2 / 3 |
| Progressive Disclosure | Content is well-organized with clear sections and tables, but everything is inline in one file. For a skill of this length (~150 lines), the API reference tables and blocklist management could be split into separate files with clear navigation. | 2 / 3 |
| Total | | 10 / 12 (Passed) |
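The blocklist sequencing the review flags (create blocklist -> add items -> use in analysis) might look like the sketch below. The blocklist name and item text are placeholders, the env var names are assumptions, and the method calls follow the SDK's published `BlocklistClient` samples; verify them against the installed version.

```python
import os

def blocklist_hit(matches) -> bool:
    # True when the analyze call reported any blocklist matches.
    return bool(matches)

def setup_and_use_blocklist(text: str) -> bool:
    from azure.ai.contentsafety import BlocklistClient, ContentSafetyClient
    from azure.ai.contentsafety.models import (
        AddOrUpdateTextBlocklistItemsOptions,
        AnalyzeTextOptions,
        TextBlocklistItem,
    )
    from azure.core.credentials import AzureKeyCredential

    endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
    credential = AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"])
    name = "banned-terms"  # placeholder blocklist name

    # 1. Create (or update) the blocklist
    blocklist_client = BlocklistClient(endpoint, credential)
    blocklist_client.create_or_update_text_blocklist(
        blocklist_name=name,
        options={"description": "Terms blocked by policy"},
    )
    # 2. Add items to it
    blocklist_client.add_or_update_blocklist_items(
        blocklist_name=name,
        options=AddOrUpdateTextBlocklistItemsOptions(
            blocklist_items=[TextBlocklistItem(text="example-banned-term")]
        ),
    )
    # 3. Reference it during analysis
    client = ContentSafetyClient(endpoint, credential)
    result = client.analyze_text(
        AnalyzeTextOptions(text=text, blocklist_names=[name])
    )
    return blocklist_hit(result.blocklists_match)
```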
Validation
90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 (Passed) |
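The single `frontmatter_unknown_keys` warning is typically resolved by moving nonstandard keys under `metadata`. A hypothetical before/after (the `author` key is invented for illustration; the actual offending key is not named in the report):

```yaml
# Before: `author` is not a recognized top-level frontmatter key
name: azure-ai-contentsafety-py
author: boisenoise

# After: unknown keys moved under metadata
name: azure-ai-contentsafety-py
metadata:
  author: boisenoise
```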
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.