
azure-ai-contentsafety-py

Azure AI Content Safety SDK for Python. Use for detecting harmful content in text and images with multi-severity classification.

Score: 80

Quality: 70% (Does it follow best practices?)

Impact: 100% / 1.07x (Average score across 3 eval scenarios)

Security by Snyk

Advisory: Suggest reviewing before use

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/antigravity-azure-ai-contentsafety-py/SKILL.md

Quality

Discovery

75%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is functional and clearly scoped to a specific SDK with an explicit 'Use for' clause, making it easy for Claude to know when to select it. However, it could benefit from listing more concrete actions (e.g., analyze text categories, configure severity thresholds, manage blocklists) and including more natural trigger terms users might use like 'content moderation' or 'toxicity detection'.

Suggestions

Add more specific concrete actions like 'analyze text for hate, violence, sexual, and self-harm categories, configure severity thresholds, manage custom blocklists'.

Include additional natural trigger terms users might say, such as 'content moderation', 'toxicity detection', 'content filtering', 'NSFW detection', or 'offensive content'.

Dimension / Reasoning / Score

Specificity

Names the domain (Azure AI Content Safety SDK) and a key action (detecting harmful content with multi-severity classification), but doesn't list multiple specific concrete actions like configuring thresholds, analyzing categories, or handling blocklists.

2 / 3

Completeness

Clearly answers both 'what' (Azure AI Content Safety SDK for Python) and 'when' ('Use for detecting harmful content in text and images with multi-severity classification'), with an explicit 'Use for...' trigger clause.

3 / 3

Trigger Term Quality

Includes relevant keywords like 'Azure AI Content Safety', 'harmful content', 'text and images', and 'multi-severity classification', but misses common user variations like 'content moderation', 'toxicity detection', 'hate speech', 'NSFW', or 'content filtering'.

2 / 3

Distinctiveness / Conflict Risk

Very specific niche targeting the Azure AI Content Safety SDK specifically, which is unlikely to conflict with other skills. The combination of Azure, content safety, and SDK makes it clearly distinguishable.

3 / 3

Total: 10 / 12 (Passed)
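The "configure severity thresholds" action the suggestions mention can be sketched without the SDK itself. This is a minimal, hedged example: the response shape (a `categoriesAnalysis` list of category/severity pairs on the service's 0–7 scale) follows the Content Safety REST payload, but the threshold values are illustrative assumptions, not recommended policy.

```python
# Illustrative per-category severity thresholds (0-7 scale); tune per policy.
DEFAULT_THRESHOLDS = {"Hate": 2, "Violence": 2, "Sexual": 2, "SelfHarm": 0}

def rejected_categories(analysis, thresholds=DEFAULT_THRESHOLDS):
    """Return the categories whose detected severity exceeds its threshold.

    `analysis` is the dict form of an analyze_text response, e.g.
    {"categoriesAnalysis": [{"category": "Hate", "severity": 4}, ...]}.
    """
    return [
        item["category"]
        for item in analysis.get("categoriesAnalysis", [])
        if item["severity"] > thresholds.get(item["category"], 0)
    ]
```

A moderation gate would then block content whenever `rejected_categories(...)` is non-empty.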

Implementation

64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid API reference skill with excellent, executable code examples covering the main Azure AI Content Safety SDK features. Its main weaknesses are the lack of error handling/validation workflows for API operations, some unnecessary explanatory content (harm category descriptions, generic best practices), and a monolithic structure that could benefit from splitting reference material into separate files.

Suggestions

Add error handling examples for common failure modes (authentication errors, rate limits, invalid input) with a validate-and-retry pattern

Remove the Harm Categories description table and the 'When to Use' section — Claude already understands these concepts and the latter adds no information

Move the reference tables (Severity Scale, Client Types, Harm Categories) to a separate REFERENCE.md file and link to it from the main skill
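The validate-and-retry pattern the first suggestion describes might look like the following sketch. It deliberately avoids importing the SDK: in real use `operation` would wrap `client.analyze_text(...)` and the caught exception would be `azure.core.exceptions.HttpResponseError`; here the helper only assumes the raised error carries a numeric `status_code` attribute.

```python
import time

def call_with_retry(operation, max_attempts=3, backoff_s=1.0):
    """Retry transient failures (429 and 5xx); re-raise everything else.

    Authentication errors (401/403) and invalid input (400) are not
    retryable, so they surface immediately for the caller to fix.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception as err:
            status = getattr(err, "status_code", None)
            retryable = status == 429 or (status is not None and status >= 500)
            if not retryable or attempt == max_attempts:
                raise
            time.sleep(backoff_s * attempt)  # linear backoff between attempts
```

Distinguishing retryable from non-retryable status codes is the "validate" half of the pattern; without it, a bad API key would be retried pointlessly until the attempt budget ran out.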

Dimension / Reasoning / Score

Conciseness

The content is mostly efficient with good code examples, but it includes some unnecessary elements: the Harm Categories description table (Claude already knows what hate speech and violence are), a 'When to Use' section that is a meaningless tautology, and a Best Practices section of generic advice that adds little value.

2 / 3

Actionability

All code examples are fully executable, copy-paste ready with correct imports, proper client initialization, and realistic usage patterns. The examples cover authentication, text analysis, image analysis, blocklist management, and severity configuration with complete, runnable code.

3 / 3

Workflow Clarity

The skill presents individual API operations clearly but lacks workflow sequencing for multi-step processes like blocklist creation → adding items → analyzing with blocklist. There are no validation checkpoints or error handling patterns for API calls that could fail (e.g., invalid credentials, rate limiting, malformed content).

2 / 3

Progressive Disclosure

The content is reasonably well-structured with clear section headers, but it's a monolithic document (~180 lines) where the blocklist management section and reference tables could be split into separate files. No references to external documentation or supplementary files are provided.

2 / 3

Total: 9 / 12 (Passed)
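The multi-step sequencing the Workflow Clarity row flags (blocklist creation → adding items → analyzing with the blocklist) can be sketched as a single function. This is a hedged sketch: the method names follow the azure-ai-contentsafety 1.0 `BlocklistClient`/`ContentSafetyClient` surface and the dict request bodies mirror the REST payloads, while the clients are injected so the ordering can be exercised without Azure credentials.

```python
def screen_with_blocklist(blocklist_client, safety_client, name, banned_terms, text):
    """Run the three-step blocklist workflow in the required order."""
    # 1. Create (or update) the blocklist -- idempotent, safe to repeat.
    blocklist_client.create_or_update_text_blocklist(
        blocklist_name=name,
        options={"description": "custom banned terms"},
    )
    # 2. Add the banned terms as blocklist items.
    blocklist_client.add_or_update_blocklist_items(
        blocklist_name=name,
        options={"blocklistItems": [{"text": term} for term in banned_terms]},
    )
    # 3. Analyze the text with the blocklist attached.
    return safety_client.analyze_text(
        {"text": text, "blocklistNames": [name], "haltOnBlocklistHit": True}
    )
```

Injecting the clients also makes the ordering testable with fakes, which is one way a skill could add the validation checkpoints this review asks for.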

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 10 / 11 (Passed)

Repository: boisenoise/skills-collections (Reviewed)


Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.