Analyze text and images for harmful content using Azure AI Content Safety (@azure-rest/ai-content-safety). Use when moderating user-generated content, detecting hate speech, violence, sexual conten...
Install with Tessl CLI
npx tessl i github:boisenoise/skills-collections --skill azure-ai-contentsafety-ts92
Quality
89%
Does it follow best practices?
Impact
100%
1.31x
Average score across 3 eval scenarios
Discovery
100%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that follows best practices. It clearly specifies the capability (content safety analysis), the technology stack (Azure AI Content Safety SDK), and explicit trigger conditions with natural user terms like 'hate speech', 'violence', and 'moderating'. The description appears truncated but contains all essential elements within the visible portion.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'Analyze text and images for harmful content', 'moderating user-generated content', 'detecting hate speech, violence, sexual content'. Includes the specific SDK package name (@azure-rest/ai-content-safety). | 3 / 3 |
| Completeness | Clearly answers both what (analyze text and images for harmful content using Azure AI Content Safety) AND when (use when moderating user-generated content, detecting hate speech, violence, sexual content). Has an explicit 'Use when...' clause with trigger scenarios. | 3 / 3 |
| Trigger Term Quality | Includes natural keywords users would say: 'harmful content', 'moderating', 'user-generated content', 'hate speech', 'violence', 'sexual content', 'Azure AI Content Safety'. These are terms users would naturally use when needing content moderation. | 3 / 3 |
| Distinctiveness / Conflict Risk | A clear niche focused on Azure AI Content Safety specifically for content moderation. The combination of Azure-specific tooling, content-safety focus, and specific harmful-content categories makes it highly distinctive and unlikely to conflict with other skills. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation
79%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a high-quality SDK reference skill with excellent actionability and conciseness. The code examples are complete, properly typed, and immediately usable. The main weaknesses are the lack of explicit workflow sequences for multi-step operations (such as setting up a complete moderation pipeline) and the monolithic structure, which would benefit from progressive disclosure to separate out reference material.
Suggestions
- Add a 'Quick Start Workflow' section showing the complete sequence: authenticate → analyze → handle results → log decisions, with explicit validation checkpoints
- Split the API Endpoints table and Key Types section into a separate REFERENCE.md file, keeping SKILL.md focused on common usage patterns
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient, providing executable code examples without explaining basic concepts Claude already knows. Every section serves a purpose with minimal padding. | 3 / 3 |
| Actionability | All code examples are fully executable and copy-paste ready, with proper imports, error-handling patterns, and complete function signatures. The helper function is production-ready. | 3 / 3 |
| Workflow Clarity | While individual operations are clear, there is no explicit workflow sequence for common use cases such as 'set up a moderation pipeline', and no validation checkpoints. The blocklist-management section lists operations but does not sequence them with verification steps. | 2 / 3 |
| Progressive Disclosure | Content is well organized with clear sections, but it is a monolithic document (~250 lines) that would benefit from splitting the API reference tables and the helper function into separate files with clear navigation links. | 2 / 3 |
| Total | | 10 / 12 Passed |
Validation
90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
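To clear the frontmatter_unknown_keys warning, unrecognized frontmatter keys can typically be removed or nested under a metadata block, as the validator suggests. A hypothetical sketch (the `license` key and skill name are illustrative, not taken from the actual SKILL.md):

```yaml
# Before: `license` is an unknown top-level key that triggers the warning.
# name: azure-ai-contentsafety-ts92
# license: MIT

# After: the unknown key is moved under metadata.
name: azure-ai-contentsafety-ts92
metadata:
  license: MIT
```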