
azure-ai-contentsafety-ts

Analyze text and images for harmful content using Azure AI Content Safety (@azure-rest/ai-content-safety). Use when moderating user-generated content, detecting hate speech, violence, sexual conten...

Install with Tessl CLI

npx tessl i github:sickn33/antigravity-awesome-skills --skill azure-ai-contentsafety-ts

Overall score: 91

Does it follow best practices?

Validation for skill structure


Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description: it clearly identifies the specific Azure service, lists concrete moderation capabilities, and provides explicit trigger scenarios in natural user language. The description uses proper third-person voice and includes both technical identifiers (the SDK package name) and user-friendly terms. The only minor issue is the truncation indicated by '...', but the visible content is well-structured.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific, concrete actions: 'Analyze text and images for harmful content', 'moderating user-generated content', 'detecting hate speech, violence, sexual content'. Includes the specific SDK package name (@azure-rest/ai-content-safety). | 3 / 3 |
| Completeness | Clearly answers both what ('Analyze text and images for harmful content using Azure AI Content Safety') and when ('Use when moderating user-generated content, detecting hate speech, violence, sexual content...'). Has an explicit 'Use when' clause with trigger scenarios. | 3 / 3 |
| Trigger Term Quality | Includes natural keywords users would say: 'harmful content', 'moderating', 'user-generated content', 'hate speech', 'violence', 'sexual content', 'Azure AI Content Safety'. These are terms users would naturally use when needing content moderation. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche: Azure-specific content safety and moderation. The combination of Azure AI Content Safety, specific harmful-content types, and moderation context makes it unlikely to conflict with other skills. | 3 / 3 |
| Total | | 12 / 12 |

Passed

Implementation

79%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-crafted SDK reference skill with excellent actionability and conciseness. All code examples are executable and properly typed. The main weaknesses are the lack of explicit workflow validation steps for multi-step operations and the monolithic structure that could benefit from progressive disclosure to separate reference material.
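To illustrate the kind of decision logic such a moderation skill typically wraps around analyze results, here is a minimal sketch. The response shape (category/severity pairs) mirrors what Azure AI Content Safety's text analysis returns, but the type names, thresholds, and `decideAction` helper are assumptions for illustration, not part of the skill under review.

```typescript
// Hypothetical decision helper: maps analyze results to a moderation action.
// The CategoryAnalysis shape mirrors Azure AI Content Safety's category/severity
// pairs; the thresholds below are illustrative assumptions.
type CategoryAnalysis = { category: string; severity: number };

type ModerationAction = "allow" | "review" | "block";

function decideAction(
  categories: CategoryAnalysis[],
  reviewThreshold = 2,
  blockThreshold = 4,
): ModerationAction {
  // Act on the worst severity across all returned categories.
  const maxSeverity = categories.reduce((max, c) => Math.max(max, c.severity), 0);
  if (maxSeverity >= blockThreshold) return "block";
  if (maxSeverity >= reviewThreshold) return "review";
  return "allow";
}
```

Under these assumed thresholds, `decideAction([{ category: "Hate", severity: 4 }])` returns `"block"`, while an empty result array returns `"allow"`.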

Suggestions

- Add explicit workflow sequences for common multi-step operations (e.g., 'Setting up content moderation: 1. Create blocklist 2. Verify creation 3. Add items 4. Verify items 5. Test with analyze').
- Split the API Endpoints table and Key Types section into a separate REFERENCE.md file with a clear link from the main skill.
- Add validation/verification steps after blocklist operations (e.g., 'After creating, verify with GET /text/blocklists/{name}').
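The suggested create-verify-add-verify sequence can be sketched as follows. To keep the control flow visible without depending on the real @azure-rest/ai-content-safety surface, this is written against a minimal hypothetical `BlocklistClient` interface; the method names are assumptions, not the SDK's actual API.

```typescript
// A minimal, hypothetical client interface standing in for the real
// @azure-rest/ai-content-safety surface; method names are assumptions.
interface BlocklistClient {
  createBlocklist(name: string): Promise<void>;
  getBlocklist(name: string): Promise<{ name: string } | undefined>;
  addItems(name: string, items: string[]): Promise<void>;
  listItems(name: string): Promise<string[]>;
}

// The suggested sequence: create, verify creation, add items, verify
// items. After this resolves, the blocklist is ready to use with analyze.
async function setUpBlocklist(
  client: BlocklistClient,
  name: string,
  items: string[],
): Promise<void> {
  await client.createBlocklist(name);
  if (!(await client.getBlocklist(name))) {
    throw new Error(`blocklist ${name} was not created`);
  }
  await client.addItems(name, items);
  const stored = await client.listItems(name);
  const missing = items.filter((i) => !stored.includes(i));
  if (missing.length > 0) {
    throw new Error(`items not stored: ${missing.join(", ")}`);
  }
}
```

Because each step is verified before the next, a failed creation or a dropped item surfaces immediately instead of as a silent gap in later analyze calls.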

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The content is lean and efficient, providing executable code examples without explaining basic concepts Claude already knows. Every section serves a purpose with minimal padding. | 3 / 3 |
| Actionability | All code examples are fully executable and copy-paste ready, with proper imports, error-handling patterns, and complete function signatures. The helper function is production-ready. | 3 / 3 |
| Workflow Clarity | While individual operations are clear, the skill lacks explicit validation checkpoints for multi-step workflows like blocklist management (create -> add items -> use). No feedback loops for error recovery are provided. | 2 / 3 |
| Progressive Disclosure | Content is well organized with clear sections, but it is a monolithic document (~250 lines) that could benefit from splitting the API reference tables and the helper function into separate files with clear navigation links. | 2 / 3 |
| Total | | 10 / 12 |

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed


| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 |

Passed
