
azure-ai-contentsafety-ts

Analyze text and images for harmful content with customizable blocklists.


Quality: 48%. Does it follow best practices?

Impact: Pending. No eval scenarios have been run.

Security (by Snyk): Advisory. Suggest reviewing before use.

Optimize this skill with Tessl:

```shell
npx tessl skill review --optimize ./skills/antigravity-azure-ai-contentsafety-ts/SKILL.md
```

Quality

Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description conveys the core purpose of content moderation with blocklists but is too brief and lacks explicit trigger guidance. It would benefit from a 'Use when...' clause and more specific capability details to help Claude distinguish it from general text/image analysis skills.

Suggestions

- Add a 'Use when...' clause with trigger terms like 'content moderation', 'filter harmful content', 'NSFW detection', 'profanity filter', or 'content safety'.
- List more specific concrete actions such as 'flag toxic language, detect NSFW images, categorize content violations, apply custom blocklists and allowlists'.
- Include common user-facing terms like 'moderation', 'content safety', 'toxic', 'offensive', 'NSFW' to improve trigger term coverage.
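Taken together, these suggestions could be folded into a single revised description. A sketch of what the SKILL.md frontmatter might look like (the wording is illustrative, not the skill's actual metadata):

```yaml
---
name: azure-ai-contentsafety-ts
description: >
  Analyze text and images for harmful content (hate, self-harm, sexual,
  violence) with the Azure AI Content Safety TypeScript SDK. Flag toxic or
  offensive language, detect NSFW images, and apply custom blocklists and
  allowlists for content moderation. Use when the user asks about content
  moderation, content safety, profanity filtering, NSFW detection, or
  filtering harmful content.
---
```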

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (content moderation) and two actions (analyze text/images, customizable blocklists), but doesn't list multiple specific concrete actions like flagging, filtering, reporting, or categorizing harmful content types. | 2 / 3 |
| Completeness | Describes what the skill does but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, a missing 'Use when...' clause caps completeness at 2, and since the 'when' is entirely absent, this scores a 1. | 1 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'harmful content', 'blocklists', 'text', and 'images', but misses common user terms like 'moderation', 'content safety', 'NSFW', 'filter', 'profanity', or 'toxic content'. | 2 / 3 |
| Distinctiveness Conflict Risk | The mention of 'harmful content' and 'blocklists' provides some specificity to content moderation, but 'analyze text and images' is broad enough to overlap with general text analysis or image analysis skills. | 2 / 3 |

Total: 7 / 12 (Passed)

Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid API reference skill with excellent actionability — every operation has complete, executable TypeScript examples with proper imports and error handling. However, it's somewhat verbose for a skill file, including explanatory tables and generic best practices that don't add value for Claude. The content would benefit from tighter organization with progressive disclosure to external files for reference material.

Suggestions

- Remove the boilerplate 'When to Use' and 'Limitations' sections, the generic 'Best Practices' list, and the Harm Categories description column — these don't add actionable information Claude doesn't already know.
- Move the API Endpoints table, Key Types section, and Severity Levels reference into a separate REFERENCE.md file linked from the main skill.
- Add a brief workflow section showing the recommended sequence for setting up content moderation end-to-end (create client → optionally create blocklist → add items → analyze content), with explicit verification steps between blocklist creation and usage.
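The suggested end-to-end sequence could be sketched roughly as follows. This sketch targets the raw Content Safety REST endpoints so it stays dependency-free (the skill's own examples use the `@azure-rest/ai-content-safety` client instead); the endpoint, key, blocklist name, item text, and the `api-version` value are assumptions to adapt to your resource.

```typescript
// Illustrative api-version; check your resource's supported versions.
const API_VERSION = "2023-10-01";

// Small helper: call one Content Safety REST endpoint and verify it succeeded
// before the workflow moves on to the next step.
async function call(
  endpoint: string,
  key: string,
  path: string,
  method: string,
  body: unknown,
  contentType = "application/json",
): Promise<unknown> {
  const res = await fetch(`${endpoint}/contentsafety${path}?api-version=${API_VERSION}`, {
    method,
    headers: { "Ocp-Apim-Subscription-Key": key, "Content-Type": contentType },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`${method} ${path} failed with HTTP ${res.status}`);
  return res.json();
}

async function setUpAndModerate(endpoint: string, key: string, text: string) {
  const blocklist = "my-terms"; // hypothetical blocklist name

  // 1. Create (or update) the blocklist; `call` throws if this step fails,
  //    so we never add items to a blocklist that wasn't created.
  await call(endpoint, key, `/text/blocklists/${blocklist}`, "PATCH",
    { description: "Custom blocked terms" }, "application/merge-patch+json");

  // 2. Add items to the verified blocklist.
  await call(endpoint, key, `/text/blocklists/${blocklist}:addOrUpdateBlocklistItems`, "POST",
    { blocklistItems: [{ text: "someblockedterm" }] });

  // 3. Analyze the text against the built-in harm categories plus the blocklist.
  return call(endpoint, key, "/text:analyze", "POST",
    { text, blocklistNames: [blocklist], haltOnBlocklistHit: true });
}
```

Each step only runs after the previous one has been confirmed, which is the kind of explicit verification the suggestion above asks for.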

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The content is mostly efficient with good code examples, but includes unnecessary material: the 'Best Practices' tips are generic advice Claude already knows (e.g., 'log moderation decisions', 'handle edge cases'), the boilerplate 'When to Use' and 'Limitations' sections add no value, and the Harm Categories description table explains concepts that are self-evident from the API terms. | 2 / 3 |
| Actionability | All code examples are fully executable TypeScript with proper imports, concrete API paths, and real parameter structures. The examples cover authentication, text analysis, image analysis, blocklist CRUD operations, and even a complete moderation helper function — all copy-paste ready. | 3 / 3 |
| Workflow Clarity | Individual operations are clear, but there's no explicit workflow sequencing for multi-step processes like setting up blocklists and then using them for moderation. The error handling pattern (isUnexpected check) is consistently shown, but there are no validation checkpoints or feedback loops for operations like blocklist management, where you'd want to verify creation before adding items. | 2 / 3 |
| Progressive Disclosure | The content is well-structured with clear headers and logical grouping, but it's a monolithic document (~200 lines) that could benefit from splitting the blocklist management and the moderation helper into separate referenced files. The API endpoints table and key types section could also be external references to keep the main skill lean. | 2 / 3 |

Total: 9 / 12 (Passed)
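The "moderation helper function" the review praises is not reproduced on this page, but the decision step it would perform can be sketched as a pure function. The category names and the 0–7 severity scale come from the Content Safety API; the default thresholds below are illustrative assumptions, not the skill's actual values.

```typescript
// Shape of one entry in the `categoriesAnalysis` array returned by text/image analysis.
type CategoryAnalysis = { category: string; severity: number };

// Hypothetical helper: decide whether content should be blocked by comparing
// each category's severity (0-7 scale) against a per-category threshold.
function shouldBlock(
  analysis: CategoryAnalysis[],
  thresholds: Record<string, number> = { Hate: 4, SelfHarm: 4, Sexual: 4, Violence: 4 },
): { blocked: boolean; reasons: string[] } {
  const reasons = analysis
    .filter((a) => a.severity >= (thresholds[a.category] ?? 4))
    .map((a) => `${a.category} severity ${a.severity}`);
  return { blocked: reasons.length > 0, reasons };
}
```

Keeping the threshold logic in a pure function like this makes the moderation decision testable without calling the service, which also addresses the missing "validation checkpoints" noted above.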

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |

Total: 10 / 11 (Passed)

Repository: boisenoise/skills-collections (Reviewed)

