
azure-ai-contentsafety-ts

Analyze text and images for harmful content with customizable blocklists.


Quality: 48% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Advisory (Suggest reviewing before use)

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/antigravity-azure-ai-contentsafety-ts/SKILL.md

Quality

Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description conveys the core purpose of content moderation with blocklists but lacks a 'Use when...' clause, which is critical for skill selection. It would benefit from more specific action verbs, explicit trigger terms users would naturally use, and clearer guidance on when this skill should be activated.

Suggestions

- Add a 'Use when...' clause with trigger terms like 'content moderation', 'filter harmful content', 'NSFW detection', 'profanity filter', or 'content safety'.
- List more specific concrete actions such as 'flag toxic language, detect NSFW images, categorize content violations, apply custom blocklists and allowlists'.
- Include common user-facing synonyms and file/content types to improve trigger term coverage, e.g., 'moderation', 'toxic', 'offensive', 'inappropriate content'.
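Applied together, these suggestions might yield frontmatter along the following lines. This is an illustrative sketch only, not the skill's actual SKILL.md:

```yaml
# Hypothetical frontmatter showing the suggested description shape
name: azure-ai-contentsafety-ts
description: >
  Analyze text and images for harmful content with Azure AI Content Safety:
  flag toxic or offensive language, detect NSFW images, categorize content
  violations, and apply custom blocklists. Use when the user asks for content
  moderation, a profanity filter, NSFW detection, or content safety checks.
```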

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (content moderation) and two actions (analyze text/images, customizable blocklists), but doesn't list multiple specific concrete actions like flagging, filtering, reporting, or categorizing harmful content types. | 2 / 3 |
| Completeness | Describes what the skill does but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, missing 'Use when' caps completeness at 2, and the 'what' is also only moderately detailed, warranting a 1. | 1 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'harmful content', 'blocklists', 'text', and 'images', but misses common user terms like 'moderation', 'content safety', 'NSFW', 'filter', 'profanity', or 'toxic content'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of 'harmful content' analysis with 'customizable blocklists' is somewhat distinctive, but 'analyze text and images' is broad enough to overlap with general text/image analysis skills. | 2 / 3 |

Total: 7 / 12 (Passed)

Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid API reference skill with excellent actionability—every code example is executable and well-typed. However, it's somewhat verbose for a skill file, including explanatory tables and generic best practices that don't add value for Claude. The content would benefit from splitting into a concise overview with references to detailed sub-files for blocklist management and the helper utility.

Suggestions

- Remove the 'When to Use' placeholder section and trim the 'Best Practices' to only non-obvious, SDK-specific guidance (e.g., keep the isUnexpected() tip, drop 'Handle edge cases').
- Extract the Blocklist Management section and Content Moderation Helper into separate referenced files (e.g., BLOCKLISTS.md, MODERATION_HELPER.md) to keep SKILL.md as a concise overview.
- Add an explicit sequential workflow for the blocklist setup process: create → verify creation → add items → verify items → use in analysis.
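The suggested create → verify → add items → verify → analyze sequence could be sketched as one function. This is a hedged sketch, not the skill's actual content: the route strings follow the Content Safety REST API as exposed by `@azure-rest/ai-content-safety`, and the minimal `RestClient` interface stands in for the client that `ContentSafetyClient(endpoint, credential)` would return — verify both against the SDK before relying on them.

```typescript
// Minimal slice of the REST-style client surface this sketch assumes.
interface RestResponse {
  status: string;
  body: any;
}

interface RestRoute {
  patch(opts: { contentType: string; body: unknown }): Promise<RestResponse>;
  post(opts: { body: unknown }): Promise<RestResponse>;
  get(): Promise<RestResponse>;
}

// Stand-in for the client returned by @azure-rest/ai-content-safety.
interface RestClient {
  path(route: string, ...args: string[]): RestRoute;
}

// Stand-in for the SDK's isUnexpected() helper: fail fast on non-2xx responses.
function assertOk(step: string, res: RestResponse): void {
  if (!res.status.startsWith("2")) {
    throw new Error(`${step} failed with status ${res.status}`);
  }
}

async function analyzeWithBlocklist(
  client: RestClient,
  blocklistName: string,
  items: string[],
  text: string
): Promise<any> {
  // 1. Create (or update) the blocklist and verify creation succeeded.
  assertOk(
    "create blocklist",
    await client.path("/text/blocklists/{blocklistName}", blocklistName).patch({
      contentType: "application/merge-patch+json",
      body: { description: "Custom blocked terms" },
    })
  );

  // 2. Add items and verify they were accepted.
  assertOk(
    "add items",
    await client
      .path("/text/blocklists/{blocklistName}:addOrUpdateBlocklistItems", blocklistName)
      .post({ body: { blocklistItems: items.map((t) => ({ text: t })) } })
  );

  // 3. Confirm the items are present before relying on them.
  assertOk(
    "verify items",
    await client
      .path("/text/blocklists/{blocklistName}/blocklistItems", blocklistName)
      .get()
  );

  // 4. Analyze the text with the blocklist applied.
  const analysis = await client
    .path("/text:analyze")
    .post({ body: { text, blocklistNames: [blocklistName] } });
  assertOk("analyze", analysis);
  return analysis.body;
}
```

Threading `assertOk` between every step is what makes the workflow sequential rather than fire-and-forget: a failed creation stops the run before items are added against a missing blocklist.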

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is mostly efficient with good code examples, but includes some unnecessary content like the harm categories description table (Claude knows what these mean), the 'Best Practices' section with generic advice (e.g., 'Handle edge cases'), and the meaningless 'When to Use' section. The content could be tightened by ~20-30%. | 2 / 3 |
| Actionability | All code examples are fully executable TypeScript with proper imports, concrete API paths, and real parameter structures. The authentication setup, text/image analysis, blocklist management, and the moderation helper function are all copy-paste ready with correct types. | 3 / 3 |
| Workflow Clarity | Individual operations are clear, but the blocklist management workflow (create → add items → analyze with blocklist) lacks explicit sequencing as a connected workflow. Error handling is present via isUnexpected() checks, but there's no validation/verification guidance for blocklist operations (e.g., confirming items were added before analyzing). | 2 / 3 |
| Progressive Disclosure | The content is well-structured with clear section headers, but it's a long monolithic file (~230 lines) where the blocklist management section, API endpoints table, and the moderation helper could be split into separate reference files. No external file references are used despite the content length warranting it. | 2 / 3 |

Total: 9 / 12 (Passed)
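The moderation helper the review scores is not reproduced on this page. Purely as an illustration of that kind of helper, a threshold-style decision function over per-category severities might look like the following — the 0-7 scale matches the Content Safety text API's full severity scale, but the result shape and threshold map here are simplifying assumptions, not the SDK's types:

```typescript
// Simplified per-category result shape (an assumption, not the SDK's exact type).
interface CategoryResult {
  category: "Hate" | "SelfHarm" | "Sexual" | "Violence";
  severity: number; // 0-7 on the text API's full severity scale
}

// Allow content only if every category is at or below its threshold.
// Categories missing from the threshold map default to 0 (strictest).
function isAllowed(
  results: CategoryResult[],
  thresholds: Partial<Record<CategoryResult["category"], number>>
): boolean {
  return results.every((r) => r.severity <= (thresholds[r.category] ?? 0));
}
```

For example, `isAllowed([{ category: "Hate", severity: 2 }], { Hate: 4 })` returns `true`, while a severity of 6 against the same threshold returns `false`.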

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 checks passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |

Total: 10 / 11 (Passed)

Repository: boisenoise/skills-collections (Reviewed)

