Implement content policy compliance for Kling AI prompts and outputs. Use when filtering user prompts or handling moderation. Trigger with phrases like 'klingai content policy', 'kling ai moderation', 'safe video generation', 'klingai content filter'.
Evals: Pending — no eval scenarios have been run.
Issues: Passed — no known issues.
To optimize this skill with Tessl, run:

`npx tessl skill review --optimize ./plugins/saas-packs/klingai-pack/skills/klingai-content-policy/SKILL.md`

Quality
Discovery — 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a solid description with clear 'when' triggers and a distinct niche around Kling AI content policy. Its main weakness is that the 'what' could be more specific about the concrete actions performed (e.g., blocking prohibited categories, returning safe alternatives, logging violations). The trigger terms are well-chosen and the description is unlikely to conflict with other skills.
Suggestions
- Add more specific concrete actions, e.g., 'Blocks prohibited content categories, flags unsafe prompts, returns policy violation explanations, and suggests safe alternatives for Kling AI video generation.'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Kling AI content policy) and some actions ('filtering user prompts', 'handling moderation'), but doesn't list specific concrete actions like blocking categories, flagging outputs, or returning policy violation messages. | 2 / 3 |
| Completeness | Clearly answers both 'what' (implement content policy compliance for Kling AI prompts and outputs) and 'when' (use when filtering user prompts or handling moderation), with explicit trigger phrases provided. | 3 / 3 |
| Trigger Term Quality | Includes natural trigger terms like 'klingai content policy', 'kling ai moderation', 'safe video generation', 'klingai content filter', plus general terms like 'filtering user prompts' and 'moderation' that users would naturally say. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive due to the specific 'Kling AI' branding and the narrow focus on content policy compliance for that platform. Unlikely to conflict with generic moderation or other AI platform skills. | 3 / 3 |
| Total | | 11 / 12 — Passed |
Implementation — 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid, actionable skill with executable Python code that covers pre-submission filtering, safe defaults, client integration, and server-side rejection handling. Its main weaknesses are the lack of an explicit end-to-end workflow with validation checkpoints and some content that could be more concisely organized or split into referenced files. The user-facing guidelines section adds relatively little value for Claude.
Suggestions
- Add an explicit numbered workflow showing the full sequence: pre-filter → submit → poll status → handle rejection → retry with revised prompt, including validation checkpoints
- Move the restricted content categories table and blocked patterns/terms into a referenced file (e.g., BLOCKED_CONTENT.md) to keep the main skill leaner
- Remove or significantly trim the 'User-Facing Guidelines' section, as it contains generic advice Claude already knows
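The suggested end-to-end sequence could be sketched roughly as below. The `PromptFilter` and client names are taken from the review's description of the skill; the blocked patterns, method signatures, polling behavior, and retry logic here are illustrative assumptions, not the skill's actual implementation.

```python
import re

# Hypothetical pre-filter, modeled on the PromptFilter class the review mentions.
class PromptFilter:
    BLOCKED_PATTERNS = [r"\bviolence\b", r"\bgore\b"]  # illustrative, not the skill's real list

    def check(self, prompt: str):
        """Return (ok, reason); reason explains which pattern blocked the prompt."""
        for pattern in self.BLOCKED_PATTERNS:
            if re.search(pattern, prompt, re.IGNORECASE):
                return False, f"blocked by pattern: {pattern}"
        return True, ""

def generate_safely(client, prompt: str, revise, max_retries: int = 1):
    """Workflow: pre-filter -> submit -> poll -> handle rejection -> retry with revision."""
    filt = PromptFilter()
    for _attempt in range(max_retries + 1):
        ok, reason = filt.check(prompt)               # checkpoint 1: local pre-filter
        if not ok:
            prompt = revise(prompt, reason)           # revise and re-check locally
            continue
        task_id = client.submit(prompt)               # checkpoint 2: submit to the API
        status = client.poll(task_id)                 # checkpoint 3: poll to a terminal status
        if status == "succeed":
            return task_id
        prompt = revise(prompt, "server-side rejection")  # checkpoint 4: server rejected anyway
    raise RuntimeError("prompt could not be made policy-compliant")
```

The key point the review raises is checkpoint 4: a prompt can pass the local filter and still be rejected server-side, so the loop must feed that outcome back into prompt revision rather than stopping at the pre-filter.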
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is mostly efficient but includes some unnecessary elements. The restricted content categories table is useful, but the user-facing guidelines section at the end is somewhat generic advice Claude already knows. The code examples are reasonably tight but could be slightly more compact. | 2 / 3 |
| Actionability | The skill provides fully executable, copy-paste ready Python code including a PromptFilter class, safe request builder, SafeKlingClient wrapper, and server-side rejection handler. All code is concrete, with specific patterns, terms, and integration examples. | 3 / 3 |
| Workflow Clarity | The workflow is implicitly clear (filter → submit → handle rejection), but there's no explicit sequenced workflow with validation checkpoints. The user-facing guidelines list steps, but they're advisory rather than a concrete process. Missing an explicit feedback loop for when server-side rejection occurs despite the prompt passing the pre-filter. | 2 / 3 |
| Progressive Disclosure | The content is well-structured, with clear section headers and a logical progression from categories to filtering to integration. However, at ~120 lines of code-heavy content, some sections (like the full PromptFilter class or the restricted categories table) could be split into referenced files. The resources section at the end is a good touch but minimal. | 2 / 3 |
| Total | | 9 / 12 — Passed |
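The feedback loop flagged under Workflow Clarity could look roughly like the handler below. The status values and field names are assumptions about the API response shape, not Kling AI's documented contract, and the suggestion text is illustrative.

```python
def handle_rejection(response: dict) -> dict:
    """Turn a hypothetical server-side moderation failure into an actionable result."""
    if response.get("task_status") != "failed":
        return {"action": "none"}
    # Surface the server's stated reason so the caller can explain the violation.
    reason = response.get("task_status_msg", "unspecified policy violation")
    return {
        "action": "revise_prompt",
        "explanation": reason,
        "suggestion": "Remove flagged terms and describe the scene in neutral language.",
    }
```

Returning a structured result (rather than raising) lets the calling workflow decide whether to retry with a revised prompt, log the violation, or give up.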
Validation — 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 — Passed |
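The two warnings above could be caught locally with a lint pass along these lines. The allowed key set and tool names are assumptions for illustration, not the actual skill-spec schema.

```python
KNOWN_KEYS = {"name", "description", "allowed-tools", "metadata"}  # assumed spec keys
USUAL_TOOLS = {"Read", "Write", "Bash", "Grep"}                    # assumed tool names

def lint_frontmatter(fm: dict) -> list:
    """Flag unknown frontmatter keys and unusual entries in 'allowed-tools'."""
    warnings = []
    for key in fm:
        if key not in KNOWN_KEYS:
            warnings.append(f"unknown frontmatter key: {key}")
    for tool in fm.get("allowed-tools", []):
        if tool not in USUAL_TOOLS:
            warnings.append(f"unusual tool name: {tool}")
    return warnings
```

Running a check like this before publishing would surface both warnings without a round-trip through the review service.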
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.