
klingai-content-policy

tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill klingai-content-policy

Implement content policy compliance for Kling AI. Use when ensuring generated content meets guidelines or filtering inappropriate prompts. Trigger with phrases like 'klingai content policy', 'kling ai moderation', 'safe video generation', 'klingai content filter'.

Overall: 58%


Validation: 81%

Criteria and results:
allowed_tools_field: 'allowed-tools' contains unusual tool name(s) (Warning)
metadata_version: 'metadata' field is not a dictionary (Warning)
frontmatter_unknown_keys: Unknown frontmatter key(s) found; consider removing or moving to metadata (Warning)

Total: 13 / 16 (Passed)

Implementation: 22%

This skill is essentially a skeleton with no actionable content. It describes what content policy compliance involves at a high level but provides zero concrete implementation guidance, code examples, or specific techniques for prompt filtering or moderation. The instructions read like a table of contents rather than executable guidance.

Suggestions

Add concrete code examples for prompt filtering (e.g., regex patterns, keyword lists, or API calls to moderation services); a minimal sketch along these lines follows these suggestions

Include specific examples of policy-violating prompts and how to detect/handle them programmatically

Provide executable code for integrating with Kling AI's content policy checks or third-party moderation APIs

Add validation steps showing how to verify content compliance before and after generation
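
To make the first two suggestions concrete, here is a minimal sketch of a local prompt pre-filter in Python. The category names, regex patterns, and the check_prompt helper are illustrative assumptions for this review, not Kling AI's actual policy rules or API.

```python
import re

# Illustrative categories and patterns only; a real filter would be derived
# from Kling AI's published content policy rather than this hand-picked list.
BLOCKED_PATTERNS = {
    "graphic_violence": re.compile(r"\b(gore|beheading|torture)\b", re.IGNORECASE),
    "sexual_content": re.compile(r"\b(nude|explicit|nsfw)\b", re.IGNORECASE),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the policy categories the prompt appears to violate."""
    return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]

violations = check_prompt("generate an explicit video with graphic gore")
if violations:
    print(f"Prompt rejected by local pre-filter: {violations}")
else:
    print("Prompt passed the local pre-filter; continue to generation")
```

A keyword pass like this is only a cheap first gate; it would normally sit in front of a proper moderation API rather than replace one.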

Dimension scores:
Conciseness (2 / 3): The content is relatively brief but includes some unnecessary padding like 'This skill teaches how to implement' and generic prerequisites. The actual actionable content is thin relative to the framing.
Actionability (1 / 3): The instructions are entirely abstract ('Review Policies', 'Implement Filters', 'Add Moderation') with no concrete code, commands, or specific implementation details. There's nothing executable or copy-paste ready.
Workflow Clarity (1 / 3): Steps are vague placeholders without any validation checkpoints. For content moderation involving potentially risky operations, there are no feedback loops or concrete verification steps.
Progressive Disclosure (2 / 3): References to external files (errors.md, examples.md) are present and one level deep, but the main content is so thin that it's unclear what value this file provides as an overview. The references use template variables that may not resolve.

Total: 6 / 12 (Passed)
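
The validation gap called out under Workflow Clarity (no verification before or after generation) and in the third and fourth suggestions could be closed with a check at both ends of the pipeline. A sketch, assuming the OpenAI moderation endpoint as the third-party service and a placeholder generate_video function standing in for the actual Kling AI call; neither of these is defined by the skill itself.

```python
from dataclasses import dataclass
from openai import OpenAI  # one possible third-party moderation service

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

@dataclass
class GeneratedVideo:
    url: str
    caption: str

def generate_video(prompt: str) -> GeneratedVideo:
    # Placeholder for the real Kling AI generation call; not an actual API.
    return GeneratedVideo(url="https://example.com/video.mp4", caption=prompt)

def is_flagged(text: str) -> bool:
    """Ask the moderation endpoint whether the text violates its policies."""
    response = client.moderations.create(input=text)
    return response.results[0].flagged

def safe_generate(prompt: str) -> GeneratedVideo:
    # Checkpoint 1: verify the prompt before spending generation credits.
    if is_flagged(prompt):
        raise ValueError("Prompt failed pre-generation moderation")

    video = generate_video(prompt)

    # Checkpoint 2: verify the text that ships with the output.
    if is_flagged(video.caption):
        raise ValueError("Generated content failed post-generation moderation")
    return video
```

Each checkpoint gives the skill a concrete pass/fail signal, which is exactly the kind of feedback loop the dimension scores above say is missing.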

Activation: 90%

This is a well-structured skill description with strong trigger terms and clear when-to-use guidance. The main weakness is the somewhat vague capability description: it mentions 'ensuring guidelines' and 'filtering' but doesn't specify concrete actions such as blocking, flagging, or rewriting, or the types of content violations it handles.

Suggestions

Add specific concrete actions such as 'blocks violent content', 'flags policy violations', 'rewrites non-compliant prompts', or 'validates against NSFW guidelines' to improve specificity.

Dimension scores:
Specificity (2 / 3): Names the domain (content policy compliance for Kling AI) and mentions some actions (ensuring content meets guidelines, filtering inappropriate prompts), but lacks specific concrete actions like what filtering entails or what compliance checks are performed.
Completeness (3 / 3): Clearly answers both what (implement content policy compliance, ensure guidelines met, filter inappropriate prompts) and when (explicit 'Use when' clause and 'Trigger with phrases' providing clear activation guidance).
Trigger Term Quality (3 / 3): Includes good coverage of natural trigger terms: 'klingai content policy', 'kling ai moderation', 'safe video generation', 'klingai content filter'. These are terms users would naturally use when needing this functionality.
Distinctiveness / Conflict Risk (3 / 3): Highly distinctive with specific platform focus (Kling AI) and clear niche (content policy/moderation). The combination of 'klingai' with content moderation terms makes it unlikely to conflict with generic content moderation or other AI platform skills.

Total: 11 / 12 (Passed)

