
klingai-content-policy

Implement content policy compliance for Kling AI. Use when ensuring generated content meets guidelines or filtering inappropriate prompts. Trigger with phrases like 'klingai content policy', 'kling ai moderation', 'safe video generation', 'klingai content filter'.

Install with Tessl CLI

npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill klingai-content-policy

Does it follow best practices?

Validation for skill structure


Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-structured skill description with strong trigger terms and clear when-to-use guidance. The main weakness is the somewhat vague capability description: it mentions 'content policy compliance' and 'filtering' but doesn't specify concrete actions such as blocking, flagging, or rewriting, or which specific policy checks are performed.

Suggestions

Add specific concrete actions like 'blocks prohibited content categories', 'flags policy violations', or 'rewrites prompts to comply with guidelines' to improve specificity

Dimension scores:

Specificity: 2 / 3
Names the domain (content policy compliance for Kling AI) and mentions some actions (ensuring content meets guidelines, filtering inappropriate prompts), but lacks concrete specifics such as what filtering entails or which compliance checks are performed.

Completeness: 3 / 3
Clearly answers both what (implement content policy compliance, ensure guidelines are met, filter inappropriate prompts) and when (an explicit 'Use when' clause with trigger phrases). The 'Trigger with phrases like' list provides explicit guidance for skill selection.

Trigger Term Quality: 3 / 3
Includes good coverage of natural trigger terms: 'klingai content policy', 'kling ai moderation', 'safe video generation', 'klingai content filter'. These are specific phrases users would naturally use when they need this functionality.

Distinctiveness / Conflict Risk: 3 / 3
Highly distinctive, with Kling AI-specific terminology. The combination of 'klingai', 'kling ai', and the video-generation context creates a clear niche that is unlikely to conflict with generic content moderation skills or skills for other AI platforms.

Total: 11 / 12 (Passed)

Implementation: 22%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is essentially a placeholder that describes what content policy compliance involves without providing any actionable implementation guidance. It lacks executable code, concrete examples, specific API calls, or detailed workflows. The content offloads everything meaningful to external references while the main file contains only abstract descriptions.

Suggestions

Add executable Python code examples showing how to implement prompt filtering with Kling AI's API, including actual API calls and response handling

Include a concrete content moderation workflow with validation checkpoints, such as: check prompt -> call API -> validate response -> handle violations with specific error codes

Provide at least one inline example of a policy violation scenario with the exact code to detect and handle it, rather than deferring all examples to external files

Replace abstract steps like 'Implement Filters' with specific implementation details including code snippets, API endpoints, and expected request/response formats
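The suggested workflow (check prompt -> call API -> validate response -> handle violations) could be sketched as a pre-screening step in front of the generation call. Everything below is illustrative: the category keyword lists, the function names, and the injected `generate` callable are assumptions, not Kling AI's actual policy taxonomy or API.

```python
# Minimal sketch of prompt pre-screening for a video-generation pipeline.
# Categories and keywords are placeholders, not Kling AI's real policy list.
from dataclasses import dataclass, field

PROHIBITED = {
    "violence": ["gore", "graphic violence"],
    "explicit": ["nsfw", "explicit"],
}

@dataclass
class ModerationResult:
    allowed: bool
    violations: list = field(default_factory=list)

def screen_prompt(prompt: str) -> ModerationResult:
    """Check a prompt against the prohibited-category keyword lists."""
    lowered = prompt.lower()
    found = [
        category
        for category, terms in PROHIBITED.items()
        if any(term in lowered for term in terms)
    ]
    return ModerationResult(allowed=not found, violations=found)

def moderate_and_generate(prompt: str, generate):
    """Check prompt -> call the (injected) generation API -> handle violations.

    `generate` stands in for the real API call, which is not specified here.
    """
    result = screen_prompt(prompt)
    if not result.allowed:
        # A real skill would map this to specific error codes per category.
        raise ValueError(f"Prompt rejected: {', '.join(result.violations)}")
    return generate(prompt)
```

A real implementation would replace keyword matching with the platform's moderation endpoint and add the response-validation step, but even this shape gives an agent something executable to adapt.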

Dimension scores:

Conciseness: 2 / 3
The content is relatively brief but includes some unnecessary padding, like 'This skill teaches how to' and generic prerequisites. The actual actionable content is minimal for the token count used.

Actionability: 1 / 3
The skill provides only vague, abstract guidance with no concrete code, commands, or executable examples. Steps like 'Implement Filters: Add prompt screening' describe rather than instruct, offering no actual implementation details.

Workflow Clarity: 1 / 3
The five steps are extremely high-level, with no validation checkpoints, no feedback loops for handling violations, and no concrete sequence for the multi-step content moderation process. Critical operations like filtering and moderation lack any verification steps.

Progressive Disclosure: 2 / 3
References to external files (errors.md, examples.md) are present and one level deep, but the main content is too sparse to serve as a useful overview. The skill offloads all concrete guidance to external files without providing any substantive quick-start content.

Total: 6 / 12 (Passed)

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 9 / 11 passed

Validation criteria:

allowed_tools_field: Warning
'allowed-tools' contains unusual tool name(s).

frontmatter_unknown_keys: Warning
Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 9 / 11 (Passed)
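Both warnings point at the SKILL.md frontmatter. A cleaned-up frontmatter might look like the sketch below; the key set and tool names are assumptions based on common Claude Code skill conventions, not a confirmed fix for this particular skill.

```yaml
---
name: klingai-content-policy
description: Implement content policy compliance for Kling AI. Use when
  ensuring generated content meets guidelines or filtering inappropriate
  prompts.
# Keep only recognized tool names here; unusual entries trigger the
# allowed_tools_field warning.
allowed-tools: Read, Grep
# Any non-standard keys (the frontmatter_unknown_keys warning) would be
# removed or moved into the skill's metadata instead.
---
```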
