
klingai-debug-bundle

Set up logging and debugging for Kling AI API integrations. Use when troubleshooting video generation or building observability. Trigger with phrases like 'klingai debug', 'kling ai logging', 'klingai troubleshoot', 'debug kling video generation'.

Overall score: 80

Quality: 77% (Does it follow best practices?)
Impact: Pending (No eval scenarios have been run)
Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/saas-packs/klingai-pack/skills/klingai-debug-bundle/SKILL.md

Quality

Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a solid skill description with excellent trigger terms and completeness. Its main weakness is that the 'what' portion could be more specific about the concrete actions performed (e.g., configuring log levels, tracing API calls, parsing error codes). The distinctiveness is strong due to the narrow Kling AI focus.

Suggestions

Add more specific concrete actions like 'configure log levels, trace API request/response cycles, parse error codes, set up webhook monitoring' to improve specificity.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (Kling AI API integrations) and two actions (logging and debugging), but doesn't list specific concrete actions like 'configure log levels', 'trace API requests', 'inspect error responses', etc. | 2 / 3 |
| Completeness | Clearly answers both 'what' (set up logging and debugging for Kling AI API integrations) and 'when' (troubleshooting video generation, building observability) with explicit trigger phrases provided. | 3 / 3 |
| Trigger Term Quality | Includes multiple natural trigger phrases ('klingai debug', 'kling ai logging', 'klingai troubleshoot', 'debug kling video generation') and contextual terms like 'troubleshooting video generation' and 'observability' that users would naturally say. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with the specific 'Kling AI' niche and explicit trigger phrases that are unlikely to conflict with other skills. The combination of a specific API platform and a debugging/logging focus creates a clear niche. | 3 / 3 |

Total: 11 / 12 (Passed)

Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, highly actionable skill with fully executable code for debugging Kling AI integrations. Its main weakness is that it's somewhat monolithic—the large inline client class could be better organized via progressive disclosure—and it lacks an explicit step-by-step troubleshooting workflow that guides through common failure scenarios with validation checkpoints.

Suggestions

Add an explicit numbered troubleshooting workflow (e.g., 1. Run diagnostic script → 2. Check auth → 3. If auth OK, inspect task → 4. If failed, check error message → 5. Review debug log) with clear decision points.

Move the full KlingDebugClient class to a separate referenced file and keep only a concise summary and usage snippet in the main SKILL.md.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The content is mostly efficient with executable code, but the full debug client class is quite lengthy. Some parts could be tightened: e.g., the JWT header construction is repeated in both the client and the diagnostic script, and the polling logic is verbose. However, it doesn't waste tokens explaining basic concepts. | 2 / 3 |
| Actionability | Fully executable Python code and bash scripts throughout. The debug client, usage example, diagnostic script, and task inspector are all copy-paste ready with concrete implementations, specific API endpoints, and real parameter values. | 3 / 3 |
| Workflow Clarity | The usage section shows a clear try/finally pattern ensuring logs are always saved, and the diagnostic script provides a validation checkpoint for credentials. However, there is no explicit multi-step troubleshooting workflow with sequenced steps; it is more a collection of tools than a guided debugging process with feedback loops for error recovery. | 2 / 3 |
| Progressive Disclosure | The content is well-sectioned with clear headers, but the full debug client implementation (~70 lines) is inline rather than referenced from a separate file. The skill is somewhat monolithic: the client code, diagnostic script, and task inspector could be split out, with the SKILL.md serving as an overview. External resource links at the end are a good touch. | 2 / 3 |

Total: 9 / 12 (Passed)
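The Conciseness note flags that JWT header construction is duplicated between the client and the diagnostic script. One way to remove the duplication is a shared token helper. This is a stdlib-only sketch; the HS256 algorithm and the iss/exp/nbf claim names are assumptions based on Kling AI's published auth scheme, not code taken from the skill, so confirm them against the current API docs.

```python
import base64
import hashlib
import hmac
import json
import time


def _b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def kling_jwt(access_key: str, secret_key: str, ttl: int = 1800) -> str:
    """Build the signed token once so client and diagnostic script share it.

    Claim names (iss/exp/nbf) follow Kling AI's documented auth scheme;
    treat them as an assumption to verify, not a guarantee.
    """
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    payload = {"iss": access_key, "exp": now + ttl, "nbf": now - 5}
    signing_input = ".".join(
        _b64url(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, payload)
    )
    sig = hmac.new(secret_key.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{_b64url(sig)}"
```

With a single helper like this, both scripts import one function instead of repeating the header/payload assembly inline.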

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |

Total: 9 / 11 (Passed)
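Both warnings can typically be cleared by trimming the frontmatter to the keys the skill spec recognizes. A minimal sketch follows; the key set assumes the Agent Skills SKILL.md frontmatter spec, and the allowed-tools values are illustrative placeholders, not the skill's actual tool list.

```yaml
---
name: klingai-debug-bundle
description: >
  Set up logging and debugging for Kling AI API integrations. Use when
  troubleshooting video generation or building observability. Trigger with
  phrases like 'klingai debug', 'kling ai logging', 'klingai troubleshoot',
  'debug kling video generation'.
# Placeholder tool names; replace with the tools the skill actually invokes.
allowed-tools: Read, Write, Bash
---
```

Any extra keys the validator does not recognize would move under a `metadata` block or be dropped, per the second warning.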

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)
