`tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill klingai-performance-tuning`

Optimize Kling AI performance for speed and quality. Use when improving generation times, reducing costs, or enhancing output quality. Trigger with phrases like 'klingai performance', 'kling ai optimization', 'faster klingai', 'klingai quality settings'.
Validation
81%

| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 13 / 16 Passed | |
Implementation
22%

This skill is a skeleton that lacks substantive content. It provides generic workflow steps without any concrete code, specific Kling AI settings, actual optimization techniques, or executable examples. The skill tells Claude to optimize performance without explaining how to do so.
Suggestions
Add concrete, executable Python code examples showing specific Kling AI optimization techniques (e.g., batch processing, caching implementations, specific API parameters for speed vs quality tradeoffs)
Replace generic steps with specific actions: instead of 'Benchmark Baseline: Measure current performance', show actual timing code and what metrics to capture
Include specific Kling AI configuration parameters and their performance impacts (e.g., resolution settings, model selection, timeout configurations)
Add a validation step showing how to verify optimizations worked, with expected performance improvement ranges
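To illustrate the second suggestion, here is a minimal sketch of what "actual timing code" and captured metrics could look like. The `generate_video` function, its `resolution` and `mode` parameters, and their values are hypothetical placeholders standing in for the real Kling AI client, not its actual API:

```python
import time
import statistics

def generate_video(prompt, resolution="720p", mode="std"):
    """Placeholder for a Kling AI generation call; the real client,
    parameter names, and values are assumptions here."""
    time.sleep(0.01)  # stand-in for network + generation latency
    return {"prompt": prompt, "resolution": resolution, "mode": mode}

def benchmark(fn, runs=3, **kwargs):
    """Capture the baseline metrics the skill should record:
    per-run latency, mean, and worst case, plus the settings used."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn("a cat surfing a wave", **kwargs)
        timings.append(time.perf_counter() - start)
    return {
        "runs": runs,
        "mean_s": statistics.mean(timings),
        "max_s": max(timings),
        "settings": kwargs,
    }

# Compare a baseline against a candidate speed-oriented configuration.
baseline = benchmark(generate_video, resolution="1080p")
fast = benchmark(generate_video, resolution="720p", mode="std")
```

Recording the settings alongside the timings is what makes the validation step possible: after applying an optimization, re-run the same harness and compare `mean_s` against the stored baseline.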
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is relatively brief but includes some unnecessary padding like 'This skill demonstrates' and generic prerequisites. The actual actionable content is minimal for the token count used. | 2 / 3 |
| Actionability | The skill provides only vague, abstract steps like 'Benchmark Baseline: Measure current performance' without any concrete code, commands, specific settings, or executable examples. It describes rather than instructs. | 1 / 3 |
| Workflow Clarity | The 5 steps are generic placeholders without specific actions, validation checkpoints, or feedback loops. 'Apply Optimizations: Implement improvements' provides no actual guidance on what optimizations to apply or how. | 1 / 3 |
| Progressive Disclosure | References to external files (errors.md, examples.md) are present and one level deep, but the main content is so thin that it is essentially just a pointer to other files without providing a useful quick-start or overview. | 2 / 3 |
| Total | | 6 / 12 |
Activation
75%

The description effectively establishes when to use the skill with explicit triggers and clear use cases, making it complete and distinctive. However, it lacks specific concrete actions (what optimization techniques are applied) and has inconsistent trigger term formatting that could miss some user queries.
Suggestions
Add specific concrete actions like 'adjust resolution settings, configure batch processing, tune inference parameters' to improve specificity
Normalize trigger terms and add natural variations: 'kling ai slow', 'speed up kling', 'kling taking too long', 'kling quality issues'
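A small sketch of why the inconsistent spacing matters and how normalization fixes it. The trigger list below mixes the original phrases with the suggested additions; the matching logic is illustrative only, not how any particular skill router actually works:

```python
import re

def normalize(text):
    # Lowercase and strip all whitespace so 'Kling AI' and 'klingai'
    # compare identically, closing the spacing gap in the triggers.
    return re.sub(r"\s+", "", text.lower())

TRIGGERS = [
    "klingai performance", "kling ai optimization", "faster klingai",
    "klingai quality settings", "kling ai slow", "speed up kling",
    "kling taking too long", "kling quality issues",
]

def matches(query):
    q = normalize(query)
    return any(normalize(trigger) in q for trigger in TRIGGERS)
```

With normalization, a query like "why is klingai slow?" activates even though the trigger list spells it "kling ai slow"; without it, the spacing mismatch would silently miss the query.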
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Kling AI performance) and mentions general goals (speed, quality, reducing costs, generation times), but lacks specific concrete actions like 'adjust batch sizes', 'configure caching', or 'tune model parameters'. | 2 / 3 |
| Completeness | Clearly answers both what (optimize Kling AI for speed and quality) and when (improving generation times, reducing costs, enhancing quality), with explicit trigger phrases provided. | 3 / 3 |
| Trigger Term Quality | Includes some relevant keywords ('klingai performance', 'kling ai optimization', 'faster klingai', 'klingai quality settings') but has inconsistent spacing (klingai vs kling ai) and is missing natural variations users might say, like 'speed up kling', 'kling slow', 'improve kling output'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Very specific to the Kling AI platform with distinct trigger terms; unlikely to conflict with other skills due to the unique product name and specific optimization context. | 3 / 3 |
| Total | | 10 / 12 |