Use when Code implementation and refactoring, architecturing or designing systems, process and workflow improvements, error handling and validation. Provide tehniquest to avoid over-engineering and apply iterative improvements.
Overall: 34% (Does it follow best practices?)

Impact: Pending. No eval scenarios have been run.
Validation: Passed. No known issues.
Optimize this skill with Tessl:

npx tessl skill review --optimize ./plugins/kaizen/skills/kaizen/SKILL.md

Quality

Discovery: 42%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description attempts to cover too many broad software engineering topics without establishing a clear, distinct identity. While it includes a 'Use when' clause, the triggers are so wide-ranging that they would conflict with many other skills. The description also contains typos ('tehniquest') which reduce professionalism, and the capabilities listed are categories rather than concrete actions.
Suggestions

- Narrow the scope to a specific niche (e.g., focus on 'avoiding over-engineering' and 'iterative refactoring' as the core identity) and list 2-3 concrete actions the skill performs, such as 'Simplifies overly complex code structures, identifies unnecessary abstractions, and suggests incremental improvement steps.'
- Separate the 'what' from the 'when' — first describe what the skill does, then add a distinct 'Use when...' clause with specific trigger scenarios like 'Use when the user asks about simplifying code, reducing complexity, or applying YAGNI/KISS principles.'
- Fix the typo 'tehniquest' → 'techniques' and reduce the breadth of topics to avoid conflicting with general coding, architecture, or error-handling skills.
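Taken together, the suggestions above might produce frontmatter along these lines (a hypothetical sketch; the field names are assumptions about the skill format, not the skill's actual metadata):

```yaml
---
name: kaizen
description: >
  Simplifies overly complex code structures, identifies unnecessary
  abstractions, and suggests incremental improvement steps.
  Use when the user asks about simplifying code, reducing complexity,
  or applying YAGNI/KISS principles.
---
```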
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names a domain (code/systems) and lists some actions like 'implementation and refactoring', 'architecturing or designing systems', 'error handling and validation', but these are broad categories rather than concrete specific actions. The mention of 'techniques to avoid over-engineering' adds some specificity but remains somewhat vague. | 2 / 3 |
| Completeness | Starts with 'Use when' which addresses the 'when' aspect, but the 'what' and 'when' are conflated into a single blurred statement. The 'what does this do' is only implied through the trigger conditions rather than explicitly stated as capabilities. The 'Use when' clause exists but functions more as a topic list than explicit trigger guidance. | 2 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'refactoring', 'error handling', 'validation', 'over-engineering', and 'iterative improvements' that users might naturally say. However, it misses many common variations and the terms are fairly broad, covering a huge surface area without precise trigger terms. | 2 / 3 |
| Distinctiveness / Conflict Risk | The description covers an extremely broad range — code implementation, refactoring, system architecture, process improvements, error handling, and validation — which would overlap with virtually any coding or software engineering skill. There is no clear niche that distinguishes this from general programming assistance. | 1 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill teaches well-known software engineering principles (YAGNI, error-proofing, iterative improvement) that Claude already understands, resulting in significant token waste. While the code examples are concrete and well-structured, the content is far too verbose for a skill file — it reads more like a tutorial or textbook chapter than a concise reference. The lack of progressive disclosure means all content is inlined in one massive file rather than being appropriately split across supporting documents.
Suggestions
Reduce content by 70-80%: Remove explanations of concepts Claude already knows (YAGNI, guard clauses, Result types, iterative refinement) and keep only project-specific conventions or non-obvious patterns.
Split detailed code examples into separate reference files (e.g., POKA-YOKE-EXAMPLES.md, JIT-EXAMPLES.md) and keep SKILL.md as a concise overview with links.
Remove or drastically shorten the Good/Bad example pairs — one brief example per pillar is sufficient since Claude understands these patterns.
Add concrete validation checkpoints to the workflows, e.g., 'After each iteration, run tests before proceeding' with specific commands rather than generic advice.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is extremely verbose at ~400+ lines, explaining well-known software engineering principles (YAGNI, DRY, guard clauses, Result types, iterative refinement) that Claude already knows. Concepts like 'validate before use' and 'don't prematurely optimize' are basic knowledge that don't need extensive code examples and explanations. The content could be reduced to ~20% of its size while preserving all actionable value. | 1 / 3 |
| Actionability | The code examples are concrete and executable TypeScript, which is good. However, the skill is more of a philosophy/mindset guide than actionable instructions — it describes general principles rather than giving specific steps for specific tasks. The 'In Practice' sections provide some guidance but remain fairly generic checklists rather than precise, executable workflows. | 2 / 3 |
| Workflow Clarity | The iterative refinement workflow (make it work → make it clear → make it efficient) is clearly sequenced, and the 'In Practice' sections provide ordered steps. However, there are no explicit validation checkpoints or feedback loops for the workflows described. The commands section references structured analysis tools but doesn't explain when to use them in a workflow sequence. | 2 / 3 |
| Progressive Disclosure | The entire skill is a monolithic wall of text with no references to external files for detailed content. The extensive code examples for each pillar could be split into separate reference files. The commands section references /why, /cause-and-effect, etc. but doesn't link to any supporting documentation. Everything is inlined, making the skill very long for what should be an overview. | 1 / 3 |
| Total | | 6 / 12 (Passed) |
Validation: 90% (10 / 11 checks passed)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (734 lines); consider splitting into references/ and linking | Warning |
| Total | 10 / 11 Passed | |
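The split that this line-count warning suggests might look something like the following (a hypothetical layout; the filenames come from the review's own suggestions, not from the skill itself):

```markdown
<!-- SKILL.md: concise overview, with detail pushed into references/ -->
# Kaizen

Techniques for avoiding over-engineering and applying iterative improvements.

- Error-proofing patterns: [references/POKA-YOKE-EXAMPLES.md](references/POKA-YOKE-EXAMPLES.md)
- Just-in-time design: [references/JIT-EXAMPLES.md](references/JIT-EXAMPLES.md)
```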