
plan-do-check-act

Iterative PDCA cycle for systematic experimentation and continuous improvement


Quality

42%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/kaizen/skills/plan-do-check-act/SKILL.md

Quality

Discovery

22%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is too abstract and lacks actionable detail. It names the PDCA methodology but fails to describe concrete actions the skill performs or when it should be selected. Without a 'Use when...' clause and with only vague language about 'experimentation' and 'improvement', Claude would struggle to reliably choose this skill from a pool of alternatives.

Suggestions

Add a 'Use when...' clause with explicit trigger terms like 'when the user wants to run experiments', 'test hypotheses', 'improve a process', 'plan-do-check-act', or 'iterative problem solving'.

List specific concrete actions the skill performs, e.g., 'Defines hypotheses, plans experiments, tracks results, and recommends adjustments through Plan-Do-Check-Act iterations.'

Include natural language variations users might say, such as 'process improvement', 'A/B testing workflow', 'iterative refinement', or 'quality cycle'.
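Taken together, these suggestions might yield frontmatter along the following lines. This is a hypothetical sketch; the trigger phrases and wording are illustrative and not taken from the actual SKILL.md:

```yaml
---
name: plan-do-check-act
description: >
  Runs structured Plan-Do-Check-Act (PDCA) improvement cycles: defines a
  hypothesis, plans an experiment, tracks results, and recommends whether to
  standardize, adjust, or start a new cycle. Use when the user wants to run
  experiments, test hypotheses, improve a process, or asks for iterative
  refinement, process improvement, or a PDCA / quality cycle.
---
```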

Dimension / Reasoning / Score

Specificity

The description uses abstract language ('systematic experimentation', 'continuous improvement') without listing any concrete actions. It names a methodology (PDCA cycle) but doesn't describe what specific tasks it performs.

1 / 3

Completeness

The 'what' is vaguely stated and the 'when' is entirely missing. There is no 'Use when...' clause or equivalent trigger guidance, and the description doesn't clarify what situations should invoke this skill.

1 / 3

Trigger Term Quality

'PDCA cycle' is a relevant keyword for users familiar with the methodology, but it misses common natural language variations like 'plan-do-check-act', 'process improvement', 'experiment', 'iterate', 'hypothesis testing', or 'quality improvement'.

2 / 3

Distinctiveness Conflict Risk

'PDCA cycle' provides some distinctiveness as a specific methodology, but 'systematic experimentation and continuous improvement' is broad enough to overlap with general problem-solving, project management, or quality assurance skills.

2 / 3

Total: 6 / 12 (Passed)

Implementation

62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured process skill with clear workflow phases and good branching logic in the Act phase, but it is verbose: the examples section dominates the document. The skill reads more as a methodology template than a directly executable instruction set, so it is only moderately actionable. Trimming the examples and splitting the detailed walkthroughs into a separate file would help.

Suggestions

Move the detailed multi-cycle examples into a separate EXAMPLES.md file and keep only one concise single-cycle example inline to demonstrate the format.

Add a concrete output template or structured format that Claude should produce when running a PDCA cycle, making the skill more directly actionable rather than illustrative.

Trim the Notes section—items like 'Failed experiments are learning opportunities' and 'PDCA is iterative—multiple cycles normal' are self-evident from the content and don't add value for Claude.
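One way to act on the output-template suggestion is a compact per-cycle format like the following. This is a sketch only; the field names are illustrative and not part of the current skill:

```markdown
## PDCA Cycle <n>

**Plan** — Hypothesis: <what we expect and why>. Experiment: <change to make>. Success metric: <measurable target>.
**Do** — Actions taken: <what was actually changed>.
**Check** — Results: <measured outcome vs. target>.
**Act** — Decision: standardize | adjust | new cycle. Rationale: <one sentence>.
```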

Dimension / Reasoning / Score

Conciseness

The skill is quite lengthy, especially the examples section which takes up the bulk of the content. The three detailed examples are thorough but verbose—two multi-cycle examples would suffice. The core PDCA steps themselves are reasonably concise, but the overall document could be significantly tightened.

2 / 3

Actionability

The skill provides a clear framework and detailed examples, but it's a process/methodology skill with no executable code or concrete commands. The steps are structured but remain somewhat abstract ('Analyze current state', 'Identify root causes'). The examples are illustrative but are formatted as filled-in templates rather than actionable instructions Claude can directly execute.

2 / 3

Workflow Clarity

The four PDCA phases are clearly sequenced with explicit decision points in Phase 4 (Act) that determine whether to standardize, adjust, or start a new cycle. The branching logic for successful/unsuccessful/partially successful outcomes provides clear feedback loops, and the examples demonstrate multi-cycle iteration with validation at each Check phase.

3 / 3

Progressive Disclosure

The content is monolithic—all three extensive examples are inline rather than referenced from a separate file. The examples alone are ~100 lines and could be split into an EXAMPLES.md. References to other skills (`/why`, `/cause-and-effect`, `/analyse-problem`) are mentioned but the document itself has no structural separation of overview vs. detail.

2 / 3

Total: 9 / 12 (Passed)
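The Act-phase branching credited in the Workflow Clarity dimension (standardize on success, adjust on partial success, start a new cycle on failure) can be sketched as a small decision function. This is an illustrative model with assumed thresholds, not code from the skill itself:

```python
def act_phase(metric: float, target: float, tolerance: float = 0.1) -> str:
    """Decide the next PDCA step from a measured result.

    Returns one of: 'standardize', 'adjust', 'new-cycle'.
    The tolerance band is an illustrative assumption, not taken from the skill.
    """
    if metric >= target:
        return "standardize"  # hypothesis confirmed: lock in the change
    if metric >= target * (1 - tolerance):
        return "adjust"       # close to target: tweak and re-run the cycle
    return "new-cycle"        # hypothesis falsified: revise and start over

# Example: a measured 95 against a target of 100 falls within the 10% band.
print(act_phase(95, 100))  # adjust
```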

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 10 / 11 (Passed)
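The frontmatter_unknown_keys warning above is typically resolved by nesting non-standard keys under a metadata block rather than leaving them at the top level, as the warning text suggests. A hypothetical before/after (the key names are illustrative):

```yaml
# Before: an unrecognized top-level key triggers the warning
---
name: plan-do-check-act
category: kaizen
---

# After: non-spec keys moved under metadata
---
name: plan-do-check-act
metadata:
  category: kaizen
---
```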

Repository
NeoLabHQ/context-engineering-kit
Reviewed

