
workflow-lite-execute

Lightweight execution engine - multi-mode input, task grouping, batch execution, chain to workflow-lite-test-review

47

Quality: 36% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Risky (Do not use without reviewing)

Optimize this skill with Tessl

npx tessl skill review --optimize ./.claude/skills/workflow-lite-execute/SKILL.md

Quality

Discovery: 17%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description reads like an internal technical specification rather than a skill description designed for selection. It relies heavily on jargon ('execution engine', 'multi-mode input', 'batch execution') without explaining what concrete tasks it performs or when it should be selected. The absence of natural trigger terms and a 'Use when...' clause makes it very difficult for Claude to correctly identify when to use this skill.

Suggestions

Add a 'Use when...' clause with natural trigger terms describing scenarios where this skill should be selected, e.g., 'Use when the user wants to run multiple tasks in sequence, batch process items, or chain execution steps together.'

Replace technical jargon with concrete, user-facing actions - instead of 'multi-mode input' and 'execution engine', describe what the user actually does, e.g., 'Accepts tasks from files, lists, or inline input and executes them in grouped batches.'

Include natural keywords users might say, such as 'run tasks', 'batch process', 'execute steps', 'automate workflow', or 'run multiple commands'.
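Taken together, these suggestions could yield frontmatter along the following lines. This is an illustrative sketch, not the maintainer's wording: the description text is invented here to show the 'Use when...' pattern and natural trigger terms in place.

```yaml
---
name: workflow-lite-execute
description: >
  Runs planned tasks in grouped batches: accepts tasks from files, lists,
  or inline input, executes them in dependency order, and chains results
  into workflow-lite-test-review. Use when the user wants to run multiple
  tasks in sequence, batch process items, execute steps from a plan, or
  automate a workflow.
---
```

The folded block scalar (`>`) keeps the long description readable in the file while still yielding a single-line string for the selector to match against.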

Specificity: 2 / 3
Names some actions like 'task grouping', 'batch execution', and 'chain to workflow-lite-test-review', but these are technical jargon rather than concrete user-facing actions. 'Multi-mode input' and 'lightweight execution engine' are abstract descriptors rather than specific capabilities.

Completeness: 1 / 3
The description vaguely addresses 'what' with technical terms but provides no 'when' guidance whatsoever. There is no 'Use when...' clause or equivalent explicit trigger guidance, which per the rubric should cap completeness at 2, but the 'what' is also weak, warranting a 1.

Trigger Term Quality: 1 / 3
The terms used ('execution engine', 'multi-mode input', 'task grouping', 'batch execution') are internal/technical jargon that users would almost never naturally say. There are no natural language trigger terms a user would use when needing this skill.

Distinctiveness / Conflict Risk: 2 / 3
The mention of 'workflow-lite-test-review' chaining and 'batch execution' provides some specificity that distinguishes it from generic skills, but 'execution engine' and 'task grouping' are broad enough to overlap with many workflow or automation skills.

Total: 6 / 12 (Passed)

Implementation: 55%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill is highly actionable with excellent workflow clarity, providing concrete executable code for a complex multi-mode execution engine with proper error handling and resume capabilities. However, it is severely over-engineered for a SKILL.md — the ~400+ lines of detailed implementation code (dependency resolution algorithms, prompt template builders, full data structure schemas) make it a monolithic reference document rather than a concise skill guide. Much of the internal logic could be extracted to supporting files or condensed, as Claude can infer implementation details from high-level specifications.

Suggestions

Extract the detailed code implementations (buildExecutionPrompt, createExecutionCalls, extractDependencies) into a separate reference file like INTERNALS.md or EXECUTION-ENGINE.md, keeping only high-level flow descriptions in SKILL.md.

Move the Data Structures section to a separate SCHEMA.md file and reference it, as it largely duplicates information already visible in the code blocks.

Condense the three code review variants (Agent/Codex/Gemini) into a summary table with key differences, moving full prompt templates to a CODE-REVIEW.md reference file.

Remove implementation details Claude can infer (e.g., the full dependency resolution algorithm, JSON parsing try/catch blocks) and replace with concise behavioral descriptions.
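Applied to this skill, that restructuring might leave SKILL.md with only a high-level flow plus links out to reference files. A minimal sketch, using the hypothetical file names proposed above (EXECUTION-ENGINE.md, CODE-REVIEW.md, SCHEMA.md):

```markdown
## Execution flow

1. Initialize: parse input (file, list, or inline) and validate tasks.
2. Group: resolve dependencies into ordered batches
   (algorithm details in [EXECUTION-ENGINE.md](EXECUTION-ENGINE.md)).
3. Execute each batch, tracking progress with TodoWrite;
   on failure, resume using the fixed task IDs.
4. Run code review with the chosen variant (Agent, Codex, or Gemini);
   full prompt templates live in [CODE-REVIEW.md](CODE-REVIEW.md).
5. Chain results to workflow-lite-test-review.

Data structures: see [SCHEMA.md](SCHEMA.md).
```

Keeping only behavioral descriptions in SKILL.md lets Claude load the heavy reference material on demand, which is the point of progressive disclosure.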

Conciseness: 1 / 3
Extremely verbose at ~400+ lines. Massive code blocks detail internal implementation logic (dependency resolution, batch creation, prompt building) that Claude doesn't need spelled out at this granularity. The data structures section repeats information already shown in the code. Much of this could be condensed significantly.

Actionability: 3 / 3
The skill provides fully concrete, executable code for every step: input parsing, task grouping, batch execution, code review, and chaining. Commands, function signatures, and tool invocations are all specific and copy-paste ready with clear routing logic.

Workflow Clarity: 3 / 3
The multi-step execution flow is clearly sequenced (Initialize → Group/Batch → Execute → Code Review → Chain to Test Review) with explicit validation checkpoints, resume-on-failure handling with fixed IDs, progress tracking via TodoWrite, and a clear error handling table. The checkpoint reminder to re-read phase documentation is a good feedback loop.

Progressive Disclosure: 1 / 3
This is a monolithic wall of text with no references to supporting files despite being clearly part of a larger system (it references workflow-lite-plan, workflow-lite-test-review, and phases/02-lite-execute.md). The data structures, prompt builder, and detailed code review templates could all be split into separate reference files. No bundle files are provided to offload content to.

Total: 8 / 12 (Passed)

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 9 / 11 passed

Validation for skill structure

skill_md_line_count: Warning. SKILL.md is long (568 lines); consider splitting into references/ and linking.

allowed_tools_field: Warning. 'allowed-tools' contains unusual tool name(s).

Total: 9 / 11 (Passed)

Repository: catlog22/Claude-Code-Workflow (Reviewed)

