
workflow-execute

Autonomous workflow execution pipeline with CSV wave engine. Session discovery → plan validation → IMPL-*.json → CSV conversion → wave execution via spawn_agents_on_csv → results sync. Task JSONs remain the rich data source; CSV is brief + execution state.


Quality: 36%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run

Security (by Snyk): Passed
No known issues

Optimize this skill with Tessl:

npx tessl skill review --optimize ./.codex/skills/workflow-execute/SKILL.md

Quality

Discovery

17%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description reads like internal technical documentation rather than a skill selection guide. It focuses on implementation details (JSON files, CSV conversion, specific function names) rather than describing what problems it solves or when Claude should select it. The lack of natural trigger terms and an explicit 'Use when...' clause makes it poorly suited for skill selection among many options.

Suggestions

Add an explicit 'Use when...' clause describing the user scenarios that should trigger this skill, e.g., 'Use when the user wants to orchestrate multi-step autonomous task execution across multiple agents' or similar natural language triggers.

Replace or supplement internal jargon (IMPL-*.json, spawn_agents_on_csv) with user-facing language describing the capability, e.g., 'Breaks complex projects into parallelizable task waves and coordinates agent execution across them.'

Include natural trigger terms users might say, such as 'batch tasks', 'parallel execution', 'multi-agent workflow', 'orchestrate tasks', or 'automated pipeline'.
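
Applying these suggestions, the frontmatter description might look something like the following sketch; the wording and keys are illustrative assumptions, not the skill's actual metadata:

```yaml
# Hypothetical rewrite of the skill's frontmatter -- not the actual SKILL.md.
name: workflow-execute
description: >
  Orchestrates multi-step autonomous task execution across multiple agents.
  Breaks complex projects into parallelizable task waves and coordinates
  agent execution across them. Use when the user wants to batch tasks,
  run a multi-agent workflow, orchestrate parallel execution, or resume
  an automated pipeline.
```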

Dimension scores:

Specificity: 2 / 3
The description names specific artifacts and steps (IMPL-*.json, CSV conversion, spawn_agents_on_csv, wave execution), but the actions are described as a pipeline rather than as concrete user-facing capabilities. It's more of an internal architecture description than a capability list.

Completeness: 1 / 3
While there is a partial 'what' (it describes a pipeline), there is no explicit 'when' clause or trigger guidance. The description reads like internal documentation rather than a skill selection guide. The missing 'Use when...' clause caps this at 2 per the rubric, but the 'what' is also weak enough to warrant a 1.

Trigger Term Quality: 1 / 3
The terms used are highly technical and internal ('IMPL-*.json', 'spawn_agents_on_csv', 'wave engine', 'results sync'). Users would not naturally say these phrases when requesting help. Common natural-language triggers like 'run tasks', 'batch processing', or 'automate workflow' are absent.

Distinctiveness / Conflict Risk: 2 / 3
The mention of specific artifacts like 'IMPL-*.json', 'CSV wave engine', and 'spawn_agents_on_csv' provides some distinctiveness, but the overall framing as 'autonomous workflow execution' is broad enough to potentially overlap with other automation or orchestration skills.

Total: 6 / 12 (Passed)

Implementation

55%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill excels at actionability and workflow clarity — the 6-phase pipeline is well-sequenced with validation checkpoints, error handling, and fully executable code. However, it is severely undermined by its monolithic structure and extreme verbosity: hundreds of lines of implementation code (CSV parsers, BFS algorithms, instruction templates) are inlined when they should be in separate files, and much of the code implements patterns Claude already knows. The token cost is very high relative to the unique knowledge conveyed.

Suggestions

Extract the full implementation code (Phase 3-6 JavaScript, CSV helpers, instruction template) into separate referenced files (e.g., EXECUTE-IMPL.js, CSV-HELPERS.js, AGENT-INSTRUCTION.md) and keep SKILL.md as a concise overview with the pipeline diagram, CSV schemas, core rules, and file references.

Remove the detailed 21-column CSV schema table and replace with a brief description of the key columns (task_json_path, wave, status, prev_context) — Claude can infer standard columns from the header line and code.

Trim the instruction template to its essential structure and unique directives; Claude doesn't need verbose step-by-step instructions for reading files and writing JSON results spelled out in full.

Move the CSV helper functions (parseCsv, parseCsvLine, updateMasterCsvRow) to a utility file reference — these are generic implementations Claude can produce on demand.
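
As an illustration of the last point, a quote-aware CSV line parser is generic enough that an agent can regenerate it on demand. A minimal sketch follows; the name parseCsvLine comes from the review, but this body is assumed, not the skill's original implementation:

```javascript
// Hedged sketch of a quote-aware CSV field splitter (RFC 4180 style:
// fields may be double-quoted, and "" inside quotes is a literal quote).
function parseCsvLine(line) {
  const fields = [];
  let cur = '';
  let inQuotes = false;
  for (let i = 0; i < line.length; i++) {
    const ch = line[i];
    if (inQuotes) {
      if (ch === '"' && line[i + 1] === '"') { cur += '"'; i++; } // escaped quote
      else if (ch === '"') inQuotes = false;                      // closing quote
      else cur += ch;
    } else if (ch === '"') {
      inQuotes = true;                                            // opening quote
    } else if (ch === ',') {
      fields.push(cur); cur = '';                                 // field boundary
    } else {
      cur += ch;
    }
  }
  fields.push(cur);
  return fields;
}
```

Because this is boilerplate, keeping it inline mostly costs tokens without adding unique knowledge, which is the review's point.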

Dimension scores:

Conciseness: 1 / 3
This skill is extremely verbose at 600+ lines, with massive inline code blocks that could live in separate files. The full implementation code for every phase, CSV helpers, instruction templates, and detailed column-by-column schema tables consume an enormous token budget. Much of this (CSV parsing, Kahn's BFS, regex escaping) is knowledge Claude already has.
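
The Kahn's-BFS wave computation mentioned above is likewise standard knowledge. A minimal sketch, assuming a { id, deps } task shape that is illustrative rather than the skill's actual schema:

```javascript
// Hedged sketch of wave assignment via Kahn's topological BFS:
// wave 0 holds tasks with no dependencies, wave i+1 holds tasks whose
// last remaining dependency completed in wave i.
function assignWaves(tasks) {
  const indegree = new Map(tasks.map(t => [t.id, t.deps.length]));
  const dependents = new Map(tasks.map(t => [t.id, []]));
  for (const t of tasks) {
    for (const d of t.deps) dependents.get(d).push(t.id);
  }
  const waves = [];
  let current = tasks.filter(t => t.deps.length === 0).map(t => t.id);
  while (current.length > 0) {
    waves.push(current);
    const next = [];
    for (const id of current) {
      for (const dep of dependents.get(id)) {
        indegree.set(dep, indegree.get(dep) - 1);
        if (indegree.get(dep) === 0) next.push(dep); // all deps satisfied
      }
    }
    current = next;
  }
  return waves; // waves[i] = task ids executable in parallel at wave i
}
```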

Actionability: 3 / 3
The skill provides fully executable JavaScript code for every phase, complete with specific commands, function implementations, CSV schemas, output schemas for spawn_agents_on_csv, and concrete instruction templates. Everything is copy-paste ready.

Workflow Clarity: 3 / 3
The 6-phase pipeline is clearly sequenced, with an excellent ASCII diagram overview, explicit validation in Phase 2, dependency cascade handling (skip on failure), per-wave re-reading of the master CSV as the source of truth, and feedback loops for error recovery. The resume-mode entry point is clearly defined.
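
The skip-on-failure dependency cascade noted above can be sketched as a small fixed-point pass over the master rows; the row shape and status strings here are assumptions for illustration, not the skill's actual CSV columns:

```javascript
// Hedged sketch of the cascade: any pending row whose dependency has
// failed (or was itself skipped) is marked skipped, repeating until no
// more rows change.
function cascadeSkips(rows) {
  const byId = new Map(rows.map(r => [r.id, r]));
  let changed = true;
  while (changed) {
    changed = false;
    for (const row of rows) {
      const blocked = row.deps.some(d => {
        const dep = byId.get(d);
        return dep && (dep.status === 'failed' || dep.status === 'skipped');
      });
      if (row.status === 'pending' && blocked) {
        row.status = 'skipped';
        changed = true;
      }
    }
  }
  return rows;
}
```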

Progressive Disclosure: 1 / 3
This is a monolithic wall of text with all implementation details inline. The full Phase 3 conversion code, Phase 4 execution code, CSV helpers, instruction templates, and column schemas should live in separate referenced files. No bundle files are provided, and no content is split out despite the skill being 600+ lines.

Total: 8 / 12 (Passed)

Validation

72%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 8 / 11 Passed

Validation for skill structure

Criteria and results:

skill_md_line_count (Warning): SKILL.md is long (1118 lines); consider splitting into references/ and linking

allowed_tools_field (Warning): 'allowed-tools' contains unusual tool name(s)

frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing or moving to metadata

Total: 8 / 11 (Passed)

Repository: catlog22/Claude-Code-Workflow (Reviewed)

