
project-documentation-workflow

Wave-based comprehensive project documentation generator with dynamic task decomposition. Analyzes project structure and generates appropriate documentation tasks, computes optimal execution waves via topological sort, produces complete documentation suite including architecture, methods, theory, features, usage, and design philosophy.
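The "execution waves via topological sort" step in the description can be pictured as a Kahn-style layering: each wave contains the tasks whose dependencies have all completed in earlier waves. A minimal sketch, assuming each task maps to the names of tasks it depends on (the skill's actual task schema may differ):

```python
from collections import defaultdict

def compute_waves(deps):
    """Group tasks into execution waves: a task runs in the first wave
    after all of its dependencies have completed (Kahn-style layering).

    deps: dict mapping task name -> list of task names it depends on.
    """
    indegree = {task: len(ds) for task, ds in deps.items()}
    dependents = defaultdict(list)
    for task, ds in deps.items():
        for d in ds:
            dependents[d].append(task)

    waves = []
    ready = [t for t, n in indegree.items() if n == 0]
    while ready:
        waves.append(sorted(ready))  # sort for deterministic output
        next_ready = []
        for task in ready:
            for dep in dependents[task]:
                indegree[dep] -= 1
                if indegree[dep] == 0:
                    next_ready.append(dep)
        ready = next_ready

    # Any leftover task was never freed -> there is a dependency cycle.
    if sum(len(w) for w in waves) != len(deps):
        raise ValueError("dependency cycle detected")
    return waves
```

For example, with `architecture` and `theory` independent, `methods` depending on `architecture`, and `usage` depending on both, this yields three waves: `[["architecture", "theory"], ["methods"], ["usage"]]`.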


Quality: 43% — Does it follow best practices?

Impact: Pending — No eval scenarios have been run

Security (by Snyk): Risky — Do not use without reviewing

Optimize this skill with Tessl

npx tessl skill review --optimize ./.codex/skills/project-documentation-workflow/SKILL.md

Quality

Discovery: 60%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is strong on specificity, listing concrete outputs and methods, but lacks an explicit 'Use when...' clause that would help Claude know when to select this skill. The language leans heavily on technical implementation details (wave-based, topological sort) rather than natural user trigger terms, which reduces its effectiveness for skill selection from a large pool.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to document a project, generate project docs, create a documentation suite, or needs comprehensive technical documentation for a codebase.'

Include more natural user-facing trigger terms like 'docs', 'README', 'document my code', 'project documentation', 'technical writing' alongside the technical implementation details.

Reduce implementation jargon ('topological sort', 'wave-based', 'dynamic task decomposition') in favor of outcome-oriented language that users would recognize.
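Putting the three suggestions together, a possible rewrite of the skill's frontmatter description (hypothetical wording, not the skill's actual metadata) might read:

```yaml
---
name: project-documentation-workflow
description: >
  Generate a complete documentation suite (architecture, methods, theory,
  features, usage, design philosophy) for a codebase. Use when the user asks
  to document a project, generate project docs, write a README, or needs
  comprehensive technical documentation. Internally plans tasks from the
  project structure and runs them in dependency-ordered waves.
---
```

This keeps the concrete outputs, adds an explicit "Use when..." clause with natural trigger terms, and pushes the implementation details ("topological sort", "wave-based") to a single trailing sentence.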

Dimension scores:

- Specificity — 3/3: Lists multiple specific concrete actions: analyzes project structure, generates documentation tasks, computes optimal execution waves via topological sort, and produces a documentation suite including architecture, methods, theory, features, usage, and design philosophy.

- Completeness — 2/3: Clearly answers 'what does this do' with detailed capability descriptions, but lacks an explicit 'Use when...' clause or equivalent trigger guidance, which per the rubric caps completeness at 2.

- Trigger Term Quality — 2/3: Contains some relevant keywords like 'documentation', 'project structure', and 'architecture', but uses technical jargon ('topological sort', 'wave-based', 'dynamic task decomposition') that users wouldn't naturally say. Missing common user terms like 'docs', 'README', 'document my project', 'generate docs'.

- Distinctiveness / Conflict Risk — 2/3: The 'wave-based' and 'topological sort' aspects are distinctive implementation details, but 'project documentation generator' could overlap with simpler documentation skills. The scope is somewhat specific, yet 'documentation' is a broad category that could conflict with other doc-related skills.

Total: 9 / 12 — Passed

Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is a comprehensive but excessively verbose documentation workflow that attempts to encode an entire multi-agent orchestration system in a single file. While the workflow design is thoughtful (dynamic decomposition, topological sorting, inter-wave synthesis), the implementation suffers from being monolithic, relying on undefined runtime primitives, and lacking proper validation checkpoints. The mixed Chinese/English content and extensive inline code make it difficult to parse quickly.

Suggestions

Split the monolithic content into separate files: move the topological sort algorithm, instruction template, wave summary generation, and results aggregation into referenced files, keeping SKILL.md as a concise overview with usage examples and phase descriptions.

Remove or drastically reduce verbose sections like the optimization comparison table, the large ASCII diagram, and inline explanations — focus on the essential workflow steps and defer implementation details to bundle files.

Define or document the runtime environment explicitly — clarify what Write, Read, Bash, spawn_agents_on_csv, parseCsv, toCsv, and $ARGUMENTS are, or reference a runtime specification file.

Add explicit validation checkpoints: verify Phase 0 analysis output is valid JSON before proceeding, validate generated documents against doc_sections requirements, and add a recovery path for failed tasks beyond just skipping dependents.
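The first suggestion's split might look like the following bundle layout (all file names under `references/` are hypothetical):

```
project-documentation-workflow/
├── SKILL.md                     # concise overview, usage examples, phase descriptions
└── references/
    ├── topological-sort.md      # wave computation algorithm
    ├── instruction-template.md  # per-task agent instructions
    ├── wave-summary.md          # inter-wave synthesis
    └── results-aggregation.md   # final assembly of the documentation suite
```

SKILL.md would then link each phase to its reference file instead of inlining the full code.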

Dimension scores:

- Conciseness — 1/3: Extremely verbose at ~500+ lines, with extensive inline code that could be modularized. Includes redundant explanations, mixed Chinese/English commentary, and large code blocks that repeat concepts. The optimization comparison table and ASCII diagrams add bulk without proportional value.

- Actionability — 2/3: Provides substantial code examples that appear executable, but relies on undefined functions (parseCsv, toCsv, fileExists, Write, Read, spawn_agents_on_csv) and an unclear runtime environment (ccw cli, $ARGUMENTS). The code is illustrative but not truly copy-paste ready without significant context about the execution framework.

- Workflow Clarity — 2/3: The three-phase workflow is clearly sequenced, with a good ASCII overview diagram, dependency checking, and wave summaries between waves. However, validation and error recovery are incomplete: failed-task handling just skips dependents with no recovery path, generated documents are never validated, and the Phase 0 analysis step runs in the background with no explicit wait or completion check before its results are parsed.

- Progressive Disclosure — 1/3: All content is monolithically inlined in a single massive file with no references to supporting files. The CSV schema, topological sort algorithm, instruction template, wave summary generation, and results aggregation could all be separate referenced files. No bundle files are provided despite the complexity warranting them.

Total: 6 / 12 — Passed
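The missing checkpoints called out above (validate Phase 0 output before proceeding; do more than silently skip dependents on failure) could be sketched as follows. `REQUIRED_KEYS` and the dependency-map shape are assumptions, since the skill's actual Phase 0 schema is not specified in this review:

```python
import json

REQUIRED_KEYS = {"tasks", "doc_sections"}  # hypothetical Phase 0 schema

def load_phase0_analysis(raw):
    """Fail fast if the Phase 0 analysis output is not valid JSON or is
    missing required keys, instead of silently proceeding to decomposition."""
    try:
        analysis = json.loads(raw)
    except json.JSONDecodeError as e:
        raise ValueError(f"Phase 0 output is not valid JSON: {e}") from e
    missing = REQUIRED_KEYS - analysis.keys()
    if missing:
        raise ValueError(f"Phase 0 output missing keys: {sorted(missing)}")
    return analysis

def skip_transitive_dependents(failed, deps):
    """After a failed task has exhausted its retries, compute every task
    that transitively depends on it, so the run can report them as skipped
    explicitly rather than leaving their status undefined.

    deps: dict mapping task name -> list of task names it depends on.
    """
    skipped, frontier = set(), {failed}
    while frontier:
        current = frontier.pop()
        for task, ds in deps.items():
            if current in ds and task not in skipped:
                skipped.add(task)
                frontier.add(task)
    return skipped
```

A retry wrapper (attempt each task once more before declaring it failed) plus a per-wave call to `skip_transitive_dependents` would give the recovery path the review asks for.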

Validation: 72%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 8 / 11 checks passed

- skill_md_line_count — Warning: SKILL.md is long (809 lines); consider splitting into references/ and linking.

- allowed_tools_field — Warning: 'allowed-tools' contains unusual tool name(s).

- frontmatter_unknown_keys — Warning: Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 8 / 11 — Passed

Repository: catlog22/Claude-Code-Workflow (Reviewed)

