
review-cycle

Unified multi-dimensional code review with automated fix orchestration. Supports session-based (git changes) and module-based (path patterns) review modes with 7-dimension parallel analysis, iterative deep-dive, and automated fix pipeline. Triggers on "workflow:review-cycle", "workflow:review-session-cycle", "workflow:review-module-cycle", "workflow:review-cycle-fix".
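The session-vs-module split in the description can be pictured with a small dispatch sketch (hypothetical; the skill's actual mode-detection code is not shown on this page, and the field names here are assumed for illustration):

```javascript
// Hypothetical sketch of the two review modes named in the description.
// "paths" is an assumed argument name, not taken from the skill itself.
function detectReviewMode(args = {}) {
  if (Array.isArray(args.paths) && args.paths.length > 0) {
    // module-based review: caller supplied path patterns
    return { mode: "module", targets: args.paths };
  }
  // session-based review: fall back to the current git changes
  return { mode: "session", targets: ["git-changes"] };
}
```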

78

Quality

73%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Risky

Do not use without reviewing

Optimize this skill with Tessl

npx tessl skill review --optimize ./.codex/skills/review-cycle/SKILL.md

Quality

Discovery

85%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is technically detailed and specific about capabilities, with clear trigger conditions and a distinct niche. Its main weakness is that the trigger terms are structured workflow commands rather than natural language phrases a user would organically use, which limits discoverability in natural conversation. The description is well-structured but assumes users know the exact workflow command syntax.

Suggestions

Add natural language trigger terms alongside the workflow commands, e.g., 'Use when the user asks for code review, wants to review git changes, needs automated code fixes, or mentions reviewing a module'

Include common user phrasings like 'review my code', 'check for issues', 'fix code problems' to improve matching against natural user requests
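A description revised along these lines might look like the following sketch (the frontmatter field names are assumed from common SKILL.md conventions and are not confirmed by this review):

```yaml
# Hypothetical SKILL.md frontmatter sketch; field names are assumptions.
name: review-cycle
description: >
  Unified multi-dimensional code review with automated fix orchestration.
  Use when the user asks for a code review, wants to review git changes,
  needs automated code fixes, or mentions reviewing a module; common
  phrasings include "review my code", "check for issues", and
  "fix code problems". Also triggers on "workflow:review-cycle",
  "workflow:review-session-cycle", "workflow:review-module-cycle",
  "workflow:review-cycle-fix".
```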

Dimension scores

Specificity: 3 / 3

Lists multiple specific concrete actions: session-based and module-based review modes, 7-dimension parallel analysis, iterative deep-dive, automated fix pipeline, and fix orchestration. These are concrete, specific capabilities.

Completeness: 3 / 3

Clearly answers both 'what' (multi-dimensional code review with automated fix orchestration, session-based and module-based modes, 7-dimension analysis, fix pipeline) and 'when' (explicit triggers listed with 'Triggers on' clause specifying the exact workflow commands).

Trigger Term Quality: 2 / 3

The trigger terms are workflow-prefixed command strings ('workflow:review-cycle', etc.) rather than natural language terms a user would say. A user is more likely to say 'review my code', 'code review', 'fix issues' than these structured workflow commands. Some relevant keywords like 'review', 'fix', and 'git changes' appear in the body but aren't positioned as natural trigger terms.

Distinctiveness / Conflict Risk: 3 / 3

The description carves out a very specific niche with its unique workflow trigger commands, specific review modes (session-based vs module-based), and the 7-dimension parallel analysis concept. This is unlikely to conflict with generic code review or linting skills.

Total: 11 / 12 (Passed)

Implementation

62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill excels at actionability and workflow clarity with concrete code, clear phase sequencing, explicit decision points, and comprehensive error handling. However, it is severely bloated — the same information is presented in multiple formats (architecture diagram, execution flow, data flow), and large sections like the subagent API reference, progress tracking patterns, and error tables should be in referenced files rather than inline. The skill would benefit greatly from being trimmed to an overview that delegates detail to its phase documents.

Suggestions

Move the Subagent API Reference, Progress Tracking Pattern, and Error Handling sections to separate referenced files (e.g., api-reference.md, error-handling.md) to reduce SKILL.md to a true orchestrator overview.

Consolidate the three redundant representations (Architecture Overview diagram, Execution Flow, Data Flow) into a single concise execution flow with phase references.

Move the Phase 7.5 inline implementation to its own phase document (phases/07.5-export-task-json.md) consistent with how all other phases are structured.
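Applied together, these suggestions would reduce SKILL.md to a true orchestrator overview. A sketch of the resulting layout (the file names come from the suggestions above; everything else is assumed):

```markdown
# review-cycle (orchestrator overview)

## Execution flow
Phase 1 -> Phase 2 -> Phase 3 -> (criteria met) Phase 5 -> ... -> Phase 8
Phase 3 -> (needs deep-dive) Phase 4 -> back to Phase 3

## References
| Topic             | File                            | Load when            |
| ----------------- | ------------------------------- | -------------------- |
| Subagent API      | references/api-reference.md     | spawning agents      |
| Progress tracking | references/progress-tracking.md | long-running phases  |
| Error handling    | references/error-handling.md    | any phase fails      |
| Phase 7.5 export  | phases/07.5-export-task-json.md | exporting task JSON  |
```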

Dimension scores

Conciseness: 1 / 3

Extremely verbose at ~400+ lines. Contains massive ASCII diagrams, redundant data flow representations (architecture diagram, execution flow, and data flow all repeat similar information), extensive progress tracking boilerplate, and detailed error tables. Much of this could be in referenced phase docs. The subagent API reference section explains basic API usage Claude could learn from a brief example.

Actionability: 3 / 3

Highly actionable with executable JavaScript code for mode detection, spawn_agent calls, wait_agent with timeout handling, progress tracking patterns, and concrete CLI examples. The Phase 7.5 export logic includes complete executable code with guard clauses and JSON schema mapping.
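The timeout-wrapped subagent pattern credited here can be sketched as follows. `spawn_agent` and `wait_agent` are names taken from the review text; their bodies below are stubs for illustration, not the skill's real implementation:

```javascript
// Hypothetical sketch of spawning a per-dimension subagent and waiting on it
// with a timeout. All bodies are stubs; only the names come from the review.
async function spawn_agent(task) {
  // stub: a real call would start a review subagent for one dimension
  return { id: `agent-${task}`, task };
}

function agent_result(agent) {
  // stub: resolves as if the subagent finished its dimension quickly
  return new Promise((resolve) =>
    setTimeout(() => resolve({ agent: agent.id, status: "done" }), 10)
  );
}

async function wait_agent(agent, { timeoutMs }) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`${agent.id} timed out after ${timeoutMs}ms`)),
      timeoutMs
    );
  });
  try {
    return await Promise.race([agent_result(agent), timeout]);
  } finally {
    clearTimeout(timer); // avoid a dangling timer once the race settles
  }
}

async function runDimension(task) {
  const agent = await spawn_agent(task);
  try {
    return await wait_agent(agent, { timeoutMs: 5000 });
  } catch (err) {
    // non-blocking failure: record it and let the other dimensions continue
    return { agent: agent.id, status: "failed", reason: err.message };
  }
}
```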

Workflow Clarity: 3 / 3

Excellent multi-step workflow with clear phase sequencing, explicit conditional branching (Phase 3 → Phase 4 or Phase 5), validation checkpoints (Phase 3 aggregation decision criteria, Phase 8 test pass rate requirement), error recovery tables with blocking/non-blocking classification, and feedback loops (Phase 4 loops back to Phase 3). The CLI fallback chain provides degradation handling.
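The branch point described here (Phase 3 either loops into Phase 4 or proceeds to Phase 5) can be sketched as a decision function. The criteria names are assumptions; the actual decision criteria live in the skill's phase documents:

```javascript
// Hypothetical sketch of the Phase 3 aggregation decision point. The field
// names (criticalFindings, iteration, maxIterations) are assumed, not taken
// from the skill itself.
function nextPhase(aggregation) {
  const needsDeepDive =
    aggregation.criticalFindings > 0 &&
    aggregation.iteration < aggregation.maxIterations;
  // Phase 4 feeds its findings back into Phase 3; otherwise move to fixes
  return needsDeepDive ? "phase-4-deep-dive" : "phase-5-fix-pipeline";
}
```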

Progressive Disclosure: 2 / 3

References 9 phase documents appropriately with a clear table mapping phases to files and load conditions, which is good. However, the SKILL.md itself is monolithic — it inlines the full subagent API reference, complete progress tracking patterns, detailed error handling tables, and the entire Phase 7.5 implementation that could live in their own referenced files. The architecture overview, execution flow, and data flow sections are largely redundant with each other.

Total: 9 / 12 (Passed)

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

skill_md_line_count: Warning

SKILL.md is long (509 lines); consider splitting into references/ and linking

Total: 10 / 11 (Passed)

Repository: catlog22/Claude-Code-Workflow (Reviewed)
