
analyze-with-file

Interactive collaborative analysis with documented discussions, inline exploration, and evolving understanding.


Quality

19%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./.codex/skills/analyze-with-file/SKILL.md

Quality

Discovery

0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is extremely vague and reads as a string of abstract buzzwords without any concrete actions, specific domains, or trigger guidance. It fails to communicate what the skill actually does, when it should be selected, or how it differs from other skills. A user or Claude would have no reliable basis for choosing this skill over any other.

Suggestions

Replace abstract language with concrete actions—specify exactly what this skill does (e.g., 'Conducts structured analysis sessions with documented reasoning chains, annotated data exploration, and iterative hypothesis refinement').

Add an explicit 'Use when...' clause with natural trigger terms that describe the situations or user requests that should activate this skill (e.g., 'Use when the user asks for a deep-dive analysis, wants to explore data collaboratively, or requests documented reasoning').

Clarify the domain or context to make the skill distinctive—what kind of analysis? What kind of discussions? What format does the output take? This will reduce conflict risk with other analytical skills.
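Taken together, these suggestions might produce frontmatter along the following lines. This is an illustrative sketch only; the description wording and trigger phrases are hypothetical, not the skill's actual metadata:

```yaml
# Hypothetical SKILL.md frontmatter; description text is illustrative only
name: analyze-with-file
description: >
  Conducts structured, multi-round analysis sessions that record reasoning
  chains, annotated data exploration, and iterative hypothesis refinement
  to a session file. Use when the user asks for a deep-dive analysis, wants
  to explore a dataset or codebase collaboratively, or requests documented
  reasoning they can revisit and refine.
```

Note how the rewrite names concrete actions (record, annotate, refine) and adds a "Use when..." clause with phrases a user would plausibly type.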

Dimension | Reasoning | Score

Specificity

The description uses vague, abstract language like 'interactive collaborative analysis', 'documented discussions', 'inline exploration', and 'evolving understanding' without naming any concrete actions. There are no specific capabilities listed—no verbs describing what the skill actually does.

1 / 3

Completeness

The description vaguely gestures at 'what' (collaborative analysis) but provides no explicit 'when' clause or trigger guidance. Both the what and when are extremely weak and unclear.

1 / 3

Trigger Term Quality

The terms used ('collaborative analysis', 'documented discussions', 'inline exploration', 'evolving understanding') are abstract buzzwords that users would rarely naturally say when requesting help. There are no concrete trigger terms a user would type.

1 / 3

Distinctiveness Conflict Risk

The description is so generic and abstract that it could overlap with virtually any analytical, discussion-based, or exploratory skill. There is nothing that carves out a clear niche or distinguishes it from other skills.

1 / 3

Total: 4 / 12

Passed

Implementation

39%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill has an exceptionally well-designed workflow with clear phases, mandatory validation gates, and thorough feedback loops, demonstrating strong workflow clarity. However, it is severely undermined by extreme verbosity (over 600 lines in a single file), with extensive pseudocode implementations and reference tables that should be split into separate files. The content would benefit enormously from aggressive condensation and progressive disclosure into linked reference documents.

Suggestions

Reduce the main skill body to ~100-150 lines covering the core workflow, recording protocol summary, and quick start. Move Implementation Details (Phase 0-5 pseudocode), Reference tables (dimensions, perspectives, depth levels, dimension-direction mapping), and Templates into separate linked files.

Remove JavaScript pseudocode for obvious operations (flag parsing, session ID generation, folder creation) and replace with brief imperative instructions — Claude can implement these without step-by-step code.

Convert the undefined function calls (identifyDimensions, request_user_input, assessCoverage, generateFocusOptions) into either real executable code or replace with clear prose instructions describing what to do.

Extract the Recording Protocol formats, Consolidation Rules, and Round Documentation Pattern into a single TEMPLATES.md reference file linked from the main skill.
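One way the restructuring described in these suggestions could look on disk (the file names under `references/` are hypothetical):

```
.codex/skills/analyze-with-file/
├── SKILL.md          # ~100-150 lines: core workflow, recording protocol
│                     # summary, quick start
└── references/
    ├── PHASES.md     # Phase 0-5 implementation details, as prose rather
    │                 # than JavaScript pseudocode
    ├── REFERENCE.md  # dimensions, perspectives, depth levels,
    │                 # dimension-direction mapping tables
    └── TEMPLATES.md  # recording protocol formats, consolidation rules,
                      # round documentation pattern
```

The main SKILL.md would link to each reference file so the agent loads detail only when a phase requires it.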

Dimension | Reasoning | Score

Conciseness

This skill is extremely verbose at over 600 lines. It over-explains every phase with extensive JavaScript pseudocode, detailed tables for every configuration option, and redundant template definitions. Much of this content (JSON schema structures, iteration logic, error handling tables) could be dramatically condensed. Claude doesn't need step-by-step JavaScript implementations of session ID generation or flag parsing.

1 / 3

Actionability

The skill provides concrete code snippets and structured workflows, but the JavaScript code is pseudocode (e.g., `identifyDimensions()`, `request_user_input()`, `assessCoverage()` are undefined functions). The bash quick-start examples use a `/codex:analyze-with-file` invocation pattern but the actual execution mechanics are unclear. Templates and formats are well-defined but not truly executable.

2 / 3

Workflow Clarity

The multi-step workflow is exceptionally well-sequenced with 6 clear phases (0-5), explicit success criteria per phase, mandatory gates (Intent Coverage Verification, Findings-to-Recommendations Traceability), validation checkpoints (Intent Drift Check every round >= 2), and clear feedback loops (discuss → refine → repeat up to 5 rounds). Error recovery is documented in a dedicated table.

3 / 3

Progressive Disclosure

This is a monolithic wall of text with everything inline. The Reference section alone contains 7 detailed tables that could be in separate files. The Recording Protocol, Templates, Implementation Details for each phase, and all reference tables are all crammed into a single document with no external file references. Internal anchor links exist but don't substitute for proper content splitting.

1 / 3

Total: 7 / 12

Passed

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

Criteria | Description | Result

skill_md_line_count

SKILL.md is long (967 lines); consider splitting into references/ and linking

Warning

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 9 / 11

Passed

Repository: catlog22/Claude-Code-Workflow (Reviewed)

