
analyze-with-file

Interactive collaborative analysis with documented discussions, inline exploration, and evolving understanding.

Quality

27%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Advisory

Suggest reviewing before use

Optimize this skill with Tessl

npx tessl skill review --optimize ./.codex/skills/analyze-with-file/SKILL.md

Quality

Discovery

0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is extremely vague and abstract, reading more like a tagline than a functional skill description. It fails to specify what domain it operates in, what concrete actions it performs, and when Claude should select it. Without any specific triggers, concrete capabilities, or domain focus, this description would be nearly useless for skill selection among multiple options.

Suggestions

Specify the concrete domain and actions: what type of analysis is performed, what format are the 'documented discussions' in, and what outputs are produced (e.g., 'Creates annotated analysis documents with threaded discussion comments and iterative findings').

Add an explicit 'Use when...' clause with natural trigger terms that users would actually say, such as specific file types, task names, or workflow descriptions.

Narrow the scope to a distinct niche to avoid conflicting with other analytical skills—clarify whether this is for data analysis, research review, code analysis, or another specific domain.
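Putting these suggestions together, an improved SKILL.md description might look like the following sketch. Only the skill name comes from this review; the wording, triggers, and output claims are illustrative assumptions about what the skill actually does:

```markdown
---
name: analyze-with-file
description: >
  Creates annotated analysis documents for a target file: threaded discussion
  comments, inline exploration notes, and findings that are refined across
  rounds. Use when the user asks to "analyze this file together", "review and
  discuss this document", or wants a written record of an evolving analysis.
---
```

A description in this shape answers all four scored dimensions at once: concrete actions (what), an explicit "Use when..." clause (when), natural trigger phrases, and a scope narrow enough to avoid colliding with other analytical skills.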

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description uses vague, abstract language like 'interactive collaborative analysis', 'documented discussions', 'inline exploration', and 'evolving understanding' without naming any concrete actions. There are no specific operations like 'analyze data', 'generate reports', or 'create summaries'. | 1 / 3 |
| Completeness | The 'what' is extremely vague (collaborative analysis) and there is no 'when' clause at all. There are no explicit triggers or 'Use when...' guidance to help Claude know when to select this skill. | 1 / 3 |
| Trigger Term Quality | The terms used ('collaborative analysis', 'documented discussions', 'inline exploration', 'evolving understanding') are abstract buzzwords that users would rarely naturally say. A user would more likely say 'analyze this together', 'discuss this data', or reference a specific domain. | 1 / 3 |
| Distinctiveness / Conflict Risk | The description is so generic that it could overlap with virtually any analytical or discussion-based skill. 'Interactive collaborative analysis' could apply to data analysis, code review, document review, research, or countless other domains. | 1 / 3 |
| Total | | 4 / 12 |

Passed

Implementation

55%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is remarkably thorough and actionable with excellent workflow clarity, explicit validation gates, and concrete executable guidance throughout. However, it is severely over-engineered for a single SKILL.md file — at 600+ lines it consumes enormous context window budget by inlining every JSON schema, markdown template, reference table, and recording format rather than splitting them into bundle files. The verbosity significantly undermines its practical utility despite the high quality of the actual instructions.

Suggestions

Extract JSON schemas (state.json, exploration-codebase.json, research.json, handoff.json) into separate reference files (e.g., schemas/state.schema.json) and reference them with one-line links from SKILL.md.

Move the discussion.md template structure, round template, and record formats into a separate TEMPLATES.md file, keeping only a brief summary in the main skill.

Move reference tables (Analysis Dimensions, Dimension-Direction Mapping, Perspectives, Depth Levels) into a REFERENCE.md file — these are lookup tables that don't need to be in the main flow.

Trim explanatory text that Claude already understands — e.g., remove detailed explanations of how weighted averages work, what confidence scoring measures, and how slug generation works. Replace with concise specifications only.
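As a minimal sketch of the first suggestion in practice, a schema section in SKILL.md could shrink to a one-line pointer. The `schemas/` path, heading, and wording below are assumptions for illustration, not taken from the skill itself:

```markdown
## State file

Analysis state is tracked in `state.json`. The full schema lives in
[schemas/state.schema.json](schemas/state.schema.json); load it only when you
need to validate or repair the state file.
```

This keeps the main flow short while preserving the detail for the (rare) moments it is actually needed, which is exactly what the Progressive Disclosure dimension below penalizes the current inline approach for.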

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | This skill is extremely verbose at 600+ lines with extensive JSON schemas, tables, templates, and detailed phase descriptions. Much of this content (e.g., explaining what confidence scoring is, how to generate slugs, detailed JSON schemas for every artifact) could be dramatically condensed. Claude doesn't need step-by-step explanations of how to compute weighted averages or how to format markdown tables; it already knows these things. | 1 / 3 |
| Actionability | The skill provides highly concrete, executable guidance: specific CLI commands (git rev-parse, ccw spec load), exact JSON schemas for every artifact, precise markdown templates for discussion.md, specific function calls (functions.request_user_input, functions.update_plan, web.run), and detailed trigger conditions with exact thresholds. Everything is copy-paste ready. | 3 / 3 |
| Workflow Clarity | The multi-phase workflow is exceptionally well-sequenced with explicit validation checkpoints at every stage: readiness gates before Phase 4, intent coverage verification, findings-to-recommendations traceability gates, pressure pass requirements, stall detection with recovery paths, and clear exit criteria for each phase. Feedback loops (validate → fix → retry) are built into the readiness gate and stall detection mechanisms. | 3 / 3 |
| Progressive Disclosure | The entire skill is a monolithic wall of text with no bundle files to offload detailed schemas, templates, or reference tables. The JSON schemas, markdown templates, dimension mappings, error handling tables, and recording protocols could all be split into separate reference files. Everything is inline, making the skill extremely long and hard to navigate despite having internal anchor links. | 1 / 3 |
| Total | | 8 / 12 |

Passed

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| skill_md_line_count | SKILL.md is long (889 lines); consider splitting into references/ and linking | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 |

Passed
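The frontmatter_unknown_keys warning can usually be cleared without losing information by nesting unrecognized keys under a metadata block instead of deleting them. A hedged sketch, in which the specific keys (author, version) are invented for illustration since the review does not say which keys triggered the warning:

```markdown
---
name: analyze-with-file
description: ...
metadata:
  author: catlog22    # example unknown key, moved under metadata
  version: "1.0"      # example unknown key, moved under metadata
---
```

Keys the spec recognizes (such as name and description) stay at the top level; everything else lives under metadata where validators ignore it.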

Repository
catlog22/Claude-Code-Workflow
Reviewed
