
recipe-diagnose

Investigate problem, verify findings, and derive solutions


Quality

31%

Does it follow best practices?

Impact

No eval scenarios have been run

Security (by Snyk)

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/recipe-diagnose/SKILL.md

Quality

Discovery

0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is far too vague and generic to be useful for skill selection. It reads like a universal problem-solving statement rather than a description of a specific skill, providing no concrete actions, no domain context, and no trigger guidance. It would be indistinguishable from dozens of other skills in a multi-skill environment.

Suggestions

Specify the domain and concrete actions — e.g., 'Debug Python runtime errors by analyzing stack traces, inspecting variable states, and suggesting code fixes' instead of generic 'investigate problem'.

Add an explicit 'Use when...' clause with natural trigger terms — e.g., 'Use when the user encounters bugs, errors, exceptions, or asks for help debugging code.'

Include distinguishing details that separate this skill from other problem-solving or analysis skills, such as the types of problems, tools, or file formats involved.
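Taken together, these suggestions could be sketched as revised SKILL.md frontmatter. The wording below is illustrative only, not the skill's actual content; the sub-agent names come from the review's own findings:

```yaml
---
name: recipe-diagnose
description: >
  Diagnose software defects by orchestrating investigator, verifier, and
  solver sub-agents: reproduce the issue, collect evidence, validate the
  root cause, and propose fixes. Use when the user reports a bug, error,
  exception, or failing test, or asks to debug or troubleshoot code.
---
```

A description in this shape covers the "what" (concrete diagnostic actions), the "when" (an explicit trigger clause), and distinguishing detail (the orchestrated sub-agent workflow) that the scoring dimensions above ask for.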

Dimension | Reasoning | Score

Specificity

The description uses entirely vague, abstract language — 'investigate problem', 'verify findings', 'derive solutions' are generic actions that could apply to virtually any domain. No concrete actions or specific capabilities are listed.

1 / 3

Completeness

The description weakly addresses 'what' with vague language and completely omits 'when' — there is no 'Use when...' clause or any explicit trigger guidance.

1 / 3

Trigger Term Quality

The terms 'problem', 'findings', and 'solutions' are overly generic and not natural keywords a user would use to trigger a specific skill. There are no domain-specific or actionable trigger terms.

1 / 3

Distinctiveness Conflict Risk

This description is extremely generic and would conflict with nearly any problem-solving, debugging, analysis, or troubleshooting skill. There is nothing to distinguish it from other skills.

1 / 3

Total: 4 / 12

Passed

Implementation

62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides a well-structured diagnostic workflow with strong sequencing, validation checkpoints, and feedback loops. Its main weaknesses are the lack of concrete executable examples (relying on abstract sub-agent invocations and pseudo-templates) and moderate verbosity that could be tightened. The monolithic structure would benefit from splitting detailed templates and checklists into referenced files.

Suggestions

Provide a concrete JSON schema example for the expected output of each sub-agent (investigator, verifier, solver) instead of just listing field names abstractly.

Split the quality check checklist (Step 2) and final report template (Step 5) into separate referenced files to improve progressive disclosure and reduce the main file length.

Tighten the 'Orchestrator Definition' section — the 'Core Identity' quote and 'Execution Method' bullet list restate what the workflow already makes clear and could be reduced to a single sentence.
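As a sketch of the first suggestion, a concrete output contract for the investigator sub-agent might look like the following. The field names are hypothetical, chosen to match concepts the review mentions (problem type, coverage criteria, escalation); the skill itself only lists fields abstractly:

```json
{
  "problem_type": "runtime_error",
  "root_cause_hypotheses": [
    {
      "description": "Config value is null when the loader dereferences it",
      "evidence": ["stack trace frame 3", "missing key in config file"],
      "confidence": "high"
    }
  ],
  "coverage": {
    "files_examined": 12,
    "criteria_met": true
  },
  "needs_escalation": false
}
```

Pinning each sub-agent to an explicit structure like this lets the orchestrator's Step 2 quality check validate fields mechanically instead of interpreting free-form prose.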

Dimension | Reasoning | Score

Conciseness

The skill is moderately efficient but includes some verbose structural elements (e.g., the orchestrator identity declaration and repeated JSON prompt templates that could be more compact). The table for problem-type determination and the checklist format add useful structure, but sections such as 'Core Identity' and 'Execution Method' explain orchestration concepts at greater length than needed.

2 / 3

Actionability

The skill provides structured prompts and checklists, but relies heavily on abstract sub-agent invocations (investigator, verifier, solver, rule-advisor) without concrete executable code or commands. The agent tool invocations use pseudo-YAML prompt templates rather than actual executable examples, and expected outputs are described abstractly rather than with concrete JSON schemas.

2 / 3

Workflow Clarity

The multi-step workflow is clearly sequenced (Steps 0-5) with explicit validation checkpoints (Step 2 quality check with specific checklist items), feedback loops (re-run investigator if quality insufficient, max 2 iterations), coverage criteria definitions, and clear escalation paths (design_gap escalation, user approval after iteration limits). The ASCII flow diagram provides a good overview.

3 / 3

Progressive Disclosure

The content is a single monolithic file, over 200 lines long, with no references to supporting documents. The report template, quality-check criteria, and sub-agent prompt templates could be split into separate reference files. However, the internal organization, with clear headers and numbered steps, provides reasonable navigability within the single file.

2 / 3

Total: 9 / 12

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria | Description | Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning
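To clear this warning, non-standard frontmatter keys can usually be nested under a metadata block, as the warning itself suggests, rather than left at the top level. The custom key names below are hypothetical:

```yaml
---
name: recipe-diagnose
description: Investigate problem, verify findings, and derive solutions
metadata:
  author: shinpr    # hypothetical custom key, moved under metadata
  version: "1.0"    # hypothetical custom key, moved under metadata
---
```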

Total: 10 / 11

Passed

Repository
shinpr/claude-code-workflows
Reviewed
