
# recipe-diagnose

> Investigate problem, verify findings, and derive solutions


- **Quality:** 31% (Does it follow best practices?)
- **Impact:** Pending (No eval scenarios have been run)
- **Security (by Snyk):** Passed (No known issues)

Optimize this skill with Tessl: `npx tessl skill review --optimize ./skills/recipe-diagnose/SKILL.md`

## Quality

### Discovery: 0%

*Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.*

This description is extremely vague and provides no concrete information about what domain, tools, or specific tasks the skill covers. It reads like a generic problem-solving statement that could apply to any skill, making it nearly impossible for Claude to correctly select it from a pool of available skills. It lacks trigger terms, explicit 'when to use' guidance, and any distinguishing characteristics.

**Suggestions**

- Specify the domain and concrete actions — e.g., instead of 'investigate problem', state what kind of problems (debugging code errors, diagnosing network issues, analyzing data anomalies) and what specific techniques or tools are used.
- Add an explicit 'Use when...' clause with natural trigger terms a user would say, such as 'Use when the user reports a bug, asks to debug code, or needs root cause analysis of an error.'
- Include distinguishing details that separate this skill from other analytical or troubleshooting skills, such as specific file types, technologies, or methodologies involved.
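To make these suggestions concrete, one possible rewrite of the skill's frontmatter is sketched below. The skill name is taken from this page, but the description wording is purely illustrative, not the maintainer's actual fix:

```yaml
---
name: recipe-diagnose
description: >
  Diagnose software defects through an investigate -> verify -> solve workflow
  with sub-agent orchestration. Use when the user reports a bug, asks to debug
  a code error, or needs root cause analysis of a failure.
---
```

A description in this shape names the domain (software defects), the method (sub-agent workflow), and the natural trigger phrases ('reports a bug', 'debug', 'root cause analysis') that the Discovery dimensions score.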

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description uses entirely vague, abstract language — 'investigate problem', 'verify findings', 'derive solutions' are generic actions that could apply to virtually any domain. No concrete actions or specific capabilities are listed. | 1 / 3 |
| Completeness | The description weakly addresses 'what' with vague verbs and completely omits any 'when' guidance. There is no 'Use when...' clause or equivalent explicit trigger guidance. | 1 / 3 |
| Trigger Term Quality | The terms 'problem', 'findings', and 'solutions' are extremely generic and not natural trigger keywords a user would use to invoke a specific skill. There are no domain-specific or actionable keywords. | 1 / 3 |
| Distinctiveness / Conflict Risk | This description is so generic it would conflict with nearly any problem-solving, debugging, analysis, or troubleshooting skill. There is nothing to distinguish it from other skills. | 1 / 3 |
| **Total** | | **4 / 12** |

Result: Passed

### Implementation: 62%

*Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.*

This skill defines a sophisticated multi-step diagnostic workflow with strong workflow clarity, including explicit validation checkpoints, feedback loops, and escalation paths. Its main weaknesses are that the sub-agent invocations and task management tools are abstract rather than grounded in specific executable APIs, reducing actionability. The content is moderately concise but could be tightened, and the monolithic structure would benefit from progressive disclosure via external reference files.

**Suggestions**

- Ground the sub-agent invocations in specific, executable tool calls — define the exact tool name, parameter schema, and expected response format for 'Agent tool', 'TaskCreate', 'TaskUpdate', and 'AskUserQuestion' so Claude knows precisely how to invoke them.
- Extract the final report template and the quality check checklist into separate referenced files to reduce the main skill's length and improve progressive disclosure.
- Remove or condense the 'Orchestrator Definition' section — the identity statement and execution method summary are redundant given the detailed steps that follow.
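As a sketch of what 'grounding' a sub-agent invocation could look like, the tool and sub-agent names below come from the skill itself, but the parameter schema is an assumption for illustration only:

```json
{
  "tool": "TaskCreate",
  "parameters": {
    "subagent": "investigator",
    "prompt": "Reproduce the reported error and collect stack traces.",
    "response_format": {
      "findings": "string",
      "confidence": "low | medium | high"
    }
  }
}
```

Spelling out the schema like this gives Claude an exact call shape to emit, rather than leaving it to infer how abstract tools such as 'TaskCreate' are invoked.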

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is fairly lengthy and includes some structural overhead that could be tightened (e.g., the orchestrator identity statement, verbose table for problem types). However, most content is procedural and necessary for the complex multi-step workflow, so it's not egregiously padded. | 2 / 3 |
| Actionability | The skill provides structured prompts and JSON field names to check, which is somewhat concrete. However, it relies on abstract sub-agent invocations (investigator, verifier, solver, rule-advisor) without defining what tools actually exist or how to invoke them precisely. The 'Agent tool' and 'TaskCreate/TaskUpdate' references are not grounded in any specific API or executable commands. | 2 / 3 |
| Workflow Clarity | The multi-step workflow is clearly sequenced with an explicit flow diagram, quality check checklists between steps, feedback loops (confidence < high → re-investigate, max 2 iterations), escalation paths for design gaps, and clear completion criteria. Validation checkpoints are explicit and well-defined. | 3 / 3 |
| Progressive Disclosure | The content is entirely self-contained in one file with no references to external documents for detailed sub-topics. While the structure uses headers and steps well, the monolithic nature means all detail levels are inline. For a skill this complex, splitting sub-agent prompt templates or the report format into separate files would improve organization. | 2 / 3 |
| **Total** | | **9 / 12** |

Result: Passed

### Validation: 90%

*Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.*

Validation: 10 / 11 checks passed

**Validation for skill structure**

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | | **10 / 11** |

Result: Passed
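The usual fix for a `frontmatter_unknown_keys` warning is to nest non-spec keys under a `metadata` block. The offending key is not named in this report, so `author` below is only a hypothetical example:

```yaml
---
name: recipe-diagnose
description: Investigate problem, verify findings, and derive solutions
# Hypothetical: a custom top-level key such as `author: shinpr` would trigger
# the warning; nesting it under `metadata` keeps the top level spec-clean.
metadata:
  author: shinpr
---
```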

**Repository:** shinpr/claude-code-workflows (Reviewed)

