recipe-diagnose

Investigate a problem, verify findings, and derive solutions


Context: Diagnosis flow to identify root cause and present solutions

Target problem: $ARGUMENTS

Orchestrator Definition

Core Identity: "I am not a worker. I am an orchestrator."

Execution Method:

  • Investigation → performed by investigator
  • Verification → performed by verifier
  • Solution derivation → performed by solver

Orchestrator invokes sub-agents and passes structured JSON between them.

Task Registration: Register execution steps using TaskCreate and proceed systematically. Update status using TaskUpdate.

Step 0: Problem Structuring (Before investigator invocation)

0.1 Problem Type Determination

| Type | Criteria |
| --- | --- |
| Change Failure | Some change is indicated to have occurred before the problem appeared |
| New Discovery | No relation to changes is indicated |

If uncertain, ask the user whether any changes were made right before the problem occurred.

0.2 Information Supplementation for Change Failures

If the following are unclear, ask with AskUserQuestion before proceeding:

  • What was changed (cause change)
  • What broke (affected area)
  • Relationship between both (shared components, etc.)

0.3 Problem Essence Understanding

Invoke rule-advisor via Agent tool:

subagent_type: rule-advisor
description: "Problem essence analysis"
prompt: Identify the essence and required rules for this problem: [Problem reported by user]

Confirm from rule-advisor output:

  • taskAnalysis.mainFocus: Primary focus of the problem
  • mandatoryChecks.taskEssence: Root problem beyond surface symptoms
  • selectedRules: Applicable rule sections
  • warningPatterns: Patterns to avoid
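
To make the shape of this hand-off concrete, here is a hypothetical example of the rule-advisor JSON the orchestrator reads in this step. The field names follow the checklist above; every value is illustrative, not prescribed by the skill:

```python
# Hypothetical example of the rule-advisor output consumed in Step 0.3.
# Field names follow the checklist above; all values are illustrative.
rule_advisor_output = {
    "taskAnalysis": {
        "mainFocus": "Intermittent 500 errors after the auth middleware change",
    },
    "mandatoryChecks": {
        "taskEssence": "Session tokens are invalidated by a stricter expiry check",
    },
    "selectedRules": ["error-handling", "backward-compatibility"],
    "warningPatterns": ["fixing symptoms without comparing pre-change behavior"],
}

# The orchestrator reads each field before building the investigator prompt.
task_essence = rule_advisor_output["mandatoryChecks"]["taskEssence"]
```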

0.4 Reflecting in investigator Prompt

Include the following in investigator prompt:

  1. Problem essence (taskEssence)
  2. Key applicable rules summary (from selectedRules)
  3. Investigation focus (investigationFocus): Convert warningPatterns to "points prone to confusion or oversight in this investigation"
  4. For change failures, additionally include:
    • Detailed analysis of the change content
    • Commonalities between cause change and affected area
    • Determination of whether the change is a "correct fix" or "new bug" with comparison baseline selection
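
Items 1 through 4 above amount to templating the Step 0 outputs into a single prompt string. A minimal sketch, assuming the JSON field names from Step 0.3 (`build_investigator_prompt` and its parameters are illustrative names, not part of the skill):

```python
# Sketch of assembling the investigator prompt from Step 0 outputs.
# The function and parameter names are illustrative, not defined by the skill.
def build_investigator_prompt(phenomenon, task_essence, selected_rules,
                              warning_patterns, change_details=None):
    # Item 3: convert warningPatterns into an investigation focus.
    focus = "; ".join(f"watch for: {p}" for p in warning_patterns)
    lines = [
        f"Phenomenon: {phenomenon}",
        f"Problem essence: {task_essence}",
        f"Key rules: {', '.join(selected_rules)}",
        f"Investigation focus: {focus}",
    ]
    if change_details:  # item 4: change-failure case only
        lines.append(f"Change details: {change_details}")
    return "\n".join(lines)
```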

Diagnosis Flow Overview

Problem → investigator → verifier → solver ─┐
                 ↑                          │
                 └── confidence < high ─────┘
                      (max 2 iterations)

confidence=high reached → Report

Context Separation: Pass only structured JSON output to each step. Each step starts fresh with the JSON data only.
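
The separation rule can be sketched as a thin wrapper around each sub-agent call: serialize the previous step's output, invoke the agent with only that payload, and parse the result (`invoke_agent` here is a hypothetical stand-in for the Agent tool):

```python
import json

# Minimal sketch of context separation: each sub-agent call receives only the
# serialized JSON output of the previous step, never the full conversation.
# invoke_agent is a hypothetical stand-in for the Agent tool.
def run_step(invoke_agent, subagent_type, prior_json):
    payload = json.dumps(prior_json)   # pass structured data only
    raw = invoke_agent(subagent_type, payload)
    return json.loads(raw)             # each step starts fresh from JSON
```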

Execution Steps

Register the following using TaskCreate and execute:

Step 1: Investigation (investigator)

Agent tool invocation:

subagent_type: investigator
description: "Investigate problem"
prompt: |
  Comprehensively collect information related to the following phenomenon.

  Phenomenon: [Problem reported by user]

  Problem essence: [taskEssence from Step 0.3]
  Investigation focus: [investigationFocus from Step 0.4]

  [For change failures, additionally include:]
  Change details: [What was changed]
  Affected area: [What broke]
  Shared components: [Commonalities between cause and effect]

Expected output: Evidence matrix, comparison analysis results, causal tracking results, list of unexplored areas, investigation limitations

Step 2: Investigation Quality Check

Review investigation output:

Quality Check (verify JSON output contains the following):

  • comparisonAnalysis is present and normalImplementation is not null (comparison target found) OR explicitly recorded as "no working implementation found"
  • causalChain for each hypothesis reaches a stop condition (code change / design decision / external constraint)
  • causeCategory for each hypothesis is one of: typo / logic_error / missing_constraint / design_gap / external_factor
  • investigationSources covers at least 3 distinct source types (code, history, dependency, config, document, external)
  • Investigation covers investigationFocus items (when provided in Step 0.4)
  • Each hypothesis has at least 1 entry in supportingEvidence with a source field citing a specific file or location
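
The checklist above can be mechanized as a validation pass over the investigator's JSON. The sketch below assumes one plausible structure for that JSON (the key names mirror the checklist, but the exact schema is not specified by the skill):

```python
# Sketch of the Step 2 quality gate over the investigator's JSON output.
# Key names mirror the checklist above; the exact schema is an assumption.
STOP_CONDITIONS = {"code change", "design decision", "external constraint"}
CAUSE_CATEGORIES = {"typo", "logic_error", "missing_constraint",
                    "design_gap", "external_factor"}

def quality_gaps(report):
    gaps = []
    comp = report.get("comparisonAnalysis")
    if not comp or (comp.get("normalImplementation") is None
                    and not comp.get("noWorkingImplementationFound")):
        gaps.append("comparisonAnalysis missing or has no comparison target")
    for h in report.get("hypotheses", []):
        if h.get("causalChain", {}).get("stopCondition") not in STOP_CONDITIONS:
            gaps.append("hypothesis lacks a causal stop condition")
        if h.get("causeCategory") not in CAUSE_CATEGORIES:
            gaps.append("invalid causeCategory")
        if not any(e.get("source") for e in h.get("supportingEvidence", [])):
            gaps.append("hypothesis has no sourced supporting evidence")
    if len(set(report.get("investigationSources", []))) < 3:
        gaps.append("fewer than 3 distinct investigation source types")
    return gaps  # empty list means the quality gate passes
```

A non-empty return value is exactly the "Missing:" list fed back into the re-investigation prompt below.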

If quality insufficient: Re-run investigator specifying missing items explicitly:

prompt: |
  Re-investigate with focus on the following gaps:
  - Missing: [list specific missing items from quality check]

  Previous investigation results (for context, do not re-investigate covered areas):
  [Previous investigation JSON]

design_gap Escalation:

When investigator output contains causeCategory: design_gap or recurrenceRisk: high:

  1. Insert user confirmation before verifier execution
  2. Use AskUserQuestion: "A design-level issue was detected. How should we proceed?"
    • A: Attempt fix within current design
    • B: Include design reconsideration
  3. If user selects B, pass includeRedesign: true to solver

Proceed to verifier once quality is satisfied.

Step 3: Verification (verifier)

Agent tool invocation:

subagent_type: verifier
description: "Verify investigation results"
prompt: |
  Verify the following investigation results.

  Investigation results: [Investigation JSON output]

Expected output: Alternative hypotheses (at least 3), Devil's Advocate evaluation, final conclusion, confidence

Confidence Criteria:

  • high: No uncertainty affecting solution selection or implementation
  • medium: Uncertainty exists but resolvable with additional investigation
  • low: Fundamental information gap exists

Step 4: Solution Derivation (solver)

Agent tool invocation:

subagent_type: solver
description: "Derive solutions"
prompt: |
  Derive solutions based on the following verified conclusion.

  Causes: [verifier's conclusion.causes]
  Causes relationship: [causesRelationship: independent/dependent/exclusive]
  Confidence: [high/medium/low]

Expected output: Multiple solutions (at least 3), tradeoff analysis, recommendation and implementation steps, residual risks

Completion condition: confidence=high

When not reached:

  1. Return to Step 1, treating the uncertainties identified by the solver as new investigation targets
  2. Maximum 2 additional investigation iterations
  3. After 2 iterations without reaching high, present user with options:
    • Continue additional investigation
    • Execute solution at current confidence level
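
The completion loop above can be sketched as follows. All four callables (`run_investigation`, `run_verification`, `run_solver`, `ask_user`) are hypothetical stand-ins for the sub-agent invocations and AskUserQuestion; the skill does not define these names:

```python
# Sketch of the Step 4 completion loop: re-investigate while confidence < high,
# up to 2 extra iterations, then fall back to a user decision.
# All four callables are hypothetical stand-ins for the agent invocations.
def diagnose(problem, run_investigation, run_verification, run_solver, ask_user):
    extra_iterations = 0
    targets = None
    while True:
        evidence = run_investigation(problem, targets)
        verdict = run_verification(evidence)
        solution = run_solver(verdict)
        if verdict["confidence"] == "high":
            return solution  # completion condition met
        if extra_iterations >= 2:
            # 2 extra iterations exhausted: let the user decide
            choice = ask_user(["continue investigation",
                               "accept current confidence"])
            if choice == "accept current confidence":
                return solution
            extra_iterations = 0  # user opted to keep investigating
        else:
            extra_iterations += 1
        targets = solution.get("uncertainties")  # feed back into Step 1
```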

Step 5: Final Report Creation

Prerequisite: confidence=high achieved

After diagnosis completion, report to user in the following format:

## Diagnosis Result Summary

### Identified Causes
[Cause list from verification results]
- Causes relationship: [independent/dependent/exclusive]

### Verification Process
- Investigation scope: [Scope confirmed in investigation]
- Additional investigation iterations: [0/1/2]
- Alternative hypotheses count: [Number generated in verification]

### Recommended Solution
[Solution derivation recommendation]

Rationale: [Selection rationale]

### Implementation Steps
1. [Step 1]
2. [Step 2]
...

### Alternatives
[Alternative description]

### Residual Risks
[solver's residualRisks]

### Post-Resolution Verification Items
- [Verification item 1]
- [Verification item 2]

Completion Criteria

  • Executed investigator and obtained evidence matrix, comparison analysis, and causal tracking
  • Performed investigation quality check and re-ran if insufficient
  • Executed verifier and obtained confidence level
  • Executed solver
  • Achieved confidence=high (or obtained user approval after 2 additional iterations)
  • Presented final report to user
Repository
shinpr/claude-code-workflows