Classify a problem using Cynefin triangulation before acting — routes to the right skill chain (investigate, brainstorm, probe, troubleshoot).
Sense-make → triangulate → decompose if needed → route. Domain determines agent pattern, not just skill.
Framing: $ARGUMENTS
CRITICAL: After EVERY AskUserQuestion call, check if answers are empty/blank. Known Claude Code bug: outside Plan Mode, AskUserQuestion silently returns empty answers without showing UI.
If answers are empty: DO NOT proceed with assumptions. Instead, stop and re-ask the question as plain chat text, then wait for the user's reply.
Read $ARGUMENTS. Attempt domain classification using constraint language.
If confidence ≥80%: Propose — but ALWAYS run the Adjacent Domain Challenge before confirming:
🎯 Auto-classified: [Domain] (constraint: [type])
→ Verb: [probe|analyze|execute|act|decompose]
→ Suggested route: [skill chain]
⚖️ Adjacent challenge: What if this is actually [nearest domain]?
[1-2 sentence argument for why it could be the adjacent domain]
[Why the original classification still holds — or doesn't]
Confirm? [Yes / Re-classify manually]

LLM bias warning: You are systematically biased toward Complicated (you have "expert knowledge" for everything, so you see governing constraints everywhere). When auto-classifying as Complicated, actively look for signs it might be Complex: Would two experts disagree? Is there genuine novelty? Has this specific combination been tried before?
If confidence <80% or no $ARGUMENTS: Skip to Step 1 (triangulation).
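The gate above can be sketched as follows — a minimal illustration only, where `domain` and `confidence` stand in for the output of the constraint-language classifier, and the adjacent-domain map is an assumption, not a real skill API:

```python
# Minimal sketch of the auto-classification gate. `domain` and `confidence`
# are assumed to come from the constraint-language classifier.
CONFIDENCE_THRESHOLD = 0.80

# Nearest Cynefin neighbor to challenge against before confirming
# (illustrative mapping; counters the documented bias toward Complicated)
ADJACENT = {
    "Clear": "Complicated",
    "Complicated": "Complex",
    "Complex": "Complicated",
    "Chaotic": "Complex",
}

def route_framing(arguments, domain, confidence):
    """Return the next step for a tentative auto-classification."""
    if not arguments or confidence < CONFIDENCE_THRESHOLD:
        return "step-1-triangulation"
    # Confident enough to propose, but never skip the adjacent challenge
    return f"propose:{domain} challenge:{ADJACENT[domain]}"
```

For example, `route_framing("add caching", "Complicated", 0.9)` proposes Complicated but still challenges with Complex; an empty `$ARGUMENTS` or low confidence falls through to triangulation.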
Do NOT ask user to self-classify by constraint type — people systematically misclassify. Instead, ask 3 concrete questions they CAN answer accurately.
Question Refinement: If $ARGUMENTS is vague or broad, generate 2-3 clarifying sub-questions to sharpen the problem statement. Present them inline before proceeding.
AskUserQuestion — all applicable questions in one call:
T1 — "Who's done this before?" (Keogh Scale)
T2 — "Same inputs, same result?" (Predictability)
T3 — "Can you take it apart?" (Disassembly)
Also ask:
Q-Scale (skip if Chaotic): Pebble or Boulder?
Q-Complicated sub (only if T1=2 AND T2=ordered):
Options: investigate / troubleshoot / investigate with troubleshoot sub-task

All 3 agree → High confidence. Classify directly.
2 of 3 agree → Classify by majority. Note the dissenting signal — it may indicate a liminal (boundary) state. Present:
🎯 [Domain] (2/3 tests agree)
⚠️ Liminal signal: T[N] suggests [adjacent domain] — [what this means]

All 3 disagree or T3=composite → Problem spans multiple domains. Go to Step 1.5.
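The convergence rules amount to a majority vote over the three tests; a minimal sketch, where the domain label each test points to is assumed for illustration:

```python
from collections import Counter

def triangulate(t1, t2, t3):
    """t1-t3: the domain each test points to; t3 may be 'composite'."""
    if t3 == "composite":
        return ("Composite", "decompose (Step 1.5)")
    domain, votes = Counter([t1, t2, t3]).most_common(1)[0]
    if votes == 3:
        return (domain, "high confidence")
    if votes == 2:
        # The dissenting test may indicate a liminal (boundary) state
        dissent = next(d for d in (t1, t2, t3) if d != domain)
        return (domain, f"liminal signal toward {dissent}")
    return ("Confused", "decompose (Step 1.5)")  # all three disagree
```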
Misclassification traps to watch for:
When triangulation doesn't converge, the problem is too coarse. Snowden's rule: "If you can't agree on it, break it down until you can."
🧩 Composite problem — sub-parts in different domains:
├── [sub-problem 1]: [Domain] → [verb] → [skill]
├── [sub-problem 2]: [Domain] → [verb] → [skill]
└── [sub-problem 3]: [Domain] → [verb] → [skill]
Suggested sequence: [order based on dependencies + risk]
Start with [highest-risk/Complex parts first — that's where value and risk concentrate]

AskUserQuestion: "Does this decomposition match your understanding? Adjust / Confirm / Re-frame"
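The "dependencies + risk" ordering can be sketched as: parts that unblock other work run first, with ties broken by domain risk. The risk ranking and data shapes below are assumptions for illustration:

```python
# Riskier domains first once blockers are respected (illustrative ranking)
RISK = {"Chaotic": 0, "Complex": 1, "Complicated": 2, "Clear": 3}

def sequence(parts, blocks):
    """parts: {name: domain}; blocks: {name: set of names it unblocks}.
    Parts that unblock more work sort first; ties broken by domain risk."""
    return sorted(parts, key=lambda n: (-len(blocks.get(n, ())), RISK[parts[n]]))
```

For example, a Complicated checkout bug that blocks a Complex migration is sequenced first, because dependencies outrank raw domain risk.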
Map triangulation result → domain → verb → skill chain:
| Domain | Constraint | Verb | Scale | Route | OpenSpec? |
|---|---|---|---|---|---|
| Clear | Rigid | execute | Pebble | Just code it | No |
| Clear | Rigid | execute | Boulder | openspec-develop directly | Yes |
| Complicated | Governing/Evolving | analyze | Any | investigate → openspec-plan | Boulder: yes |
| Complicated | Governing/Degraded | analyze | Any | troubleshoot → stabilize → re-frame | No |
| Complicated | Governing/Both | analyze | Any | investigate + troubleshoot sub-task | Boulder: yes |
| Complex | Enabling/no hypothesis | probe | Any | brainstorm → probe → openspec-plan | Yes |
| Complex | Enabling/has hypothesis | probe | Any | probe → sense → openspec-plan | Yes |
| Liminal Comp↔Complex | Mixed | probe+analyze | Any | probe first (resolve boundary) → re-frame | No |
| Chaotic | Absent | act | — | experiment → stabilize → frame-problem | No |
| Confused | Unknown | decompose | Any | Step 1.5 if not done, else ask user for more context | — |
| Composite | Mixed | per sub-problem | Mixed | Parallel/sequential per domain map from 1.5 | Per part |
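The table rows reduce to a lookup. A sketch — keys and route strings mirror the table and are illustrative, not a real skill API:

```python
# Routing table as a lookup: (domain, qualifier) -> skill chain
ROUTES = {
    ("Clear", "pebble"): "just code it",
    ("Clear", "boulder"): "openspec-develop",
    ("Complicated", "evolving"): "investigate → openspec-plan",
    ("Complicated", "degraded"): "troubleshoot → stabilize → re-frame",
    ("Complicated", "both"): "investigate + troubleshoot sub-task",
    ("Complex", "no hypothesis"): "brainstorm → probe → openspec-plan",
    ("Complex", "has hypothesis"): "probe → sense → openspec-plan",
    ("Chaotic", None): "experiment → stabilize → frame-problem",
}

def route(domain, qualifier=None):
    """Unknown combinations fall through to the Confused row."""
    return ROUTES.get((domain, qualifier), "decompose (Step 1.5) or ask user")
```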
For single-domain result, present:
🎯 [Domain] → [Verb] → [skill chain] | OpenSpec: [yes/no] | Scale: [boulder/pebble]
For composite result, present the domain map from Step 1.5 with full routing.
AskUserQuestion "Proceed?": Start chain / Re-frame / Skip framing.
On confirm → invoke first skill with $ARGUMENTS (or first sub-problem for composite).
Reframing a vague bug report:

```
# Input: "The dashboard is slow"
# Skill probes: What is slow? Under what conditions? For which users?
# Output: "Dashboard queries >10s for accounts with >1000 records (Complicated domain)"
```

Classifying a new feature request:

```
# Input: "Add AI recommendations to the product page"
# Skill classifies: Complex domain (unknown user behavior, emergent)
# Routes to: probe skill for safe-to-fail experiment design
```

Decomposing a composite problem:

```
# Input: "Migrate our monolith to microservices and fix the checkout bug"
# Skill decomposes:
# - Checkout bug: Complicated/Degraded → troubleshoot
# - Monolith migration: Complex → brainstorm → probe → openspec-plan
# Routes to: troubleshoot first (lower risk, unblocks Complex work)
```