
pantheon-ai/frame-problem

Classify a problem using Cynefin triangulation before acting — routes to the right skill chain (investigate, brainstorm, probe, troubleshoot).


frame-problem-to-brainstorm-llm.md

---
domain: complex
verb: probe
constraint-type: enabling
problem: {problem statement from classification}
scale: {boulder|pebble}
---

Frame-Problem to Brainstorm Handoff

Thinking Trail

  • Considered: {what approaches were weighed during framing}
  • Rejected: {why other domains didn't fit — prevents brainstorm re-exploring}
  • Surprised by: {unexpected signals in the problem statement}
  • Triangulation: T1={Keogh level}, T2={predictability}, T3={disassembly} — {convergence}
  • Models used: Cynefin triangulation (Keogh + Predictability + Disassembly)
  • Constraints discovered: {enabling constraints identified — cause-effect unclear}

Decisions

  1. Domain: Complex (enabling constraints — experimentation needed)
  2. Route: brainstorm → probe → openspec-plan
  3. Rationale: {why divergent exploration before probing}
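
The route decided above can be sketched as a lookup from Cynefin domain to skill chain. This is a minimal illustrative sketch, not part of the skill itself: only the "complex" chain comes from this handoff, and the `ROUTES` table and `route` function names are hypothetical.

```python
# Hypothetical sketch of domain-to-chain routing. Only the "complex"
# entry is taken from this handoff; other Cynefin domains would map
# to their own chains (e.g. investigate or troubleshoot).
ROUTES = {
    "complex": ["brainstorm", "probe", "openspec-plan"],
}

def route(domain: str) -> list[str]:
    """Return the skill chain for a classified Cynefin domain."""
    try:
        return ROUTES[domain]
    except KeyError:
        raise ValueError(f"no route for domain: {domain}") from None
```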

Actions Taken

  • Classified via Q1: enabling constraints identified
  • Scale determined: {boulder|pebble}

Output

🎯 Complex → Probe → /brainstorm → /probe | OpenSpec: yes | Scale: {scale}

Domain Transition

From: Confused → To: Complex. Constraint shift: unknown → enabling. Problem has no clear cause-effect; need divergent exploration to surface hypotheses before probing.

For /brainstorm

  • Goal: Generate 3-5 hypotheses about what's happening and why
  • Constraint: Each hypothesis must be falsifiable — probe needs testable claims
  • Output format: Ranked hypotheses with confidence levels and suggested probe approaches
  • Do NOT: Converge to a solution — brainstorm feeds probe, not implementation
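
The output format asked for above — ranked, falsifiable hypotheses with confidence levels and probe suggestions — could be modeled as a small record type. A sketch assuming Python; the `Hypothesis` class and its field names are illustrative, not a schema defined by the skill.

```python
from dataclasses import dataclass

# Hypothetical sketch of the brainstorm output described above.
@dataclass
class Hypothesis:
    claim: str           # must be falsifiable, so probe can test it
    confidence: float    # 0.0-1.0, used for ranking
    probe_approach: str  # suggested way to test the claim

def rank(hypotheses: list[Hypothesis]) -> list[Hypothesis]:
    """Order hypotheses by confidence, highest first."""
    return sorted(hypotheses, key=lambda h: h.confidence, reverse=True)
```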

Accumulated Context

Token guidance:

  • Target 300 tokens inline; soft cap 600 tokens inline per handoff.
  • For depth, use references — point to thinking artifact files rather than embedding full content.
  • If you need more, move detail to a knowledge file and reference it.
  • Accumulated cap: 800 tokens across a chain — compress to 200 at cap (keep: decisions, constraints, rejected paths).
  • References to thinking files do NOT count toward the cap.
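
The token budget rules above can be sketched as a simple check. A hedged sketch: the function and constant names are hypothetical, and it assumes reference tokens have already been excluded from the counts.

```python
# Budget constants taken from the token guidance above.
INLINE_TARGET = 300  # target inline tokens per handoff
INLINE_CAP = 600     # soft cap inline per handoff
CHAIN_CAP = 800      # accumulated cap across a chain
COMPRESS_TO = 200    # compress accumulated context to this at the cap

def budget_action(inline_tokens: int, accumulated_tokens: int) -> str:
    """Apply the handoff token-budget rules; reference tokens
    are assumed to be excluded before calling this."""
    if accumulated_tokens >= CHAIN_CAP:
        return f"compress accumulated context to {COMPRESS_TO} tokens"
    if inline_tokens > INLINE_CAP:
        return "move detail to a knowledge file and reference it"
    if inline_tokens > INLINE_TARGET:
        return "prefer references over inline detail"
    return "ok"
```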

references/

  • frame-problem-to-brainstorm-llm.md
  • frame-problem-to-experiment-llm.md
  • frame-problem-to-investigate-llm.md
  • frame-problem-to-probe-liminal-llm.md
  • frame-problem-to-probe-llm.md
  • frame-problem-to-troubleshoot-llm.md

SKILL.md

tile.json