pantheon-ai/frame-problem

Classify a problem using Cynefin triangulation before acting — routes to the right skill chain (investigate, brainstorm, probe, troubleshoot).

Quality: 89% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)


frame-problem-to-probe-llm.md

domain: complex
verb: probe
constraint-type: enabling
problem: {problem statement from classification}
scale: {boulder|pebble}

Frame-Problem to Probe Handoff

Thinking Trail

  • Considered: {what approaches were weighed during framing}
  • Rejected: {why other domains didn't fit}
  • Surprised by: {unexpected signals}
  • Triangulation: T1={Keogh level}, T2={predictability}, T3={disassembly} — {convergence}
  • Models used: Cynefin triangulation (Keogh + Predictability + Disassembly)
  • Constraints discovered: {enabling constraints — cause-effect unclear but hypothesis exists}
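The triangulation above can be sketched as a simple convergence check. This is a minimal illustration, assuming each signal (T1 Keogh level, T2 predictability, T3 disassembly) independently votes for a Cynefin domain; the function name and voting scheme are assumptions, not the skill's actual logic:

```python
# Sketch of three-signal Cynefin triangulation (illustrative, not the
# skill's real API). Each signal votes for a domain; the classification
# converges when at least two of the three votes agree.
from collections import Counter

def triangulate(keogh_vote: str, predictability_vote: str, disassembly_vote: str) -> str:
    """Return the domain the three signals converge on, or 'confused'."""
    votes = Counter([keogh_vote, predictability_vote, disassembly_vote])
    domain, count = votes.most_common(1)[0]
    return domain if count >= 2 else "confused"

print(triangulate("complex", "complex", "complicated"))  # two of three agree
```

If no two signals agree, the result stays "confused", which matches the domain-transition story below: the handoff only fires once the signals converge.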

Decisions

  1. Domain: Complex (enabling constraints, user has hypothesis)
  2. Route: probe → sense → openspec-plan
  3. Rationale: {hypothesis already exists — skip brainstorm, go direct to probe}
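The routing decision can be pictured as a lookup table keyed on domain and hypothesis presence. The table below is a guess assembled from the sibling handoff files (brainstorm, investigate, troubleshoot); only the complex-with-hypothesis chain is stated by this document:

```python
# Illustrative domain-to-skill-chain routing (an assumption, except for the
# "complex" chain, which this handoff states explicitly).
ROUTES = {
    "complex": ["probe", "sense", "openspec-plan"],  # hypothesis exists: skip brainstorm
    "complex-no-hypothesis": ["brainstorm", "probe", "sense", "openspec-plan"],
    "complicated": ["investigate", "openspec-plan"],
    "chaotic": ["troubleshoot"],
}

def route(domain: str, has_hypothesis: bool = True) -> list[str]:
    """Pick a skill chain; fall back to the plain domain chain if no variant exists."""
    key = domain if has_hypothesis else f"{domain}-no-hypothesis"
    return ROUTES.get(key, ROUTES[domain])
```

The point of the variant key is the rationale in decision 3: when a hypothesis already exists, the divergent brainstorm step is dropped from the front of the chain.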

Actions Taken

  • Classified via Q1: enabling constraints identified
  • Hypothesis detected in $ARGUMENTS
  • Scale determined: {boulder|pebble}

Output

🎯 Complex → Probe → /probe → /openspec-plan | OpenSpec: yes | Scale: {scale}

Domain Transition

From: Confused → To: Complex. Constraint shift: unknown → enabling. The user has a hypothesis ready to test, so probe directly without a divergent brainstorm.

For /probe

  • Hypothesis: {extracted or user-stated hypothesis}
  • Enabling constraints: {what bounds this probe — codebase scope, time, tooling}
  • Suggested confirm/refute criteria: {what would confirm or refute the hypothesis}
  • Prior exploration: None (first probe) or {summary of brainstorm output if chained}
  • Do NOT: Skip Phase 1 entry gate — qualify even if hypothesis looks complete

Accumulated Context

Token guidance: target 300 tokens inline. For depth, use references — point to thinking artifact files rather than embedding full content. Soft cap: 600 tokens inline per handoff. If you need more, move detail to a knowledge file and reference it. Accumulated cap: 800 tokens across a chain — compress to 200 at cap (keep: decisions, constraints, rejected paths). References to thinking files do NOT count toward the cap.
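The budget rules above can be sketched as a small check. The 4-characters-per-token estimate and the function names are assumptions for illustration; only the numeric caps come from the guidance itself:

```python
# Sketch of the handoff token budget (caps from the guidance above;
# the chars-per-token heuristic is an assumption).
TARGET_INLINE = 300    # aim for this per handoff
SOFT_CAP_INLINE = 600  # above this, move detail to a knowledge file
CHAIN_CAP = 800        # accumulated across a chain
COMPRESS_TO = 200      # compress to this at the cap

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return len(text) // 4

def needs_compression(accumulated: list[str]) -> bool:
    """True when the chain's inline context exceeds the accumulated cap.

    References to thinking files are excluded by the caller, since they
    do not count toward the cap."""
    return sum(estimate_tokens(t) for t in accumulated) > CHAIN_CAP
```

When the check trips, the guidance says to keep only decisions, constraints, and rejected paths in the compressed 200-token summary.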

references/

frame-problem-to-brainstorm-llm.md

frame-problem-to-experiment-llm.md

frame-problem-to-investigate-llm.md

frame-problem-to-probe-liminal-llm.md

frame-problem-to-probe-llm.md

frame-problem-to-troubleshoot-llm.md

SKILL.md

tile.json