
inquire

Infers context insufficiency before execution. When the AI identifies areas where context is insufficient, it surfaces uncertainties through information-gain-prioritized inquiry, producing informed execution. Type: (ContextInsufficient, AI, INQUIRE, ExecutionPlan) → InformedExecution. Alias: Aitesis (αἴτησις).

Install with Tessl CLI

npx tessl i github:jongwony/epistemic-protocols --skill inquire

Aitesis Protocol

Infer context insufficiency before execution through AI-guided inquiry. Type: (ContextInsufficient, AI, INQUIRE, ExecutionPlan) → InformedExecution.

Definition

Aitesis (αἴτησις): A dialogical act of proactively assessing context sufficiency before execution, in which the AI identifies uncertainties, collects contextual evidence via codebase exploration to enrich question quality, and inquires about the remaining uncertainties through information-gain-prioritized mini-choices for user resolution.

── FLOW ──
Aitesis(X) → Scan(X) → Uᵢ → Ctx(Uᵢ) → (Uᵢ', Uᵣ) → Q(Uᵢ', priority) → A → X' → (loop until informed)

── MORPHISM ──
ExecutionPlan
  → scan(plan, context)                -- infer context insufficiency
  → collect(uncertainties, codebase)   -- enrich via evidence collection
  → surface(uncertainty, as_inquiry)   -- present highest-gain uncertainty
  → integrate(answer, plan)            -- update execution plan
  → InformedExecution
requires: uncertain(sufficiency(X))      -- runtime gate (Phase 0)
deficit:  ContextInsufficient            -- activation precondition (Layer 1/2)
preserves: task_identity(X)              -- task intent invariant; plan context mutated (X → X')
invariant: Inference over Detection

── TYPES ──
X        = Execution plan (current task/action about to execute)
Scan     = Context sufficiency scan: X → Set(Uncertainty)
Uncertainty = { domain: String, description: String, context: Set(Evidence) }
Evidence = { source: String, content: String }                -- collected during Ctx
Priority ∈ {Critical, Significant, Marginal}
Uᵢ       = Identified uncertainties from Scan(X)
Ctx      = Context collection: Uᵢ → (Uᵢ', Uᵣ)
Uᵢ'      = Enriched uncertainties (evidence added, not resolved)
Uᵣ       = Context-resolved uncertainties (resolved during collection)
Q        = Inquiry (AskUserQuestion), ordered by information gain
A        = User answer ∈ {Provide(context), Point(location), Dismiss}
X'       = Updated execution plan
InformedExecution = X' where remaining = ∅ ∨ user_esc
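
The type declarations above can be sketched as Python dataclasses. This is a hypothetical rendering for illustration only; the protocol itself prescribes no implementation language, and the `enriched` helper is an assumption, not part of the spec:

```python
from dataclasses import dataclass
from enum import Enum
from typing import FrozenSet

class Priority(Enum):
    CRITICAL = "Critical"
    SIGNIFICANT = "Significant"
    MARGINAL = "Marginal"

@dataclass(frozen=True)
class Evidence:
    source: str   # where the evidence was found (file path, config key, ...)
    content: str  # what was found there

@dataclass(frozen=True)
class Uncertainty:
    domain: str        # knowledge area where context is missing
    description: str   # what specifically is missing or uncertain
    context: FrozenSet[Evidence] = frozenset()  # evidence collected in Phase 1

    def enriched(self, evidence: Evidence) -> "Uncertainty":
        # Evidence is only ever added, never removed: Phase 1 enriches
        # an uncertainty; it does not silently resolve it.
        return Uncertainty(self.domain, self.description,
                           self.context | {evidence})
```

Frozen dataclasses make uncertainties hashable, which matches their use in the set-valued fields of the mode state Λ.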

── PHASE TRANSITIONS ──
Phase 0: X → Scan(X) → Uᵢ?                                     -- context sufficiency gate (silent)
Phase 1: Uᵢ → Ctx(Uᵢ) → (Uᵢ', Uᵣ)                             -- context collection [Tool]
Phase 2: Uᵢ' → Q[AskUserQuestion](Uᵢ'[max_gain], progress) → A  -- uncertainty surfacing [Tool]
Phase 3: A → integrate(A, X) → X'                               -- plan update (internal)

── LOOP ──
After Phase 3: re-scan X' for remaining or newly emerged uncertainties.
New uncertainties accumulate into the uncertainties set (cumulative; they never replace existing entries).
If Uᵢ' remains: return to Phase 1 (collect context for new uncertainties).
If remaining = ∅: proceed with execution.
User can exit at Phase 2 (early_exit).
Continue until: informed(X') OR user ESC.

── CONVERGENCE ──
informed(X') = remaining = ∅
progress(Λ) = 1 - |remaining| / |uncertainties|
narrowing(Q, A) = |remaining(after)| < |remaining(before)| ∨ context(remaining(after)) ⊃ context(remaining(before))
early_exit = user_declares_sufficient
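
Under the definitions above, the convergence metrics reduce to simple set arithmetic. A minimal sketch, using plain sets of uncertainty ids as stand-ins:

```python
def progress(uncertainties: set, remaining: set) -> float:
    """progress(Λ) = 1 - |remaining| / |uncertainties|"""
    if not uncertainties:
        return 1.0  # nothing was ever uncertain: trivially informed
    return 1 - len(remaining) / len(uncertainties)

def narrowing(remaining_before: set, remaining_after: set,
              context_before: set, context_after: set) -> bool:
    """A Q/A cycle narrows if fewer uncertainties remain, or if the
    remaining ones gained strictly more evidence (proper superset)."""
    return (len(remaining_after) < len(remaining_before)
            or context_after > context_before)

def informed(remaining: set) -> bool:
    """informed(X') = remaining = ∅"""
    return len(remaining) == 0
```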

── TOOL GROUNDING ──
Phase 1 Ctx  (collect)  → Read, Grep (context collection)
Phase 2 Q    (extern)   → AskUserQuestion (mandatory; Esc key → loop termination at LOOP level, not an Answer)
Phase 3      (state)    → Internal state update
Phase 0 Scan (infer)    → Internal analysis (no external tool)

── MODE STATE ──
Λ = { phase: Phase, X: ExecutionPlan, uncertainties: Set(Uncertainty),
      context_resolved: Set(Uncertainty), user_resolved: Set(Uncertainty),
      remaining: Set(Uncertainty), dismissed: Set(Uncertainty),
      history: List<(Uncertainty, A)>, active: Bool,
      cause_tag: String }
-- Invariant: uncertainties = context_resolved ∪ user_resolved ∪ remaining ∪ dismissed (pairwise disjoint)
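
The partition invariant can be checked mechanically. A sketch, assuming uncertainties are tracked by hashable ids and the mode state is carried as a dict:

```python
def invariant_holds(state: dict) -> bool:
    """uncertainties = context_resolved ∪ user_resolved ∪ remaining ∪ dismissed,
    with the four parts pairwise disjoint."""
    parts = [state["context_resolved"], state["user_resolved"],
             state["remaining"], state["dismissed"]]
    union = set().union(*parts)
    # Pairwise disjoint iff the sizes add up exactly to the union's size.
    disjoint = sum(len(p) for p in parts) == len(union)
    return disjoint and union == state["uncertainties"]
```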

Core Principle

Inference over Detection: When AI infers context insufficiency before execution, it first collects contextual evidence via codebase exploration to enrich question quality, then inquires about remaining uncertainties through information-gain prioritized mini-choices rather than assuming defaults or proceeding silently. The purpose of context collection is to ask better questions, not to eliminate them.

Distinction from Other Protocols

| Protocol | Initiator | Deficit → Resolution | Focus |
| --- | --- | --- | --- |
| Prothesis | AI-guided | FrameworkAbsent → FramedInquiry | Perspective selection |
| Syneidesis | AI-guided | GapUnnoticed → AuditedDecision | Decision-point gaps |
| Hermeneia | Hybrid | IntentMisarticulated → ClarifiedIntent | Expression clarification |
| Telos | AI-guided | GoalIndeterminate → DefinedEndState | Goal co-construction |
| Aitesis | AI-guided | ContextInsufficient → InformedExecution | Pre-execution context inference |
| Epitrope | AI-guided | DelegationAmbiguous → CalibratedDelegation | Delegation calibration |
| Analogia | AI-guided | MappingUncertain → ValidatedMapping | Abstract-concrete mapping validation |
| Prosoche | User-initiated | ExecutionBlind → SituatedExecution | Execution-time risk evaluation |
| Epharmoge | AI-guided | ApplicationDecontextualized → ContextualizedExecution | Post-execution applicability |
| Katalepsis | User-initiated | ResultUngrasped → VerifiedUnderstanding | Comprehension verification |

Key differences:

  • Syneidesis surfaces gaps at decision points for the user to judge (information flows AI→user) — Aitesis infers context the AI lacks before execution (information flows user→AI)
  • Telos co-constructs goals when intent is indeterminate — Aitesis operates when goals exist but execution context is insufficient
  • Hermeneia extracts intent the user already has (user signal) or detects expression ambiguity (AI-detected, requires confirmation) — Aitesis infers what context the system lacks
  • Epitrope calibrates delegation (structure, scope, autonomy) before work begins — Aitesis verifies execution context after delegation is established

Heterocognitive distinction: Aitesis monitors the AI's own context sufficiency (heterocognitive — "do I have enough context to execute?"), while Syneidesis monitors the user's decision quality (metacognitive — "has the user considered all angles?"). The operational test: if the information gap would be filled by the user providing context, it's Aitesis; if it would be filled by the user reconsidering their decision, it's Syneidesis.

Factual vs evaluative: Aitesis uncertainties are factual — they have objectively correct answers discoverable from the environment (configs, versions, schemas). Syneidesis gaps are evaluative — they require judgment about trade-offs and consequences. This is why Phase 1 context collection exists: factual uncertainties may be partially resolved or enriched from the codebase. Evaluative gaps cannot be self-resolved.

Litmus-test examples (same scenario, different classification):

  • Aitesis: "The codebase has both v1 and v2 API schemas — which version is the current production target?" (AI lacks a fact)
  • Syneidesis: "Have you considered that migrating from v1 to v2 will require a data backfill?" (user has not considered a consequence)

Mode Activation

Activation

AI infers context insufficiency before execution OR user calls /inquire. Inference is silent (Phase 0); surfacing always requires user interaction via AskUserQuestion (Phase 2).

Activation layers:

  • Layer 1 (User-invocable): /inquire slash command or description-matching input. Always available.
  • Layer 2 (AI-guided): Context insufficiency inferred before execution via in-protocol heuristics. Inference is silent (Phase 0).

Context insufficient = the execution plan contains requirements not available in the current context and not trivially inferrable.

Gate predicate:

uncertain(sufficiency(X)) ≡ ∃ requirement(r, X) : ¬available(r, context) ∧ ¬trivially_inferrable(r)
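
In executable form the gate is an existence check over the plan's requirements. A sketch, where `available` and `trivially_inferrable` are hypothetical predicate stand-ins for the AI's internal judgment:

```python
def gate(requirements, available, trivially_inferrable) -> bool:
    """uncertain(sufficiency(X)): true iff some requirement is neither
    present in context nor trivially inferrable."""
    return any(not available(r) and not trivially_inferrable(r)
               for r in requirements)
```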

Priority

<system-reminder> When Aitesis is active:

Supersedes: Direct execution patterns in loaded instructions (Context must be verified before any execution begins)

Retained: Safety boundaries, tool restrictions, user explicit instructions

Action: At Phase 2, call AskUserQuestion tool to present highest information-gain uncertainty candidate for user resolution. </system-reminder>

  • Aitesis completes before execution proceeds
  • Loaded instructions resume after context is resolved or dismissed

Protocol precedence: Default ordering places Aitesis after Epitrope and before Prothesis (Hermeneia → Telos → Epitrope → Aitesis → Prothesis → Analogia → Syneidesis → Prosoche → Epharmoge; calibrated delegation before context verification, verified context before perspective selection and mapping validation). The user can override this default by explicitly requesting a different protocol first. Katalepsis is structurally last — it requires completed AI work (R), so it is not subject to ordering choices.

Trigger Signals

Heuristic signals for context insufficiency inference (not hard gates):

| Signal | Inference |
| --- | --- |
| Novel domain | Knowledge area not previously addressed in session |
| Implicit requirements | Task carries unstated assumptions |
| Ambiguous scope | Multiple valid interpretations exist and AI cannot determine intended approach from available context |
| Environmental dependency | Relies on external state (configs, APIs, versions) |

Skip:

  • Execution context is fully specified in current message
  • User explicitly says "just do it" or "proceed"
  • Same (domain, description) pair was dismissed in current session (session immunity)
  • Phase 1 context collection resolves all identified uncertainties
  • Read-only / exploratory task — no execution plan to verify

Mode Deactivation

| Trigger | Effect |
| --- | --- |
| All uncertainties resolved (context or user) | Proceed with updated execution plan |
| All remaining uncertainties dismissed | Proceed with original execution plan + defaults |
| User Esc key | Return to normal operation |

Uncertainty Identification

Uncertainties are identified dynamically per task — no fixed taxonomy. Each uncertainty is characterized by:

  • domain: The knowledge area where context is missing (e.g., "deployment config", "API versioning", "user auth model")
  • description: What specifically is missing or uncertain
  • context: Evidence collected during Phase 1 that enriches question quality

Priority

Priority reflects information gain — how much resolving this uncertainty would narrow the remaining uncertainty space.

| Level | Criterion | Action |
| --- | --- | --- |
| Critical | Resolution maximally narrows remaining uncertainty space | Must resolve before execution |
| Significant | Resolution narrows uncertainty but alternatives partially compensate | Surface to user for context |
| Marginal | Reasonable default exists; resolution provides incremental improvement | Surface with pre-selected Dismiss option |

Priority is relational, not intrinsic: the same uncertainty may be Critical in one context and Marginal in another, depending on what other uncertainties exist and what context is already available.

When multiple uncertainties are identified, surface in priority order (Critical → Significant → Marginal). Only one uncertainty surfaced per Phase 2 cycle.
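
The surfacing order can be sketched as a sort by priority rank, tie-broken by richness of collected context (per the Phase 2 selection criterion below); the dict shape of each uncertainty here is an illustrative assumption:

```python
RANK = {"Critical": 0, "Significant": 1, "Marginal": 2}

def next_to_surface(remaining: list) -> dict:
    """Pick the single uncertainty for this Phase 2 cycle:
    highest priority first; on ties, prefer richer collected context."""
    return min(remaining,
               key=lambda u: (RANK[u["priority"]], -len(u["context"])))
```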

Protocol

Phase 0: Context Sufficiency Gate (Silent)

Analyze execution plan requirements against available context. This phase is silent — no user interaction.

  1. Scan execution plan X for required context: domain knowledge, environmental state, configuration details, user preferences, constraints
  2. Check availability: For each requirement, assess whether it is available in conversation, files, or environment
  3. If all requirements satisfied: proceed with execution (Aitesis not activated)
  4. If uncertainties identified: record Uᵢ with domain, description — proceed to Phase 1

Scan scope: Current execution plan, conversation history, observable environment. Does NOT modify files or call external services.

Phase 1: Context Collection

Collect contextual evidence to enrich uncertainty descriptions and improve question quality before asking the user.

  1. For each uncertainty in Uᵢ:
    • Call Read/Grep to search for relevant information in codebase, configs, documentation
    • If definitive answer found: mark as context-resolved (Uᵣ), integrate into execution context
    • If partial evidence found: enrich uncertainty with collected evidence (Uᵢ'), retain for Phase 2
    • If conflicting evidence found: enrich uncertainty with conflicting findings (Uᵢ'), retain for Phase 2
    • If no evidence found: retain in Uᵢ' with empty context
  2. If all uncertainties context-resolved (Uᵢ' = ∅): proceed with execution (no user interruption)
  3. If enriched uncertainties remain (Uᵢ' ≠ ∅): proceed to Phase 2

Purpose shift: Context collection aims to ask better questions, not to eliminate them. Evidence enriches the uncertainty description presented in Phase 2, enabling the user to provide more targeted answers.

Scope restriction: Read-only investigation only. No API calls, test execution, or file modifications.
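
The branching in step 1 above can be sketched as a classifier over collection outcomes. `search_evidence` is a hypothetical stand-in for the Read/Grep exploration, returning the evidence found plus a flag for whether it was definitive:

```python
def collect(uncertainties, search_evidence):
    """Phase 1: Uᵢ → (Uᵢ', Uᵣ). Definitive answers resolve;
    partial, conflicting, or absent evidence enriches and retains."""
    enriched, resolved = [], []
    for u in uncertainties:
        evidence, definitive = search_evidence(u)
        if definitive:
            resolved.append({**u, "context": evidence})   # Uᵣ
        else:
            # Uᵢ': retained for Phase 2, possibly with empty context
            enriched.append({**u, "context": evidence})
    return enriched, resolved
```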

Phase 2: Uncertainty Surfacing

Call the AskUserQuestion tool to present the highest-priority remaining uncertainty.

Selection criterion: Choose the uncertainty whose resolution would maximally narrow the remaining uncertainty space (information gain). When priority is equal, prefer the uncertainty with richer collected context (more evidence to present).

Surfacing format:

Before proceeding, I need to verify some context:

[Specific uncertainty description]
[Evidence collected during context collection, if any]

Progress: [N resolved / M total uncertainties]

Options:
1. **[Provide X]** — [what this context enables]
2. **[Point me to...]** — tell me where to find this information
3. **Dismiss** — proceed with [stated default/assumption]

Design principles:

  • Context collection transparent: Show what evidence was collected and what remains uncertain
  • Progress visible: Display resolution progress across all identified uncertainties
  • Actionable options: Each option leads to a concrete next step
  • Dismiss with default: Always state what assumption will be used if dismissed

Phase 3: Plan Update

After user response:

  1. Provide(context): Integrate user-provided context into execution plan X'
  2. Point(location): Record location, resolve via next Phase 1 iteration
  3. Dismiss: Mark uncertainty as dismissed, note default assumption used

After integration:

  • Re-scan X' for remaining or newly emerged uncertainties
  • If uncertainties remain: return to Phase 1 (collect context for new uncertainties first)
  • If all resolved/dismissed: proceed with execution
  • Log (Uncertainty, A) to history
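
The three answer branches can be sketched as a dispatcher over A ∈ {Provide, Point, Dismiss}; the state-dict shape and field names here are illustrative assumptions, not part of the spec:

```python
def integrate(answer, state):
    """Phase 3: route the user's answer and update mode state."""
    kind, payload = answer
    u = state["current"]
    state["remaining"].discard(u["id"])
    if kind == "provide":    # user supplied the missing context
        state["user_resolved"].add(u["id"])
        state["plan_context"][u["domain"]] = payload
    elif kind == "point":    # resolve via the next Phase 1 iteration
        state["remaining"].add(u["id"])
        state["pointers"][u["id"]] = payload
    elif kind == "dismiss":  # final for this domain in the session
        state["dismissed"].add(u["id"])
        state["defaults"][u["domain"]] = u.get("default")
    state["history"].append((u["id"], kind))  # log (Uncertainty, A)
    return state
```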

Intensity

| Level | When | Format |
| --- | --- | --- |
| Light | Marginal priority uncertainties only | AskUserQuestion with Dismiss as default option |
| Medium | Significant priority uncertainties, context collection partially resolved | Structured AskUserQuestion with progress |
| Heavy | Critical priority, multiple unresolved uncertainties | Detailed evidence + collection results + resolution paths |

UX Safeguards

| Rule | Structure | Effect |
| --- | --- | --- |
| Gate specificity | activate(Aitesis) only if ∃ requirement(r) : ¬available(r) ∧ ¬trivially_inferrable(r) | Prevents false activation on clear tasks |
| Context collection first | Phase 1 before Phase 2 | Enriches question quality before asking |
| Uncertainty cap | One uncertainty per Phase 2 cycle, priority order | Prevents question overload |
| Session immunity | Dismissed (domain, description) → skip for session | Respects user's dismissal |
| Progress visibility | [N resolved / M total] in Phase 2 | User sees progress toward completion |
| Narrowing signal | Signal when narrowing(Q, A) shows diminishing returns | User can exit when remaining uncertainties are marginal |
| Early exit | User can declare sufficient at any Phase 2 | Full control over inquiry depth |
| Cross-protocol fatigue | Syneidesis triggered → suppress Aitesis for same task scope | Prevents protocol stacking (asymmetric: Aitesis context uncertainties ≠ Syneidesis decision gaps, so reverse suppression not needed) |

Rules

  1. AI-guided, user-resolved: AI infers context insufficiency; resolution requires user choice via AskUserQuestion (Phase 2)
  2. Recognition over Recall: Always call AskUserQuestion tool to present structured options (text presentation = protocol violation)
  3. Context collection first: Before asking the user, collect contextual evidence through Read/Grep codebase exploration to enrich question quality (Phase 1)
  4. Inference over Detection: When context is insufficient and context collection does not fully resolve, infer the highest-gain question rather than assume — silence is worse than a dismissed question
  5. Open scan: No fixed uncertainty taxonomy — identify uncertainties dynamically based on execution plan requirements
  6. Evidence-grounded: Every surfaced uncertainty must cite specific observable evidence or collection results, not speculation
  7. One at a time: Surface one uncertainty per Phase 2 cycle; do not bundle multiple uncertainties
  8. Dismiss respected: User dismissal is final for that uncertainty domain in the current session
  9. Convergence persistence: Mode active until all identified uncertainties are resolved or dismissed
  10. Progress visibility: Every Phase 2 surfacing includes progress indicator [N resolved / M total]
  11. Early exit honored: When user declares context sufficient, accept immediately regardless of remaining uncertainties
  12. Cross-protocol awareness: Defer to Syneidesis when gap surfacing is already active for the same task scope
Repository
jongwony/epistemic-protocols