
inquire

Infers context insufficiency before execution: when the AI detects areas where its context is insufficient, it surfaces those uncertainties through information-gain-prioritized inquiry, producing informed execution. Type: (ContextInsufficient, AI, INQUIRE, ExecutionPlan) → InformedExecution. Alias: Aitesis (αἴτησις).
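The type signature can be read as: given inferred areas of context insufficiency and an execution plan, the protocol asks the highest-value questions first, then executes informed by the answers. A minimal sketch of the information-gain ordering step (all names here are hypothetical illustrations, not the skill's actual API):

```python
from dataclasses import dataclass

@dataclass
class Uncertainty:
    topic: str            # what the AI is unsure about
    p_wrong: float        # estimated probability its default assumption is wrong
    rework_cost: float    # relative cost of redoing work if that assumption is wrong

def information_gain(u: Uncertainty) -> float:
    # Expected value of asking: how likely the answer changes the plan,
    # weighted by how expensive a wrong guess would be.
    return u.p_wrong * u.rework_cost

def prioritize(uncertainties: list[Uncertainty]) -> list[Uncertainty]:
    # Highest expected information gain first; ask these before executing.
    return sorted(uncertainties, key=information_gain, reverse=True)

queue = prioritize([
    Uncertainty("retry policy on webhook failure", p_wrong=0.5, rework_cost=3.0),
    Uncertainty("naming convention for handlers", p_wrong=0.3, rework_cost=0.5),
    Uncertainty("which message broker to target", p_wrong=0.4, rework_cost=5.0),
])
print([u.topic for u in queue])
```

The scoring function is a stand-in; the skill's actual prioritization heuristic may differ.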

Install with Tessl CLI

npx tessl i github:jongwony/epistemic-protocols --skill inquire

Evaluation results

Notification Service Integration: Pre-Implementation Review

Phase 2 question format and protocol structure

Score with context: 69% · improvement over without context: +4%

| Criteria | Without context | With context |
| --- | --- | --- |
| Priority classification | 100% | 100% |
| Sequential presentation | 100% | 100% |
| Progress indicator | 0% | 0% |
| Three-option format | 0% | 0% |
| Dismiss with stated default | 70% | 80% |
| Evidence citation | 100% | 100% |
| Information-gain ordering | 100% | 100% |
| No scanning trace in output | 100% | 100% |
| Factual uncertainties only | 37% | 62% |
| Actionable options | 37% | 50% |
| Minimum question count | 100% | 100% |
| Relational priority justification | 50% | 50% |

Without context: $0.2444 · 1m 42s · 9 turns · 14 in / 5,001 out tokens

With context: $0.4935 · 2m 40s · 18 turns · 299 in / 7,618 out tokens
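The criteria in this scenario describe the expected shape of each surfaced question: classified by priority, presented one at a time with a progress indicator, offering three actionable options plus a dismiss that states the default, and citing the evidence behind the uncertainty. A hedged sketch of one such question (field names are illustrative, not the skill's actual schema):

```python
from dataclasses import dataclass

@dataclass
class InquiryQuestion:
    priority: str          # priority classification, e.g. "HIGH"
    index: int             # position in the sequential presentation
    total: int             # for the "question i of n" progress indicator
    text: str
    options: list[str]     # exactly three actionable options
    default: str           # stated default applied if the user dismisses
    evidence: str          # citation for why this uncertainty exists

    def render(self) -> str:
        opts = "\n".join(f"  {i}. {o}" for i, o in enumerate(self.options, 1))
        return (
            f"[{self.priority}] Question {self.index}/{self.total}: {self.text}\n"
            f"{opts}\n"
            f"  (dismiss -> default: {self.default})\n"
            f"  evidence: {self.evidence}"
        )

q = InquiryQuestion(
    priority="HIGH", index=1, total=3,
    text="Which channel should failed notifications fall back to?",
    options=["email", "SMS", "none (drop silently)"],
    default="email",
    evidence="config/notify.yaml defines no fallback key",
)
print(q.render())
```

The example task and file path are invented for illustration.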

Developer Guide: When to Run Pre-Execution Inquiry

Protocol activation decisions and skip conditions

Score with context: 100% · improvement over without context: +8%

| Criteria | Without context | With context |
| --- | --- | --- |
| Task A skip: read-only | 100% | 100% |
| Task C skip: explicit proceed | 100% | 100% |
| Task E: context-collectible | 100% | 100% |
| Task B triggers inquiry | 100% | 100% |
| Task D triggers inquiry | 100% | 100% |
| Inference-based reasoning | 80% | 100% |
| Gate predicate applied | 100% | 100% |
| First question format | 60% | 100% |
| Session immunity explained | 75% | 100% |
| Specificity of skip reasoning | 100% | 100% |

Without context: $0.2449 · 1m 38s · 11 turns · 16 in / 4,541 out tokens

With context: $0.4815 · 2m 17s · 21 turns · 271 in / 6,085 out tokens
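This scenario tests a gate predicate: inquiry is skipped for read-only tasks, for tasks the user has explicitly said to proceed with, and for uncertainties the agent can resolve by collecting context itself; it triggers only when real uncertainty remains. A minimal sketch of such a gate (the conditions mirror the criteria above; the function itself is hypothetical):

```python
def should_inquire(task: dict) -> bool:
    """Gate predicate: return True only when pre-execution inquiry is warranted."""
    if task["read_only"]:               # skip: no irreversible effects
        return False
    if task["explicit_proceed"]:        # skip: user already said to proceed
        return False
    if task["context_collectible"]:     # skip: agent can resolve it by reading code/docs
        return False
    return bool(task["uncertainties"])  # inquire only if uncertainty remains

# Toy tasks keyed like the scenario's Task A..E (the labels match the
# criteria above; the flag values are illustrative).
tasks = {
    "A": {"read_only": True,  "explicit_proceed": False, "context_collectible": False, "uncertainties": ["x"]},
    "B": {"read_only": False, "explicit_proceed": False, "context_collectible": False, "uncertainties": ["x"]},
    "C": {"read_only": False, "explicit_proceed": True,  "context_collectible": False, "uncertainties": ["x"]},
    "D": {"read_only": False, "explicit_proceed": False, "context_collectible": False, "uncertainties": ["y"]},
    "E": {"read_only": False, "explicit_proceed": False, "context_collectible": True,  "uncertainties": ["x"]},
}
print({k: should_inquire(t) for k, t in tasks.items()})
```

Only Tasks B and D pass the gate, matching the "triggers inquiry" rows above.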

Order Service Migration: Decision Log

Multi-round uncertainty accumulation and session convergence

Score with context: 95% · improvement over without context: +17%

| Criteria | Without context | With context |
| --- | --- | --- |
| Multiple rounds | 100% | 100% |
| Cumulative accumulation | 50% | 83% |
| Context-resolved shown separately | 100% | 100% |
| Phase 1 read-only methods | 100% | 100% |
| Re-scan after integration | 100% | 100% |
| Convergence reached | 100% | 100% |
| Evidence cited in questions | 100% | 100% |
| Progress indicator updated | 20% | 70% |
| Plan evolution shown | 100% | 100% |
| Dismiss option present | 0% | 100% |
| One question per round | 100% | 100% |

Without context: $0.2227 · 1m 48s · 8 turns · 13 in / 5,362 out tokens

With context: $0.6902 · 4m 33s · 18 turns · 426 in / 12,762 out tokens
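The criteria in this scenario describe a session loop: uncertainties accumulate cumulatively across rounds, exactly one question is asked per round, the task is re-scanned after each answer is integrated, and the session converges when nothing unresolved remains. A hedged toy sketch of that loop (the scan/answer structures are stand-ins for the real protocol steps):

```python
def run_session(initial: list[str], answers: dict[str, list[str]]) -> tuple[int, list[str]]:
    """Toy convergence loop: one question per round until no uncertainties remain.

    `initial` holds the first scan's uncertainties; `answers` maps a question to
    the new uncertainties its answer reveals on re-scan (often none).
    """
    open_items = list(initial)   # cumulative list of unresolved uncertainties
    resolved = []                # tracked separately once answered
    rounds = 0
    while open_items:            # convergence: loop ends when nothing is open
        rounds += 1
        question = open_items.pop(0)                  # one question per round
        resolved.append(question)
        open_items.extend(answers.get(question, []))  # re-scan after integration
    return rounds, resolved

rounds, resolved = run_session(
    initial=["target schema version"],
    answers={"target schema version": ["dual-write window length"]},
)
print(rounds, resolved)
```

Here the answer to the first question surfaces one new uncertainty, so the session takes two rounds before converging.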

Evaluated agent: Claude Code
