Infer context insufficiency before execution. Surfaces uncertainties through information-gain prioritized inquiry when AI infers areas of context insufficiency, producing informed execution. Type: (ContextInsufficient, AI, INQUIRE, ExecutionPlan) → InformedExecution. Alias: Aitesis(αἴτησις).
Install with Tessl CLI
npx tessl i github:jongwony/epistemic-protocols --skill inquire34
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery
7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is written in an overly academic, abstract style that fails to communicate practical utility. It lacks concrete actions, natural trigger terms, and explicit usage guidance. The Greek alias and type signature notation suggest internal documentation rather than a user-facing skill description.
Suggestions

- Rewrite using plain language describing concrete actions, e.g., 'Asks clarifying questions before executing complex tasks to ensure requirements are understood'
- Add a 'Use when...' clause with natural triggers like 'when the user request is ambiguous', 'when requirements are unclear', or 'before starting complex multi-step tasks'
- Remove technical notation (type signatures, Greek aliases) and replace it with examples of when this skill should activate
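Applying these suggestions, a rewritten description might look like the sketch below. This is a hypothetical example, not the skill's actual manifest; the frontmatter field names follow common skill-manifest conventions and are assumptions:

```
name: inquire
description: >
  Asks clarifying questions before executing complex tasks to ensure
  requirements are understood. Use when the user request is ambiguous,
  when requirements are unclear, or before starting a complex
  multi-step task. Surfaces the highest-impact unknowns first and
  folds the answers into the execution plan.
```

Note how the rewrite leads with a concrete action, adds an explicit 'Use when' clause, and drops the type signature and Greek alias entirely.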
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses abstract, academic language like 'context insufficiency', 'information-gain prioritized inquiry', and 'InformedExecution' without describing concrete actions a user would understand. No specific tasks or operations are listed. | 1 / 3 |
| Completeness | While it vaguely describes 'what' (surfacing uncertainties through inquiry), there is no explicit 'when' clause or trigger guidance. The type signature format is not user-facing guidance for skill selection. | 1 / 3 |
| Trigger Term Quality | Contains no natural keywords users would say. Terms like 'ContextInsufficient', 'Aitesis(αἴτησις)', and 'ExecutionPlan' are technical jargon that no user would naturally use when requesting help. | 1 / 3 |
| Distinctiveness / Conflict Risk | The concept of 'asking clarifying questions when context is insufficient' is somewhat distinct, but the abstract framing makes it unclear how this differs from general conversational behavior or other planning/reasoning skills. | 2 / 3 |
| Total | | 5 / 12 — Passed |
Implementation
27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill suffers from severe over-engineering, presenting what could be a simple 'ask clarifying questions before executing' pattern as an elaborate formal system with type theory notation. The content prioritizes theoretical completeness over practical usability, explaining concepts Claude already understands (like information gain prioritization) through unnecessary formalism. The core idea is sound but buried under excessive abstraction.
Suggestions

- Reduce to ~50 lines: remove formal notation (morphisms, type definitions) and keep only the 4-phase workflow with one concrete example per phase
- Add 2-3 concrete examples showing: (1) an actual uncertainty identified, (2) the AskUserQuestion tool call with real options, (3) how the answer integrates into the execution plan
- Move the protocol comparison table and formal type definitions to a separate REFERENCE.md file, keeping only the key distinction (factual vs evaluative) inline
- Replace the abstract 'Surfacing format' template with a complete, realistic example showing actual uncertainty text, collected evidence, and user options
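A filled-in 'Surfacing format' might look like the following. The scenario, evidence, and option text are invented for illustration; they only loosely follow the skill's own template:

```
Uncertainty: The request says "migrate the database" but does not name
a target engine.
Evidence: package.json lists both `pg` and `mysql2`; no migration
config was found in the repository.
Question: Which engine should the migration target?
  1. PostgreSQL — `pg` is in dependencies
  2. MySQL — `mysql2` is in dependencies
  3. Other — please specify
```

One concrete instance like this communicates the workflow faster than the formal template it would replace.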
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose, with extensive formal notation, type theory, and mathematical definitions that Claude doesn't need explained. The 400+ lines could be reduced to ~50 lines of actionable guidance. Concepts like 'morphism', 'heterocognitive distinction', and extensive protocol comparison tables add cognitive overhead without proportional value. | 1 / 3 |
| Actionability | The protocol phases and rules provide some concrete guidance, but the content is heavily abstract, with formal notation rather than executable examples. The 'Surfacing format' template is helpful, but there are no concrete examples of actual uncertainties, user interactions, or tool calls in realistic scenarios. | 2 / 3 |
| Workflow Clarity | The four phases (0-3) are clearly sequenced with transitions defined, and the loop structure is explicit. However, the formal notation obscures rather than clarifies the workflow. Concrete validation checkpoints are missing: the 'narrowing signal' is mentioned but not operationalized with specific criteria. | 2 / 3 |
| Progressive Disclosure | A monolithic wall of text with no references to external files. The entire protocol specification, comparison tables, UX safeguards, and rules are all inline. Content that could be split out (protocol comparisons, formal type definitions, intensity levels) remains in a single overwhelming document. | 1 / 3 |
| Total | | 6 / 12 — Passed |
Validation
100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.