Detect application-context mismatch after execution. Verifies applicability when a correct output may not fit the actual context, producing a contextualized execution. Type: (ApplicationDecontextualized, AI, CONTEXTUALIZE, ExecutionResult) → ContextualizedExecution. Alias: Epharmoge (ἐφαρμογή).
Install with Tessl CLI
npx tessl i github:jongwony/epistemic-protocols --skill contextualize
Detect application-context mismatch after execution through AI-guided applicability verification, where correct results that may not fit the actual context are surfaced for user judgment. Type: (ApplicationDecontextualized, AI, CONTEXTUALIZE, ExecutionResult) → ContextualizedExecution.
Epharmoge (ἐφαρμογή): A dialogical act of verifying that AI-produced results fit the actual application context — from Aristotle's notion of practical application — resolving the gap between technical correctness and contextual appropriateness through structured mismatch surfacing and user-directed adaptation.
── FLOW ──
Epharmoge(R, X) → Eval(R, X) → Mᵢ? → Q → A → R' → (loop until contextualized)
── MORPHISM ──
(R, X)
→ evaluate(result, context) -- detect applicability mismatch
→ surface(mismatch, as_inquiry) -- present mismatch with evidence
→ adapt(result, direction) -- adapt result to context
→ ContextualizedExecution
requires: mismatch_detected(R, X) -- runtime gate (Phase 0)
deficit: ApplicationDecontextualized -- activation precondition (Layer 1/2)
preserves: X -- application context is fixed reference; morphism transforms R only
invariant: Applicability over Correctness
── TYPES ──
R = Execution result (AI's completed work output)
X = Application context (environment, constraints, user situation)
Eval = Applicability evaluation: (R, X) → Set(Mismatch)
Mismatch = { aspect: String, description: String, evidence: String, severity: Severity }
Severity ∈ {Critical, Significant, Minor}
Mᵢ = Identified mismatches from Eval(R, X)
Q = Applicability inquiry (AskUserQuestion)
A = User answer ∈ {Confirm(mismatch), Adapt(direction), Dismiss}
R' = Adapted result (contextualized output)
ContextualizedExecution = R' where applicable(R', X) ∨ user_esc
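The type vocabulary above can be sketched in Python. This is a minimal illustration, not part of the spec: the tagged-tuple encoding of answers and all field values are assumptions made for the example.

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    CRITICAL = "Critical"
    SIGNIFICANT = "Significant"
    MINOR = "Minor"

@dataclass(frozen=True)
class Mismatch:
    aspect: str
    description: str
    evidence: str
    severity: Severity

# A ∈ {Confirm(mismatch), Adapt(direction), Dismiss}, encoded here as tagged tuples
def Confirm(mismatch): return ("confirm", mismatch)
def Adapt(direction): return ("adapt", direction)
Dismiss = ("dismiss", None)

# Example mismatch (contents are illustrative)
m = Mismatch(
    aspect="environment",
    description="result assumes Docker is available",
    evidence="docker-compose.yml generated, but no Docker context observed",
    severity=Severity.SIGNIFICANT,
)
```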
── PHASE TRANSITIONS ──
Phase 0: R → Eval(R, X) → Mᵢ? -- applicability gate (silent)
Phase 1: Mᵢ → Q[AskUserQuestion](Mᵢ[0], evidence) → A -- mismatch surfacing [Tool]
Phase 2: A → adapt(A, R) → R' -- result adaptation [Tool]
── LOOP ──
After Phase 2: re-evaluate R' against X for remaining mismatches.
If Mᵢ remains: return to Phase 1.
If applicable(R', X): execution complete.
User can exit at Phase 1 (early_exit).
Continue until: contextualized(R') OR user ESC.
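A minimal Python sketch of this loop, with `evaluate`, `surface`, and `adapt` as stand-ins for the Phase 0 internal analysis, the AskUserQuestion call, and the Edit/Write adaptation respectively (all names hypothetical):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Mismatch:
    aspect: str
    severity: int  # higher = more severe

def run_epharmoge(result, context, evaluate, surface, adapt):
    """Drive Phase 0 -> 1 -> 2 until applicable(R', X) or user exit.

    evaluate(R, X): Phase 0, returns detected mismatches (silent).
    surface(m):     Phase 1, returns ("adapt", direction) or
                    ("dismiss", None), or None for the user's Esc.
    adapt(R, d):    Phase 2, returns the adapted result R'.
    """
    dismissed = set()
    while True:
        remaining = [m for m in evaluate(result, context)
                     if m.aspect not in dismissed]
        if not remaining:                              # applicable(R', X): converged
            return result
        top = max(remaining, key=lambda m: m.severity) # one mismatch per cycle
        answer = surface(top)
        if answer is None:                             # user Esc: accept as-is
            return result
        kind, payload = answer
        if kind == "adapt":
            result = adapt(result, payload)            # Phase 2, then re-evaluate
        elif kind == "dismiss":
            dismissed.add(top.aspect)                  # session immunity
```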
── CONVERGENCE ──
applicable(R', X) = ∀ aspect(a, R', X) : warranted(a, R', X)
warranted(a, R, X) = correct(R) ∧ fits(a, R, X) -- per-aspect: correctness AND contextual fit required (not material conditional)
contextualized(R') = applicable(R', X) ∨ user_esc
progress(Λ) = 1 - |remaining| / |mismatches|
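The convergence predicates translate directly to code. In this sketch, `correct` and `fits` are placeholder callables supplied by the caller, and fit is treated per aspect, following the universally quantified definition above:

```python
def warranted(aspect, result, context, correct, fits):
    # correctness AND contextual fit (conjunction, not material conditional)
    return correct(result) and fits(aspect, result, context)

def applicable(result, context, aspects, correct, fits):
    # ∀ aspect a : warranted(a, R', X)
    return all(warranted(a, result, context, correct, fits) for a in aspects)

def progress(n_remaining, n_mismatches):
    # progress(Λ) = 1 - |remaining| / |mismatches|
    return 1.0 if n_mismatches == 0 else 1 - n_remaining / n_mismatches
```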
── TOOL GROUNDING ──
Phase 0 Eval (detect) → Internal analysis (no external tool)
Phase 1 Q (extern) → AskUserQuestion (mandatory; Esc key → loop termination at LOOP level, not an Answer)
Phase 2 adapt (modify) → Edit, Write (result adaptation based on user direction)
-- (modify): tool call that changes existing artifacts (distinct from (extern) user-facing, (detect) read-only, (state) internal)
── MODE STATE ──
Λ = { phase: Phase, R: Result, X: Context,
mismatches: Set(Mismatch), confirmed: Set(Mismatch),
adapted: Set(Mismatch), dismissed: Set(Mismatch),
remaining: Set(Mismatch),
history: List<(Mismatch, A)>, active: Bool,
cause_tag: String }
-- Invariant: mismatches = confirmed ∪ adapted ∪ dismissed ∪ remaining (pairwise disjoint)

Applicability over Correctness: When AI detects that a technically correct result may not fit the actual application context, it surfaces the mismatch with evidence rather than assuming the result is adequate. Correctness is necessary but not sufficient — contextual fit determines whether the result serves its purpose.
Formal predicate: correct(R) ∧ ¬warranted(R, X) — the output is correct but not warranted in this context (Dewey's warranted assertibility; Ryle's knowing-how vs knowing-that).
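A sketch of this gate predicate, assuming `fits_checks` is a list of hypothetical per-aspect fit predicates and results carry a `correct` flag (both are illustrative conventions, not part of the spec):

```python
def warranted(result, context, fits_checks):
    """warranted(R, X): R is correct and every aspect-level fit check passes."""
    return result["correct"] and all(check(result, context) for check in fits_checks)

def gate(result, context, fits_checks):
    """Activation gate: correct(R) AND NOT warranted(R, X) --
    the output is right, but not warranted in this context."""
    return result["correct"] and not warranted(result, context, fits_checks)
```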
| Protocol | Initiator | Deficit → Resolution | Focus |
|---|---|---|---|
| Prothesis | AI-guided | FrameworkAbsent → FramedInquiry | Perspective selection |
| Syneidesis | AI-guided | GapUnnoticed → AuditedDecision | Decision-point gaps |
| Hermeneia | Hybrid | IntentMisarticulated → ClarifiedIntent | Expression clarification |
| Telos | AI-guided | GoalIndeterminate → DefinedEndState | Goal co-construction |
| Aitesis | AI-guided | ContextInsufficient → InformedExecution | Pre-execution context inference |
| Epitrope | AI-guided | DelegationAmbiguous → CalibratedDelegation | Delegation calibration |
| Analogia | AI-guided | MappingUncertain → ValidatedMapping | Abstract-concrete mapping validation |
| Prosoche | User-initiated | ExecutionBlind → SituatedExecution | Execution-time risk evaluation |
| Epharmoge | AI-guided | ApplicationDecontextualized → ContextualizedExecution | Post-execution applicability |
| Katalepsis | User-initiated | ResultUngrasped → VerifiedUnderstanding | Comprehension verification |
Key differences:
Katalepsis verifies the user's understanding of the result (P'≅R) — Epharmoge verifies the result's applicability to context (R≅X). Convergence conditions are structurally incompatible.

Context fitness axis: Aitesis and Epharmoge form a pre/post pair on the context fitness axis. Aitesis asks "do I have enough context to execute well?" (factual uncertainties, User→AI). Epharmoge asks "does my execution actually fit the context?" (evaluative mismatches, AI→User). They are complementary, not redundant — Aitesis may gather sufficient context, yet the resulting execution may still not fit contextual constraints that only become visible post-execution.
Independence from Aitesis: Epharmoge's information source is the execution result itself (R) compared against observed context (X), not a re-scan of pre-execution context. This ensures non-circularity — even when Aitesis has fully resolved context uncertainties, Epharmoge can detect mismatches that emerge only from the actual output.
This protocol is conditional. AI-guided activation (Layer 2) requires operational experience with Aitesis (④) to validate the pre/post context fitness axis. Until this gate is satisfied, Epharmoge exists as a formal specification only and must not auto-activate via Layer 2.
Activation criteria: Observed pattern of "context gathered but application mismatched" in Aitesis inference operational data.
User-invocable activation (Layer 1, /contextualize) is always available regardless of this gate.
AI detects applicability mismatch after execution OR user calls /contextualize. Detection is silent (Phase 0); surfacing always requires user interaction via AskUserQuestion (Phase 1).
Application decontextualized = the execution result is technically correct but may not fit the actual application context.
Gate predicate:
decontextualized(R, X) ≡ correct(R) ∧ ∃ aspect(a, R, X) : ¬warranted(a, R, X)

Activation layers:
Layer 1: /contextualize slash command or description-matching input. Available regardless of the conditional gate.

Supersedes: Default post-execution patterns (move to next task without applicability check)
Retained: Safety boundaries, tool restrictions, user explicit instructions
Action: At Phase 1, call AskUserQuestion tool to present mismatch evidence for user judgment.
Protocol precedence: Default ordering places Epharmoge after Prosoche (execution-time attention before post-execution applicability) and before Katalepsis (applicability before comprehension). Katalepsis is structurally last — it requires completed AI work (R), so it is not subject to ordering choices. The user can override this default by explicitly requesting a different protocol first.
Heuristic signals for applicability mismatch detection (not hard gates):
| Signal | Detection |
|---|---|
| Environment assumption | Result assumes environment state not verified in current context |
| Convention mismatch | Result follows general best practices but project has local conventions |
| Scope overflow | Result addresses more or less than the observed use case requires |
| Temporal context | Result applies to a version, state, or phase that may have shifted |
Skip:
| Trigger | Effect |
|---|---|
| All mismatches resolved (adapted or dismissed) | Proceed with contextualized result |
| No mismatches detected (Phase 0 passes) | Execution stands as-is |
| User Esc key | Accept result without applicability review |
Mismatches are identified dynamically per execution result — no fixed taxonomy. Each mismatch carries an aspect, description, evidence, and severity; severity determines handling:
| Level | Criterion | Action |
|---|---|---|
| Critical | Result actively harmful in current context | Must resolve before using result |
| Significant | Result suboptimal or partially inappropriate | Surface to user for judgment |
| Minor | Result adequate but could fit better | Surface with pre-selected Dismiss option |
When multiple mismatches are identified, surface in severity order (Critical → Significant → Minor). Only one mismatch surfaced per Phase 1 cycle.
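The severity-ordered, one-per-cycle selection can be sketched as a small helper (dict-shaped mismatches are an illustrative convention):

```python
SEVERITY_ORDER = {"Critical": 0, "Significant": 1, "Minor": 2}

def next_to_surface(remaining):
    """Select the single mismatch for this Phase 1 cycle, Critical first."""
    if not remaining:
        return None
    return min(remaining, key=lambda m: SEVERITY_ORDER[m["severity"]])
```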
Evaluate execution result against application context. This phase is silent — no user interaction.
Evaluate R against context X: environment state, project conventions, use case scope, temporal validity, user constraints.
Mismatch criterion: correct(R) ∧ fits(R, X) (i.e., warranted(R, X)).
Output: Mᵢ with aspect, description, evidence, severity — proceed to Phase 1.
Information source: the execution result R itself compared against observable context X. NOT a re-scan of pre-execution context (non-circularity with Aitesis).
Scan scope: Completed execution output, project structure, observed conventions, session context. Does NOT re-execute or modify files.
Call the AskUserQuestion tool to present the highest-severity remaining mismatch.
Do NOT present mismatches as plain text. The tool call is mandatory — text presentation without tool = protocol violation.
Surfacing format (natural integration with execution completion):
Done. One thing to verify about applicability:
[Specific mismatch description]
[Evidence: what in the result and what in the context diverge]
Progress: [N addressed / M total mismatches]
Options:
1. **Confirm** — yes, this needs adaptation: [brief direction prompt]
2. **Dismiss** — acceptable as-is: [stated assumption about context fit]

If an adaptation direction is evident, include:

3. **[Specific adaptation]** — [what would change and why]

Design principles: options are specific and grounded in evidence from R and X.

After user response:
- Adapt the result to R' using Edit/Write tools.

After adaptation:
- Re-evaluate R' against X for remaining or newly emerged mismatches.
- Append (Mismatch, A) to history.

| Level | When | Format |
|---|---|---|
| Light | Minor severity mismatches only | AskUserQuestion with Dismiss as default option |
| Medium | Significant severity, evidence is clear | Structured AskUserQuestion with evidence |
| Heavy | Critical severity, multiple interacting mismatches | Detailed evidence + adaptation options |
| Rule | Structure | Effect |
|---|---|---|
| Gate specificity | activate(Epharmoge) only if correct(R) ∧ ∃ ¬warranted(a, R, X) | Prevents false activation on well-fitting results |
| Mismatch cap | One mismatch per Phase 1 cycle, severity order | Prevents post-execution question overload |
| Session immunity | Dismissed (aspect, description) → skip for session | Respects user's dismissal |
| Progress visibility | [N addressed / M total] in Phase 1 | User sees progress toward completion |
| Early exit | User can dismiss all at any Phase 1 | Full control over review depth |
| Cross-protocol cooldown | suppress(Epharmoge) if Aitesis.resolved_in_same_scope ∧ overlap(Aitesis.domains, Epharmoge.aspects) | Prevents same-scope pre+post stacking |
| Cooldown scope | Cooldown applies within recommendation chains only; direct /contextualize invocation is never suppressed | User authority preserved |
| Natural integration | "Done. One thing to verify:" pattern | Fits completion flow, not interrogation |
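Two of these rules — session immunity and the cross-protocol cooldown — reduce to small predicates. A sketch with hypothetical dict-shaped mismatches and flag arguments:

```python
def session_immune(mismatch, dismissed_pairs):
    """Session immunity: once an (aspect, description) pair is dismissed,
    skip it for the rest of the session."""
    return (mismatch["aspect"], mismatch["description"]) in dismissed_pairs

def suppressed(aitesis_resolved_same_scope, aitesis_domains,
               epharmoge_aspects, direct_invocation):
    """Cross-protocol cooldown: suppress Epharmoge after same-scope Aitesis
    with overlapping domains. Direct /contextualize is never suppressed."""
    if direct_invocation:
        return False
    return aitesis_resolved_same_scope and bool(
        set(aitesis_domains) & set(epharmoge_aspects))
```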
- Evidence is drawn from the actual result R and context X, not speculation.
- User invocation (/contextualize) is always available.