Detect application-context mismatch after execution: verify applicability when a correct output may not fit the actual context, producing a contextualized execution. Type: (ApplicationDecontextualized, AI, CONTEXTUALIZE, ExecutionResult) → ContextualizedExecution. Alias: Epharmoge (ἐφαρμογή).
Epharmoge (ἐφαρμογή): A dialogical act of verifying that AI-produced results fit the actual application context — from Aristotle's notion of practical application — resolving the gap between technical correctness and contextual appropriateness through structured mismatch surfacing and user-directed adaptation.
── FLOW ──
Epharmoge(R, X) → Eval(R, X) → Mᵢ? → Register(Mᵢ) → Q(Mᵢ[0]) → A → R' → Eval(R', X) → Mₑ? → (loop until contextualized)
── MORPHISM ──
(R, X)
→ evaluate(result, context) -- detect applicability mismatch
→ surface(mismatch, as_inquiry) -- present mismatch with evidence
→ adapt(result, direction) -- adapt result to context
→ ContextualizedExecution
requires: mismatch_detected(R, X) -- runtime gate (Phase 0)
deficit: ApplicationDecontextualized -- activation precondition (Layer 1/2)
preserves: X -- application context is fixed reference; morphism transforms R only
invariant: Applicability over Correctness
── TYPES ──
R = Execution result (AI's completed work output)
X = Application context (environment, constraints, user situation)
Eval = Applicability evaluation: (R, X) → Set(Mismatch)
Mismatch = { aspect: String, dimension: Dimension, description: String, evidence: String, severity: Severity, origin: Origin }
Dimension ∈ {Convention, Environment, Audience, Dependency} ∪ Emergent(Dimension)
Origin ∈ {Initial, Emerged(aspect)} -- mismatch provenance: initial scan or spawned by adapting parent aspect
Severity ∈ {Critical, Significant, Minor}
Mᵢ = Identified mismatches from Eval(R, X) -- origin = Initial
Mₑ = Newly emerged mismatches from Eval(R', X) -- origin = Emerged(adapted_aspect)
Register = Mᵢ → Set(Task) [Tool: TaskCreate] -- mismatch registration as tracked tasks
Q = Applicability inquiry (gate interaction)
A = User answer ∈ {Confirm(mismatch), Adapt(direction), Dismiss}
R' = Adapted result (contextualized output)
ContextualizedExecution = R' where (∀ task ∈ registered: task.status = completed) ∨ user_esc
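The type vocabulary above can be sketched as TypeScript definitions — a hedged illustration; type and field names mirror the spec, while the example values are invented for readability:

```typescript
// Hypothetical TypeScript rendering of the TYPES section.
type Dimension =
  | "Convention" | "Environment" | "Audience" | "Dependency"
  | { emergent: string };                      // Emergent(Dimension)
type Severity = "Critical" | "Significant" | "Minor";
type Origin = "Initial" | { emerged: string }; // Emerged(adapted_aspect)

interface Mismatch {
  aspect: string;
  dimension: Dimension;
  description: string;
  evidence: string;
  severity: Severity;
  origin: Origin;
}

// A = user answer at the gate.
type Answer =
  | { kind: "Confirm"; mismatch: Mismatch }
  | { kind: "Adapt"; direction: string }
  | { kind: "Dismiss" };

// Example: a Convention mismatch found on the initial scan (origin = Initial).
const example: Mismatch = {
  aspect: "error-handling style",
  dimension: "Convention",
  description: "Result raises exceptions; the project returns result objects",
  evidence: "Existing modules use a Result-style return convention",
  severity: "Significant",
  origin: "Initial",
};
```

The discriminated unions keep `Emergent` dimensions and `Emerged` origins open-ended, matching the spec's `∪ Emergent(Dimension)` escape hatch.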
── PHASE TRANSITIONS ──
Phase 0: R → Eval(R, X) → Mᵢ? -- applicability gate (silent)
Phase 1: Mᵢ → TaskCreate[all mismatches] → Qc(Mᵢ[0], evidence) → Stop → A -- register all, surface first [Tool]
Phase 2: A → adapt(A, R) → R' → TaskUpdate → Eval(R', X) → Mₑ? -- adaptation + update + re-scan [Tool]
── LOOP ──
After Phase 2: re-scan R' against X for remaining AND newly emerged mismatches.
If new mismatches from adaptation (Mₑ): TaskCreate → add to queue.
If remaining non-empty: return to Phase 1 (next by severity).
If adjudicated(R', X): all tasks completed → convergence.
User can exit at Phase 1 (early_exit option or Esc).
Continue until: contextualized(R') OR user ESC.
Mode remains active until convergence.
Convergence evidence: At adjudicated(R', X), present transformation trace — for each (m, _) ∈ Λ.state.history, show (ApplicationDecontextualized(m) → adaptation_result(m)). Convergence is demonstrated, not asserted.
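The loop above can be sketched in TypeScript — a minimal illustration, assuming a caller-supplied `evaluate` (the AI's re-scan of R' against X) and `answer` (the gate interaction); both are hypothetical stubs here, and answers are reduced to Adapt/Dismiss for brevity:

```typescript
type Severity = "Critical" | "Significant" | "Minor";
interface Mismatch { aspect: string; severity: Severity }
const rank: Record<Severity, number> = { Critical: 0, Significant: 1, Minor: 2 };

function runLoop(
  initial: Mismatch[],
  evaluate: (adaptedAspect: string) => Mismatch[], // re-scan: returns emerged Mₑ
  answer: (m: Mismatch) => "Adapt" | "Dismiss",    // gate: user judgment
): { history: [string, string][]; scanCount: number } {
  // Severity-ordered queue: Critical → Significant → Minor.
  const queue = [...initial].sort((a, b) => rank[a.severity] - rank[b.severity]);
  const history: [string, string][] = [];
  let scanCount = 1; // initial Eval(R, X)
  while (queue.length > 0) {
    const m = queue.shift()!;   // one mismatch per Phase 1 cycle
    const a = answer(m);
    history.push([m.aspect, a]);
    if (a === "Adapt") {
      // Phase 2: adapt, then mandatory re-scan; emerged mismatches join the queue.
      queue.push(...evaluate(m.aspect));
      queue.sort((a2, b2) => rank[a2.severity] - rank[b2.severity]);
      scanCount++;
    }
  }
  return { history, scanCount }; // empty queue → adjudicated → convergence
}
```

The returned `history` is exactly the transformation trace the convergence evidence requirement asks for.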
── CONVERGENCE ──
applicable(R', X) = ∀ aspect(a, R', X) : warranted(a, R', X)
warranted(a, R, X) = correct(R) ∧ fits(R, X) -- correctness AND contextual fit required (not material conditional)
adjudicated(R', X) = ∀ aspect(a, R', X) : warranted(a, R', X) ∨ dismissed(a)
contextualized(R') = adjudicated(R', X) ∨ user_esc
-- stratification: applicable(R', X) ⊆ adjudicated(R', X)
-- operational proxy: ∀ task completed ⟹ adjudicated(R', X) ⟹ contextualized(R')
progress(Λ) = |completed_tasks| / |total_tasks| -- may regress when re-scan discovers new mismatches
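The convergence predicates translate almost directly — a sketch in which per-aspect `warranted`/`dismissed` flags stand in for the AI's judgment (an assumption; the spec leaves that judgment internal):

```typescript
interface TrackedAspect { warranted: boolean; dismissed: boolean }

// adjudicated(R', X): every aspect is warranted or explicitly dismissed.
const adjudicated = (aspects: TrackedAspect[]): boolean =>
  aspects.every((a) => a.warranted || a.dismissed);

// applicable(R', X): strictly stronger — every aspect warranted, no dismissals.
const applicable = (aspects: TrackedAspect[]): boolean =>
  aspects.every((a) => a.warranted);

// progress(Λ): completed over total; may regress when a re-scan grows the total.
const progress = (completed: number, total: number): number =>
  total === 0 ? 1 : completed / total;
```

Note the stratification: any aspect set satisfying `applicable` also satisfies `adjudicated`, but not vice versa — dismissal widens adjudication without widening applicability.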
── TOOL GROUNDING ──
-- Realization: gate → TextPresent+Stop; relay → TextPresent+Proceed
Eval (detect) → Internal analysis (no external tool)
Qc (gate) → present (mandatory; Esc key → loop termination at LOOP level, not an Answer)
adapt (modify) → Edit, Write (result adaptation based on user direction)
-- (modify): tool call that changes existing artifacts (distinct from (extern) user-facing, (detect) read-only, (state) internal)
Mᵢ/Mₑ (state) → TaskCreate/TaskUpdate (mismatch tracking with progress visibility)
converge (relay) → TextPresent+Proceed (convergence evidence trace; proceed with contextualized execution)
── ELIDABLE CHECKPOINTS ──
-- Axis: relay/gated = interaction kind; always_gated/elidable = regret profile
Phase 1 Qc (applicability) → always_gated (gated: Confirm/Dismiss/Adapt applicability judgment)
── MODE STATE ──
Λ = { phase: Phase, R: Result, X: Context,
state: Σ, active: Bool, cause_tag: String }
Σ = { history: List<(Mismatch, A)>, scan_count: Nat }
Applicability over Correctness: When AI detects that a technically correct result may not fit the actual application context, it surfaces the mismatch with evidence rather than assuming the result is adequate. Correctness is necessary but not sufficient — contextual fit determines whether the result serves its purpose.
Formal predicate: correct(R) ∧ ¬warranted(R, X) — the output is correct but not warranted in this context (Dewey's warranted assertibility; Ryle's knowing-how vs knowing-that).
| Protocol | Initiator | Deficit → Resolution | Focus |
|---|---|---|---|
| Prothesis | AI-guided | FrameworkAbsent → FramedInquiry | Perspective selection |
| Syneidesis | AI-guided | GapUnnoticed → AuditedDecision | Decision-point gaps |
| Hermeneia | Hybrid | IntentMisarticulated → ClarifiedIntent | Expression clarification |
| Telos | AI-guided | GoalIndeterminate → DefinedEndState | Goal co-construction |
| Horismos | AI-guided | BoundaryUndefined → DefinedBoundary | Epistemic boundary definition |
| Aitesis | AI-guided | ContextInsufficient → InformedExecution | Context sufficiency sensing |
| Analogia | AI-guided | MappingUncertain → ValidatedMapping | Abstract-concrete mapping validation |
| Prosoche | User-initiated | ExecutionBlind → SituatedExecution | Risk-assessed execution |
| Epharmoge | AI-guided | ApplicationDecontextualized → ContextualizedExecution | Post-execution applicability |
| Katalepsis | User-initiated | ResultUngrasped → VerifiedUnderstanding | Comprehension verification |
Key differences:
Katalepsis verifies the user's grasp of the result (P'≅R) — Epharmoge verifies the result's applicability to context (R≅X). Convergence conditions are structurally incompatible.
Context fitness axis: Aitesis and Epharmoge form a pre/post pair on the context fitness axis. Aitesis asks "do I have enough context to execute well?" (factual uncertainties, User→AI). Epharmoge asks "does my execution actually fit the context?" (evaluative mismatches, AI→User). They are complementary, not redundant — Aitesis may gather sufficient context, yet the resulting execution may still not fit contextual constraints that only become visible post-execution.
Independence from Aitesis: Epharmoge's information source is the execution result itself (R) compared against observed context (X), not a re-scan of pre-execution context. This ensures non-circularity — even when Aitesis has fully resolved context uncertainties, Epharmoge can detect mismatches that emerge only from the actual output.
This protocol is conditional. AI-guided activation (Layer 2) requires operational experience with Aitesis (④) to validate the pre/post context fitness axis. Until this gate is satisfied, Epharmoge exists as a formal specification only and must not auto-activate via Layer 2.
Activation criteria: Observed pattern of "context gathered but application mismatched" in Aitesis inference operational data.
User-invocable activation (Layer 1 /contextualize) is always available regardless of this gate.
AI detects applicability mismatch after execution OR user calls /contextualize. Detection is silent (Phase 0); surfacing always requires user interaction via gate interaction (Phase 1).
Application decontextualized = the execution result is technically correct but may not fit the actual application context.
Gate predicate:
decontextualized(R, X) ≡ correct(R) ∧ ∃ aspect(a, R, X) : ¬warranted(a, R, X)
Activation layers:
Layer 1 (user-invocable): /contextualize slash command or description-matching input. Available regardless of the conditional gate.
Supersedes: Default post-execution patterns (moving to the next task without an applicability check)
Retained: Safety boundaries, tool restrictions, user explicit instructions
Action: At Phase 1, present mismatch evidence via gate interaction and yield turn.
Protocol precedence: Activation order position 9/9 (graph.json is authoritative source for information flow). Concern cluster: Verification.
Advisory relationships: Receives from Prosoche (advisory: execution-time attention provides post-execution applicability context); Aitesis (suppression: pre+post stacking prevention). Katalepsis is structurally last.
Heuristic signals for applicability mismatch detection (not hard gates):
| Signal | Detection |
|---|---|
| Environment assumption | Result assumes environment state not verified in current context |
| Convention mismatch | Result follows general best practices but project has local conventions |
| Scope overflow | Result addresses more or less than the observed use case requires |
| Temporal context | Result applies to a version, state, or phase that may have shifted |
Skip:
| Trigger | Effect |
|---|---|
| All mismatch tasks completed (adapted or dismissed) | Proceed with contextualized result |
| No mismatches detected (Phase 0 passes) | Execution stands as-is |
| User Esc key | Accept result without applicability review |
Mismatches are identified across named dimensions — working hypotheses for systematic detection, not exhaustive categories.
| Dimension | Detection | Question Form |
|---|---|---|
| Convention | Result follows general patterns but project has local conventions | "This follows best practices, but your project uses [local pattern]" |
| Environment | Result assumes environment state that differs from actual deployment context | "This assumes [env state], but your context has [actual state]" |
| Audience | Result targets a different audience than the actual consumers | "This is written for [assumed audience], but [actual audience] will use it" |
| Dependency | Result interacts with components whose constraints weren't considered | "This depends on [component] which has [constraint not considered]" |
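The question forms in the table can be treated as templates keyed by dimension — a sketch; the slot values would come from the mismatch's evidence, and the fill-ins in the test are invented:

```typescript
type NamedDimension = "Convention" | "Environment" | "Audience" | "Dependency";

// Question templates from the dimension table; slots are filled from evidence.
const questionForm: Record<NamedDimension, (assumed: string, actual: string) => string> = {
  Convention: (_assumed, actual) =>
    `This follows best practices, but your project uses ${actual}`,
  Environment: (assumed, actual) =>
    `This assumes ${assumed}, but your context has ${actual}`,
  Audience: (assumed, actual) =>
    `This is written for ${assumed}, but ${actual} will use it`,
  Dependency: (assumed, actual) =>
    `This depends on ${assumed} which has ${actual}`,
};
```

Emergent dimensions, by definition, have no fixed template and would be phrased ad hoc from their evidence.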
Emergent mismatch detection: detect an Emergent mismatch when a genuine applicability mismatch falls outside the named dimensions.
Emergent mismatches must satisfy morphism ApplicationDecontextualized → ContextualizedExecution; boundary: contextual fit (in-scope) vs. intent expression (→ /clarify) or decision gaps (→ /gap).
Each mismatch is assigned a severity level:
| Level | Criterion | Action |
|---|---|---|
| Critical | Result actively harmful in current context | Must resolve before using result |
| Significant | Result suboptimal or partially inappropriate | Surface to user for judgment |
| Minor | Result adequate but could fit better | Surface with pre-selected Dismiss option |
When multiple mismatches are identified, surface in severity order (Critical → Significant → Minor). Only one mismatch surfaced per Phase 1 cycle.
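The one-mismatch-per-cycle severity ordering can be sketched as a selection function (illustrative only):

```typescript
type Severity = "Critical" | "Significant" | "Minor";
const severityOrder: Severity[] = ["Critical", "Significant", "Minor"];

interface Queued { aspect: string; severity: Severity }

// Pick the single mismatch to surface in this Phase 1 cycle.
function nextToSurface(queue: Queued[]): Queued | undefined {
  return [...queue].sort(
    (a, b) => severityOrder.indexOf(a.severity) - severityOrder.indexOf(b.severity),
  )[0];
}
```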
Evaluate execution result against application context. This phase is silent — no user interaction.
Evaluate R against context X: environment state, project conventions, use case scope, temporal validity, user constraints.
Test each aspect for correct(R) ∧ fits(R, X) (i.e., warranted(R, X)).
Record identified mismatches as Mᵢ with aspect, description, evidence, severity, origin=Initial — then proceed to Phase 1.
Information source: The execution result R itself compared against observable context X. NOT a re-scan of pre-execution context (non-circularity with Aitesis).
Scan scope: Completed execution output, project structure, observed conventions, session context. Does NOT re-execute or modify files.
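The Phase 0 gate predicate can be sketched directly — per-aspect `fits` judgments are stand-ins for the AI's internal analysis (an assumption of this sketch):

```typescript
interface AspectCheck { aspect: string; fits: boolean }

// decontextualized(R, X) ≡ correct(R) ∧ ∃ aspect a : ¬warranted(a, R, X)
// With warranted = correctness AND contextual fit, a correct result activates
// the gate exactly when some aspect fails contextual fit.
function decontextualized(correctR: boolean, checks: AspectCheck[]): boolean {
  return correctR && checks.some((c) => !c.fits);
}
```

The `correctR` conjunct is the gate-specificity rule in action: an incorrect result is out of scope for this protocol, and a correct, well-fitting result never activates it.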
Register all identified mismatches as Tasks (TaskCreate), then present the highest-severity remaining mismatch via gate interaction.
Task format:
TaskCreate({
subject: "[Mismatch:aspect] description",
description: "Evidence and context for this mismatch (severity: X)",
activeForm: "Surfacing [aspect] mismatch"
})
Do NOT bypass the gate. Structured presentation with turn yield is mandatory — presenting content without yielding for response = protocol violation.
Surfacing format (natural integration with execution completion):
Present the mismatch findings as text output:
Then present:
How would you like to handle this applicability mismatch?
Options:
1. **Confirm** — yes, this needs adaptation: [brief direction prompt]
2. **Dismiss** — acceptable as-is: [stated assumption about context fit]
If adaptation direction is evident, materialize Adapt(direction) as a concrete option:
3. **[Specific adaptation]** — [what would change and why]
This is a contextual materialization of Adapt(direction) — the formal answer type remains Adapt, with the direction pre-populated from AI analysis.
Design principles: evidence-based presentation — every surfaced mismatch is grounded in observed R and X.
After user response:
Adapt: produce R' using Edit/Write tools → TaskUpdate (completed)
After adaptation — re-scan:
Evaluate R' against X for remaining AND newly emerged mismatches.
New mismatches (Mₑ) from adaptation: TaskCreate with origin=Emerged(adapted_aspect) → add to queue.
Append (Mismatch, A) to Σ.history, increment Σ.scan_count.
Re-scan trigger: Adaptation changes R, and the changed R' may exhibit new mismatches not present in the original result. Always re-scan after each adaptation — any adaptation may introduce mismatches in dimensions unrelated to the original aspect.
Chain discovery: When Mₑ emerges from an adaptation, the origin = Emerged(parent_aspect) field records the causal chain, making adaptation-induced mismatches traceable to their parent aspect.
| Level | When | Format |
|---|---|---|
| Light | Minor severity mismatches only | Gate interaction with Dismiss as default option |
| Medium | Significant severity, evidence is clear | Structured gate interaction with evidence |
| Heavy | Critical severity, multiple interacting mismatches | Detailed evidence + adaptation options |
| Rule | Structure | Effect |
|---|---|---|
| Gate specificity | activate(Epharmoge) only if correct(R) ∧ ∃ ¬warranted(a, R, X) | Prevents false activation on well-fitting results |
| Mismatch cap | One mismatch per Phase 1 cycle, severity order | Prevents post-execution question overload |
| Session immunity | Dismissed (aspect, description) → skip for session | Respects user's dismissal |
| Progress visibility | Task list renders [N addressed / M total] in Phase 1 | User sees progress; total may grow on re-scan |
| Early exit | User can dismiss all at any Phase 1 | Full control over review depth |
| Cross-protocol cooldown | suppress(Epharmoge) if Aitesis.resolved_in_same_scope ∧ overlap(Aitesis.domains, Epharmoge.aspects) | Prevents same-scope pre+post stacking |
| Cooldown scope | Cooldown applies within recommendation chains only; direct /contextualize invocation is never suppressed | User authority preserved |
| Natural integration | "Done. One thing to verify:" pattern | Fits completion flow, not interrogation |
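The cross-protocol cooldown and cooldown-scope rules from the table combine into one suppression check — a sketch under the assumption that Aitesis exposes its resolved scope and domains (hypothetical state shape):

```typescript
interface AitesisState { resolvedInSameScope: boolean; domains: Set<string> }

// Suppress Epharmoge only when Aitesis already resolved the same scope AND the
// domains overlap — and never when the user invoked /contextualize directly.
function suppressEpharmoge(
  aitesis: AitesisState,
  epharmogeAspects: Set<string>,
  directInvocation: boolean,
): boolean {
  if (directInvocation) return false; // user authority: /contextualize is never suppressed
  const overlap = Array.from(aitesis.domains).some((d) => epharmogeAspects.has(d));
  return aitesis.resolvedInSameScope && overlap;
}
```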
Evidence is grounded in observed R and context X, not speculation. Direct user invocation (/contextualize) is always available.