
contextualize

Detect application-context mismatch after execution. Verifies applicability when correct output may not fit the actual context, producing contextualized execution. Type: (ApplicationDecontextualized, AI, CONTEXTUALIZE, Result) → ContextualizedExecution. Alias: Epharmoge(ἐφαρμογή).


Quality: 17% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./epharmoge/skills/contextualize/SKILL.md

Quality

Discovery: 7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is extremely abstract and academic, using type signatures, Greek aliases, and jargon that would be incomprehensible to most users. It fails to describe concrete actions, lacks natural trigger terms, and provides no explicit guidance on when Claude should select this skill. It would be nearly impossible for Claude to correctly match this skill to a user request.

Suggestions

- Replace abstract jargon with plain-language descriptions of what the skill actually does, e.g. 'Checks whether generated output fits the user's actual context and adjusts it accordingly.'
- Add an explicit 'Use when...' clause with natural trigger terms, such as 'Use when the user says the output doesn't fit their situation, seems generic, or needs to be adapted to their specific context.'
- Remove the type signature and Greek alias, which add no value for skill selection and obscure the skill's purpose.
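Put together, a description following these suggestions might look like the sketch below. This is illustrative only: the frontmatter fields follow the common SKILL.md convention, and the wording is an assumption, not the skill's actual metadata.

```yaml
# Hypothetical SKILL.md frontmatter for this skill (illustrative sketch)
name: contextualize
description: >
  Checks whether generated output fits the user's actual context and adjusts
  it accordingly. Use when the user says the output doesn't fit their
  situation, seems generic, or needs to be adapted to their specific setup
  (tools, versions, conventions, constraints).
```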

Dimension scores

- Specificity (1 / 3): The description uses highly abstract, jargon-heavy language like 'application-context mismatch', 'contextualized execution', and a type signature. No concrete actions a user would recognize are listed.
- Completeness (1 / 3): The 'what' is buried in abstract language and the 'when' is entirely missing; there is no 'Use when...' clause or equivalent explicit trigger guidance. Both dimensions are very weak.
- Trigger Term Quality (1 / 3): There are no natural keywords a user would say. Terms like 'ApplicationDecontextualized', 'Epharmoge(ἐφαρμογή)', and 'ContextualizedExecution' are opaque technical/academic jargon that no user would naturally use in a request.
- Distinctiveness / Conflict Risk (2 / 3): The description is so abstract and niche that it is unlikely to conflict with other skills, but it is also so vague that it is unclear what domain it belongs to, making it hard to distinguish positively rather than simply being ignored.

Total: 5 / 12 (Passed)

Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is an extremely dense formal specification that prioritizes mathematical rigor and philosophical grounding over practical usability. While the underlying workflow (detect mismatch → surface → adapt → re-scan) is sound, it is buried under layers of formal notation, type theory, and extensive cross-protocol comparisons that consume enormous token budget without proportional actionability gains. The content would benefit dramatically from splitting into a concise operational overview and separate reference files.
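The underlying workflow is easier to see stripped of the formalism. Below is a minimal runnable sketch of the detect → surface → adapt → re-scan loop; the function names and dict-based context model are invented for illustration and are not the skill's actual API.

```python
# Illustrative sketch of the contextualize loop (invented names, not the skill's API).

def detect_mismatch(result, context):
    """Return the context keys on which `result` does not fit `context`."""
    return [key for key, expected in context.items() if result.get(key) != expected]

def contextualize(result, context, max_passes=3):
    """Adapt `result` to `context`, re-scanning until convergence or a pass limit."""
    for _ in range(max_passes):          # re-scan loop with a convergence bound
        mismatches = detect_mismatch(result, context)
        if not mismatches:               # convergence: nothing left to adapt
            return result
        for key in mismatches:           # surface + adapt each mismatch
            result[key] = context[key]
    return result

# Example from the review: output assumes pip, but the project uses poetry.
adapted = contextualize({"package_manager": "pip"}, {"package_manager": "poetry"})
```

The `max_passes` bound stands in for the protocol's convergence criteria: the loop stops either when a re-scan finds no remaining mismatch or when the pass budget is exhausted.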

Suggestions

- Reduce the main file to ~50-80 lines covering the core workflow (Phase 0→1→2), the surfacing format, and activation triggers; move formal type specifications, the 12-protocol comparison table, and detailed rules to separate reference files.
- Add one or two concrete worked examples showing a real mismatch detection scenario end to end (e.g. 'User asks to set up a Python project → result uses pip but project uses poetry → mismatch surfaced → adapted').
- Eliminate redundant representations of the same workflow: the FLOW, MORPHISM, PHASE TRANSITIONS, and LOOP sections all describe the same process in different notations. Pick one and reference the formal spec externally.
- Remove philosophical citations (Dewey, Ryle, Aristotle) and explanations of formal predicates that don't change Claude's operational behavior; these consume tokens without adding actionable guidance.
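A restructured main file along these lines could look like the skeleton below. The file names, section titles, and reference layout are assumptions for illustration, not taken from the repository.

```markdown
# contextualize

Checks whether generated output fits the user's actual context and adapts it.

## Workflow
1. Phase 0: scan the result against the project's actual context.
2. Phase 1: surface any mismatch ("Done. One thing to verify: ...").
3. Phase 2: adapt the result, then re-scan until no mismatch remains.

## Worked example
User asks to set up a Python project → result uses pip, but the project uses
poetry → mismatch surfaced → instructions adapted to poetry.

## Reference files
- references/formal-spec.md: type signatures and morphism notation
- references/protocol-comparison.md: the 12-protocol comparison table
- references/rules.md: the full rule set
```

Keeping only the workflow, surfacing format, and one worked example inline matches the progressive-disclosure pattern the review asks for.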

Dimension scores

- Conciseness (1 / 3): Extremely verbose at over 400 lines. Extensively explains formal type theory, morphisms, convergence predicates, and philosophical references (Dewey, Ryle, Aristotle) that Claude doesn't need spelled out. The formal specification notation is repeated in multiple equivalent forms (flow, morphism, types, phase transitions), and the table comparing 12+ protocols is excessive inline content.
- Actionability (2 / 3): The protocol does provide concrete phase-by-phase steps and specific surfacing formats with example text patterns ('Done. One thing to verify:'). However, much of the content is abstract formal specification rather than executable guidance: the TaskCreate format is pseudocode, and there are no concrete worked examples showing a real mismatch detection scenario from start to finish.
- Workflow Clarity (2 / 3): The Phase 0→1→2 workflow is clearly sequenced, with re-scan loops and convergence criteria. However, it is buried in dense formal notation and repeated in multiple equivalent representations (the FLOW, MORPHISM, PHASE TRANSITIONS, and LOOP sections all describe the same process). The validation/feedback loop exists but is obscured by formalism, and the re-scan trigger and chain-discovery mechanisms, while well specified, are hard to follow operationally.
- Progressive Disclosure (1 / 3): A monolithic wall of text with no references to external files for detailed content. The 12-protocol comparison table, the full formal type system, the mismatch dimension taxonomy, the UX safeguards, and all 16 rules are inline. Content that could be split out (the formal specification, the comparison table, the detailed rules) sits in one massive document with no navigation aids or external references.

Total: 6 / 12 (Passed)

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 11 / 11 checks passed. No warnings or errors.

Repository: jongwony/epistemic-protocols (reviewed)
