Context-adaptive delegation calibration through scenario-based interview. Detects execution context to select entry mode (Solo/TeamAugment/TeamRestructure) and produces a DelegationContract. Type: (DelegationAmbiguous, AI, CALIBRATE, TaskScope) → CalibratedDelegation. Alias: Epitrope(ἐπιτροπή).
Install with the Tessl CLI:
npx tessl i github:jongwony/epistemic-protocols --skill calibrate48
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the Tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery — 35%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is heavily laden with technical jargon and proprietary terminology (DelegationContract, CalibratedDelegation, Epitrope) that would be meaningless to most users and difficult for Claude to match against natural language requests. While it attempts to describe functionality, it reads more like an internal API specification than a user-facing skill description. The Greek alias adds confusion rather than clarity.
Suggestions
Replace technical jargon with plain language describing user-facing benefits (e.g., 'Helps determine how to divide work between human and AI collaborators' instead of 'Context-adaptive delegation calibration')
Add explicit 'Use when...' clause with natural trigger terms users would say, such as 'delegate tasks', 'team collaboration', 'assign work', 'who should do what'
Remove or minimize proprietary type signatures and aliases that don't help Claude match user intent
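Taken together, a revised description might look like the sketch below. The frontmatter format, field names, and exact wording are illustrative assumptions, not part of the reviewed skill:

```yaml
# Hypothetical SKILL.md frontmatter — wording and triggers are illustrative only
name: calibrate
description: >
  Helps decide how to divide work between human and AI collaborators by
  interviewing the user with concrete scenarios, then producing a written
  delegation agreement. Use when the user says things like "delegate tasks",
  "who should do what", "assign work", or "team collaboration".
```

Note how the plain-language benefit comes first and the natural trigger phrases follow in a "Use when..." clause, so Claude can match everyday requests rather than proprietary type names.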
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain ('delegation calibration', 'scenario-based interview') and some actions ('Detects execution context', 'select entry mode', 'produces a DelegationContract'), but uses abstract, technical language that doesn't clearly convey concrete user-facing actions. | 2 / 3 |
| Completeness | Describes what the skill does (calibration through interview, produces a DelegationContract) but lacks explicit 'Use when...' guidance. The type signature hints at when to use it but provides no natural-language triggers for Claude to match against user requests. | 2 / 3 |
| Trigger Term Quality | Uses highly technical jargon ('DelegationAmbiguous', 'CalibratedDelegation', 'TaskScope', 'ἐπιτροπή') that users would never naturally say. No common terms like 'delegate', 'assign tasks', or 'team coordination' that users might actually use. | 1 / 3 |
| Distinctiveness / Conflict Risk | The technical terminology makes it somewhat distinct, but the 'delegation' and 'team' concepts could overlap with project-management or task-assignment skills. The niche is unclear without understanding the proprietary terminology. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation — 35%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill content prioritizes formal academic specification over practical guidance. While it demonstrates sophisticated protocol design with clear phase transitions and type definitions, the dense notation (morphisms, type signatures, set theory) adds significant cognitive overhead without proportional benefit for Claude's execution. The concrete examples (scenario formats, contract templates) are valuable but buried under excessive formalism.
Suggestions
Replace formal notation (FLOW, MORPHISM, TYPES sections) with a simple numbered workflow showing the 5 phases with 1-2 sentences each
Move the distinction table and detailed type definitions to a separate reference file, keeping only the core 'Calibration over Declaration' principle inline
Lead with the concrete scenario templates and contract formats (Phase 2/4 examples) as the primary content, since these are what Claude actually needs to execute
Consolidate the activation triggers, skip conditions, and deactivation rules into a single decision tree or flowchart rather than scattered tables
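The consolidation suggested above could take the shape of a single short decision tree. The branch conditions below are illustrative placeholders; only the three mode names (Solo, TeamAugment, TeamRestructure) come from the skill itself:

```text
Is the delegation split already explicit in the request?
├─ yes → skip calibration; proceed with the stated split
└─ no  → is the user working solo or within a team?
         ├─ solo → enter Solo mode; run the scenario interview
         └─ team → do existing roles stay intact?
                   ├─ yes → TeamAugment
                   └─ no  → TeamRestructure
```

A tree like this replaces the scattered activation, skip, and deactivation tables with one place Claude can consult at the top of the workflow.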
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose, with extensive formal notation, type definitions, and abstract specifications that Claude doesn't need explained. The 'FLOW', 'MORPHISM', 'TYPES', and 'ENTRY TYPES' sections are dense academic formalism that could be condensed into simple procedural steps. | 1 / 3 |
| Actionability | Contains concrete phase descriptions and example prompts (Phase 0 mode selection, Phase 2 scenario format, Phase 4 contract format), but the formal notation dominates the executable guidance. The actual tool calls and user interactions are buried under layers of type theory. | 2 / 3 |
| Workflow Clarity | The phase transitions (0→1→2→3→4) are defined with explicit conditions, and the LOOP section describes the iteration logic. However, validation checkpoints are implicit rather than explicit, and the formal notation obscures the practical workflow sequence. | 2 / 3 |
| Progressive Disclosure | References external files (references/operational-guide.md, references/cross-protocol.md) appropriately, but the main document itself is a monolithic wall of formal specifications. The distinction table and rules section could be moved to separate reference files. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Validation — 100%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.