# Skill Review: clarify

> Clarify intent-expression gaps. Extracts clarified intent when what you mean differs from what you said. Type: (IntentMisarticulated, Hybrid, EXTRACT, Expression) → ClarifiedIntent. Alias: Hermeneia(ἑρμηνεία).
- Quality: 17%
- Impact: Pending (no eval scenarios have been run)
- Validation: Passed (no known issues)

To optimize this skill with Tessl:

```
npx tessl skill review --optimize ./hermeneia/skills/clarify/SKILL.md
```

## Quality

Does it follow best practices?

### Discovery: 7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is heavily laden with academic jargon, invented type signatures, and a Greek alias that provide no practical value for skill selection. It fails to describe concrete actions in plain language and completely lacks explicit trigger guidance ('Use when...'). A user needing help rephrasing or clarifying their intent would never use the terminology present in this description.
#### Suggestions

- Replace jargon and type signatures with plain-language concrete actions, e.g., 'Helps rephrase unclear requests, identifies what the user likely meant when their wording is ambiguous or confusing.'
- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when a user's message is ambiguous, poorly worded, or when they say something like "that's not what I meant" or "let me rephrase."'
- Remove the alias notation and type signature (e.g., 'Hermeneia(ἑρμηνεία)' and '(IntentMisarticulated, Hybrid, EXTRACT, Expression) → ClarifiedIntent') as they add no selection value and obscure the skill's purpose.
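A description that applies all three suggestions might look like the following sketch. This is one illustrative rewrite, not the skill's actual metadata; the `name` and the frontmatter layout are assumed from the file path shown in the optimize command above.

```yaml
# SKILL.md frontmatter -- illustrative rewrite, not the published metadata
name: clarify
description: >
  Helps rephrase unclear requests and identifies what the user likely
  meant when their wording is ambiguous or confusing. Use when a user's
  message is ambiguous or poorly worded, or when they say something like
  "that's not what I meant" or "let me rephrase."
```

Note how every phrase here is something a user might actually type, which is what the Trigger Term Quality dimension below scores.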
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses abstract, jargon-heavy language like 'intent-expression gaps', 'IntentMisarticulated', 'Hybrid', and 'Hermeneia(ἑρμηνεία)'. It does not list concrete, understandable actions a user would recognize. The type signature notation is opaque rather than descriptive. | 1 / 3 |
| Completeness | The 'what' is vaguely stated ('extracts clarified intent') but not in concrete terms. There is no 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill, which per the rubric caps completeness at 2; the weak 'what' brings it to 1. | 1 / 3 |
| Trigger Term Quality | No natural keywords a user would actually say are present. Terms like 'IntentMisarticulated', 'Hermeneia', and 'Expression' are technical jargon or invented taxonomy labels, not phrases users would use when seeking help clarifying what they mean. | 1 / 3 |
| Distinctiveness / Conflict Risk | The concept of clarifying intent-expression gaps is somewhat niche and unlikely to overlap with common skills like PDF processing or code generation. However, the description is so abstract that it could conflict with any conversational clarification or disambiguation skill. | 2 / 3 |
| **Total** | | **5 / 12** (Passed) |
### Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is an extremely over-specified formal protocol that prioritizes theoretical completeness over practical usability. While the underlying intent (clarifying user expression gaps through structured dialogue) is sound, the implementation buries actionable guidance under layers of formal type theory, morphism definitions, and exhaustive edge case handling. The document would benefit enormously from radical condensation: extracting the core workflow into ~50 lines, with references to detailed specifications for edge cases and formal definitions.
#### Suggestions

- Reduce the main skill body to a concise overview (~50-80 lines) covering the core workflow phases with concrete examples, and move formal type definitions, morphism notation, and state machine specifications to a separate REFERENCE.md file.
- Remove or relocate the 12-protocol comparison table; it explains other protocols Claude doesn't need to understand to execute this one, and a brief 1-2 sentence boundary note suffices inline.
- Consolidate the 20 rules into 5-7 essential principles; many rules restate what's already specified in the phase descriptions (e.g., rules 13-20 are implementation details that could move to a separate RULES.md).
- Add a concrete end-to-end example showing a real user expression being clarified through the phases, demonstrating the actual dialogue flow rather than abstract template formats.
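Taken together, these suggestions amount to restructuring the skill directory for progressive disclosure. One possible layout is sketched below; REFERENCE.md and RULES.md are hypothetical file names proposed by the suggestions, while references/socratic-style.md is the skill's one existing external reference.

```
hermeneia/skills/clarify/
├── SKILL.md               # ~50-80 line overview: phases, transitions, one worked example
├── REFERENCE.md           # (hypothetical) type definitions, morphisms, state machine
├── RULES.md               # (hypothetical) consolidated rules, incl. details from rules 13-20
└── references/
    └── socratic-style.md  # existing reference file
```

An agent would then load SKILL.md on every invocation and pull in the reference files only when a phase actually requires them.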
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose and over-engineered. The formal type theory notation, morphism definitions, and extensive state machine specifications are far beyond what Claude needs to perform intent clarification. Much of this content explains abstract concepts and formal semantics that Claude could infer from a much shorter description. The document is ~400+ lines for what could be expressed in ~50-80 lines. | 1 / 3 |
| Actionability | The phases are described with some concrete examples (e.g., option presentation formats in Phases 0, 1a, and 2), but much of the guidance is abstract formal specification rather than executable steps. The code blocks show template formats rather than real executable code, and the formal notation (morphisms, type definitions) describes rather than instructs. | 2 / 3 |
| Workflow Clarity | The multi-phase workflow (Phase 0→1a→1b→2→3→loop) is clearly sequenced with explicit transitions and convergence conditions. However, the workflow is buried under layers of formal notation and redundant specification. Validation checkpoints exist (user confirmation gates), but the sheer complexity makes the workflow hard to follow, and the well-defined loop termination conditions are scattered across multiple sections. | 2 / 3 |
| Progressive Disclosure | The entire specification is a monolithic wall of text with no references to external files for detailed content. The formal type definitions, gap taxonomy, phase specifications, rules, and edge cases are all inline. Only one external reference exists (references/socratic-style.md). Content like the comparison table with 12 other protocols, the detailed response classification discriminator, and the 20 rules could all be split into separate reference files. | 1 / 3 |
| **Total** | | **6 / 12** (Passed) |
### Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed. Validation for skill structure reported no warnings or errors.