Achieve certain comprehension after AI work. Verifies understanding when results remain ungrasped, producing verified understanding. Type: (ResultUngrasped, User, VERIFY, Result) → VerifiedUnderstanding. Alias: Katalepsis(κατάληψις).
13%

Does it follow best practices?

Impact: Pending (no eval scenarios have been run).
Passed: no known issues.
Optimize this skill with Tessl
npx tessl skill review --optimize ./katalepsis/skills/grasp/SKILL.md

Quality
Discovery — 0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description fails on all dimensions due to its use of abstract philosophical terminology, lack of concrete actions, absence of natural trigger terms, and missing explicit usage guidance. The cryptic type signature format and Greek term 'Katalepsis' make it nearly impossible for Claude to know when to select this skill or what it actually does.
Suggestions
Replace abstract language with concrete actions (e.g., 'Explains complex results step-by-step, provides analogies, and confirms user comprehension through follow-up questions').
Add a 'Use when...' clause with natural trigger terms like 'I don't understand', 'explain this', 'what does this mean', 'confused about', 'clarify'.
Remove the cryptic type signature and Greek terminology; describe the skill in plain language that matches how users naturally express confusion or need for clarification.
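Applied together, these suggestions might yield a description along the following lines. This is an illustrative sketch, not text from the skill itself, and it assumes the usual SKILL.md YAML frontmatter with name and description fields:

```yaml
# Hypothetical rewrite of the skill's frontmatter. The description text is
# invented for illustration, combining the concrete actions and natural
# trigger terms recommended above.
name: grasp
description: >-
  Explains complex AI-generated results step-by-step, provides analogies,
  and confirms user comprehension through follow-up questions. Use when the
  user says things like "I don't understand", "explain this", "what does
  this mean", "confused about", or "clarify".
```

A description in this shape gives the agent both a concrete "what" (explain, provide analogies, confirm comprehension) and an explicit "use when" clause built from phrases users actually type.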
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses abstract philosophical language ('Achieve certain comprehension', 'verified understanding', 'Katalepsis') without describing any concrete actions. There are no specific capabilities like 'explains concepts', 'breaks down complex topics', or 'provides examples'. | 1 / 3 |
| Completeness | The 'what' is extremely vague ('Verifies understanding') and there is no explicit 'when' clause or trigger guidance. The type signature format is cryptic and doesn't help Claude know when to select this skill. | 1 / 3 |
| Trigger Term Quality | No natural keywords users would say. Terms like 'ResultUngrasped', 'VerifiedUnderstanding', and 'Katalepsis(κατάληψις)' are technical jargon and Greek philosophical terms that users would never naturally use when asking for help understanding something. | 1 / 3 |
| Distinctiveness / Conflict Risk | The description is so abstract and vague that it could theoretically apply to almost any explanatory or educational skill. 'Verifies understanding when results remain ungrasped' could conflict with any teaching, tutoring, or explanation skill. | 1 / 3 |
| Total | | 4 / 12 (Passed) |
Implementation — 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill suffers from severe over-engineering with formal type theory notation, philosophical terminology, and excessive abstraction that obscures the core task: helping users understand AI work through Socratic questioning. The content contains useful concepts (gap taxonomy, verification phases, task tracking) but buries them under unnecessary formalism. The skill would benefit dramatically from a 90% reduction in length, removing the formal notation entirely, and splitting reference material into separate files.
Suggestions
Remove all formal notation (morphisms, type signatures, flow diagrams) and replace it with plain-English workflow steps; Claude doesn't need category theory to understand 'ask verification questions'.
Split the Gap Taxonomy, Category Taxonomy, and Rules sections into separate reference files (e.g., GAPS.md, CATEGORIES.md) and link to them from a concise overview.
Replace the pseudo-YAML gate interaction examples with concrete, copy-paste-ready question templates showing actual user-facing text.
Add explicit validation checkpoints with 'if validation fails' recovery steps, especially for the misconception-handling workflow.
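As a sketch of the second and fourth suggestions, the skill body could shrink to a short overview that links out to reference files and spells out recovery steps. The file names GAPS.md and CATEGORIES.md are the ones proposed above; the step wording is illustrative, not taken from the skill:

```markdown
## Workflow

1. Ask the user one verification question about the result.
2. If the answer reveals a misconception, restate the correct behavior in
   plain language, then re-ask a narrower question.
3. If verification fails twice on the same point, stop looping: summarize
   the remaining gap and offer a worked example instead.

See [GAPS.md](GAPS.md) for the gap taxonomy and
[CATEGORIES.md](CATEGORIES.md) for the category taxonomy.
```

Structured this way, the 'if X fails, do Y' recovery pattern is explicit in the workflow itself, and the taxonomies load only when needed.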
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with extensive formal notation, type theory, and philosophical framing that Claude doesn't need. The 'morphism' and 'flow' notation adds complexity without actionable value, and concepts like 'phantasia' and Greek terminology are unnecessary overhead. | 1 / 3 |
| Actionability | Contains concrete steps and phase descriptions, but the formal notation obscures the actual actions. The protocol steps are buried under layers of abstraction. Gate interaction examples use pseudo-YAML but lack executable code or copy-paste-ready commands. | 2 / 3 |
| Workflow Clarity | Multi-step process is defined with phases 0-3 and clear sequencing, but validation checkpoints are implicit rather than explicit. The loop termination conditions exist but are scattered across multiple sections. Missing explicit 'if X fails, do Y' recovery patterns. | 2 / 3 |
| Progressive Disclosure | Monolithic wall of text with no references to external files. All content is inline, including detailed taxonomies, rules, and examples that could be split into separate reference documents. The 500+ line document has no clear navigation structure for quick reference. | 1 / 3 |
| Total | | 6 / 12 (Passed) |
Validation — 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.