This skill should be used when the user asks to "verify protocols", "check consistency before commit", "validate definitions", "run pre-commit checks", "verify soundness", or wants to ensure epistemic protocol quality. Invoke explicitly with /verify for pre-commit validation.
Install with the Tessl CLI:

```shell
npx tessl i github:jongwony/epistemic-protocols --skill verify69
```
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the Tessl CLI to improve its score:

```shell
npx tessl skill review --optimize ./path/to/skill
```
Discovery — 40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description focuses heavily on when to use the skill (trigger phrases) but fails to explain what the skill actually does. The domain-specific jargon ('epistemic protocol quality', 'soundness') is never defined, leaving Claude unable to understand the skill's concrete capabilities. The description inverts the typical problem: it has triggers but lacks substance.
Suggestions

- Add concrete action descriptions explaining what 'verify protocols' actually does (e.g., 'Validates YAML frontmatter structure, checks cross-references between protocol files, ensures required fields are present').
- Define what 'epistemic protocols' are in plain language so Claude understands the domain this skill applies to.
- Include expected outputs or outcomes (e.g., 'Reports validation errors, missing references, or confirms protocols are ready for commit').
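Putting these suggestions together, a sharpened SKILL.md frontmatter might read like the following sketch. The wording and field values are illustrative, not the skill's actual metadata:

```yaml
---
name: verify
description: >
  Validates epistemic protocol files (structured documents recording claims,
  evidence, and definitions) before commit. Checks YAML frontmatter structure,
  verifies cross-references between protocol files, and ensures required
  fields are present. Reports validation errors and missing references, or
  confirms the protocols are ready to commit. Use when the user asks to
  "verify protocols", "run pre-commit checks", "validate definitions", or
  invokes /verify.
---
```

Note how the rewrite keeps the original trigger phrases but leads with concrete actions and expected outputs, addressing all three suggestions above.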
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague, abstract language like 'verify protocols', 'check consistency', 'validate definitions', and 'ensure epistemic protocol quality' without explaining what concrete actions are performed or what 'epistemic protocols' actually are. | 1 / 3 |
| Completeness | The description has a 'Use when...' clause with trigger phrases, but the 'what does this do' portion is extremely weak: it never explains what verification actually entails, what protocols are being checked, or what outputs to expect. | 2 / 3 |
| Trigger Term Quality | Includes some trigger terms like 'verify protocols', 'pre-commit checks', 'validate definitions', and the explicit '/verify' command, but these are domain-specific jargon that typical users may not naturally use. Missing common variations or plain-language alternatives. | 2 / 3 |
| Distinctiveness / Conflict Risk | The term 'epistemic protocol' provides some distinctiveness, and the '/verify' command is explicit, but phrases like 'pre-commit checks' and 'validate definitions' could overlap with general code validation or linting skills. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation — 85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-crafted skill with excellent workflow clarity and actionability. The five-phase verification process is clearly sequenced with explicit validation steps and user decision handling. The main weakness is that the inline expert review prompts are verbose and could be moved to reference files, though keeping them inline does provide immediate actionability.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably efficient but includes some verbose elements like the full subagent prompt templates inline (which could be referenced) and explanatory tables that could be tightened. The principle tables and severity references add value but the expert review prompts are lengthy. | 2 / 3 |
| Actionability | Provides fully executable bash commands, specific JSON output formats, concrete prompt templates for subagents, and clear decision matrices. The workflow is copy-paste ready with specific file paths and tool invocations. | 3 / 3 |
| Workflow Clarity | Excellent five-phase workflow with explicit sequencing (Static Checks → Expert Review → Synthesize → Surface → Handle Decision). Includes validation checkpoints, an error handling table, and clear feedback loops for user decisions. The parallel subagent execution is well-documented. | 3 / 3 |
| Progressive Disclosure | Well-structured with clear overview sections and appropriate references to external files (references/criteria.md, references/review-checklists.md). Content is organized into logical phases with tables for quick reference. The skill appropriately keeps the core workflow inline while deferring detailed checklists. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| Total | | 10 / 11 (Passed) |
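The `allowed_tools_field` warning typically clears once `allowed-tools` lists only recognized tool names. A minimal frontmatter fragment might look like the sketch below; the specific tools this skill actually needs are an assumption, not taken from its source:

```yaml
# Illustrative only: restrict the skill to standard, recognized tools.
allowed-tools:
  - Read
  - Grep
  - Bash
```

If the skill genuinely needs a non-standard tool, the warning may be acceptable, but the unusual name should then be documented in the skill itself.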
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.