This skill should be used when the user asks to "verify protocols", "check consistency before commit", "validate definitions", "run pre-commit checks", "verify soundness", or wants to ensure epistemic protocol quality. Invoke explicitly with /verify for pre-commit validation.
Verify epistemic protocol consistency before commit through static checks and expert review.
Surface potential issues in protocol definitions without blocking commits. Follow Anthropic philosophy: transparency over enforcement, user agency over automation.
| Principle | Implementation |
|---|---|
| Surface, don't enforce | Present findings; user decides action |
| Zero-context scripts | Static checks run without consuming context |
| Explicit control | User invokes /verify intentionally |
| Graduated severity | Critical / Concern / Note categorization |
Run scripts/static-checks.js against the project root. The script executes without being loaded into context.
```
node ${SKILL_DIR}/scripts/static-checks.js ${PROJECT_ROOT}
```

Output format:
```json
{
  "pass": [{ "check": "...", "file": "...", "message": "..." }],
  "fail": [{ "check": "...", "file": "...", "message": "..." }],
  "warn": [{ "check": "...", "file": "...", "message": "..." }]
}
```

Checks performed include:
- Directive verbs (call, not invoke/use, for tools)
- Version bump in plugin.json since the last change
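For illustration, a minimal sketch of how one such check might look, in the style of scripts/static-checks.js; the check name, regex, and reporting details here are assumptions, not the actual script:

```js
// Hypothetical static check: tools should be "called", not "invoked" or "used".
// Names and patterns are illustrative; see scripts/static-checks.js for the real checks.
const fs = require('fs');

function checkDirectiveVerbs(files) {
  const results = { pass: [], fail: [], warn: [] };
  for (const file of files) {
    const lines = fs.readFileSync(file, 'utf8').split('\n');
    lines.forEach((line, i) => {
      if (/\b(Invoke|Use)\s+AskUserQuestion\b/.test(line)) {
        results.warn.push({
          check: 'directive-verbs',
          file: `${file}:${i + 1}`,
          message: 'Prefer "call" over "invoke"/"use" for tool directives',
        });
      }
    });
  }
  if (results.warn.length === 0) {
    results.pass.push({
      check: 'directive-verbs',
      file: '-',
      message: 'All tool directives use "call"',
    });
  }
  return results;
}
```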
Spawn parallel Task subagents for LLM-based review. Each perspective analyzes independently.
Perspective 1: Type Design
Spawn Task subagent with prompt:
```
You are a Type Theory Expert analyzing protocol type design.
**Soundness check**:
- Type signatures: well-formed domain/codomain
- State machines: transition totality
- Flow consistency: types match phase transitions
- Tool grounding: TOOL GROUNDING ↔ PHASE TRANSITIONS consistency
**Tool Grounding check**:
- External operations (extern) have corresponding [Tool] notation in PHASE TRANSITIONS
- Internal operations marked with "no external tool"
- Escape behavior semantics match protocol context (fallback/Silence/cancel)
**Necessity check**:
- Cardinality constraints (|P| ≥ 2): prose sufficient given Esc interrupt?
- Refinement types: marginal benefit for LLM interpretation?
- Principle: no type checker exists; social enforcement via prose is primary
Files: prothesis/skills/frame/SKILL.md, syneidesis/skills/gap/SKILL.md, hermeneia/skills/clarify/SKILL.md
Focus on Definition sections (FLOW, TYPES, PHASES, TOOL GROUNDING). Output JSON with findings array.
```

Perspective 2: Specification Clarity
Spawn Task subagent with prompt:
```
You are a Specification Clarity Expert analyzing LLM interpretability.
**LLM behavior modification**:
- Activation clarity: trigger conditions unambiguous
- Priority conflicts: supersession domain overlap
- Instruction parsability: rules actionable without implicit reasoning
- Tool binding clarity: TOOL GROUNDING section readable and actionable
**Notation complexity**:
- Flow formulas: understandable at a glance?
- Standard notation preferred (|P| ≥ 2) over custom constructors (Set²⁺)
- Section structure: FLOW → TYPES → PHASES → TOOL GROUNDING → STATE sufficient?
- [Tool] notation in PHASE TRANSITIONS: adds clarity or noise?
Principle: specification is for human/LLM comprehension, not compiler verification.
Files: prothesis/skills/frame/SKILL.md, syneidesis/skills/gap/SKILL.md, hermeneia/skills/clarify/SKILL.md, CLAUDE.md
Focus on Definition, Mode Activation, Priority, Rules, TOOL GROUNDING sections. Output JSON with findings array.
```

Perspective 3: Claude Code Ecosystem
Spawn Task subagent with prompt:
```
You are a Claude Code Ecosystem Expert.
**Pattern validation**:
- AskUserQuestion mandates enforced (tool call, not text)
- Epistemic transitions correctly typed (deficit → resolved type signatures)
- User agency preserved (no automatic decisions)
**False positive filtering** (dismiss concerns from other perspectives):
- "Automatic intensity reduction" → not needed (AskUserQuestion provides control)
- "Automatic deactivation" → not needed (user can interrupt via Esc)
- "Topic boundary detection" → context-dependent (model judgment acceptable)
- "Type-level cardinality" → prose sufficient (low violation cost, Esc recovery)
Files: prothesis/skills/frame/SKILL.md, syneidesis/skills/gap/SKILL.md, hermeneia/skills/clarify/SKILL.md, CLAUDE.md
Focus on Mode Activation, Rules, UX patterns. Output JSON with findings and filtered arrays.
```

All three subagents run in parallel. Collect results before proceeding.
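One plausible shape for a subagent's output, combining the findings array (with the severity levels used during synthesis below) and the ecosystem expert's filtered array; the field names are assumptions:

```json
{
  "perspective": "Claude Code Ecosystem",
  "findings": [
    {
      "severity": "concern",
      "file": "syneidesis/skills/gap/SKILL.md",
      "message": "AskUserQuestion mandate phrased as prose, not enforced as a tool call"
    }
  ],
  "filtered": [
    {
      "source": "Type Design",
      "finding": "Type-level cardinality missing",
      "reason": "Prose sufficient: low violation cost, Esc recovery"
    }
  ]
}
```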
Combine static check results with expert review findings. Categorize by severity:
| Severity | Static Check Source | LLM Review Source |
|---|---|---|
| Critical | fail array | findings with severity: "critical" |
| Concern | warn array (structural) | findings with severity: "concern" |
| Note | warn array (stylistic) | findings with severity: "note" |
Identify convergence (all perspectives agree) and divergence (perspectives differ).
Apply the Claude Code Ecosystem expert's filtered array to dismiss false positives raised by other perspectives.
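A sketch of this synthesis step, assuming the static-check and subagent output shapes shown above (the structural flag on warnings and the message-based matching are illustrative):

```js
// Merge static checks with review findings, dismiss filtered false positives,
// and bucket everything by severity per the table above.
function synthesize(staticResults, reviews) {
  const filtered = reviews
    .filter((r) => r.perspective === 'Claude Code Ecosystem')
    .flatMap((r) => r.filtered ?? []);
  const dismissed = new Set(filtered.map((f) => f.finding));

  const findings = reviews
    .flatMap((r) => r.findings)
    .filter((f) => !dismissed.has(f.message)); // matching by text is illustrative only

  const bySeverity = (s) => findings.filter((f) => f.severity === s);
  return {
    critical: [...staticResults.fail, ...bySeverity('critical')],
    concern: [
      ...staticResults.warn.filter((w) => w.structural), // assumed flag
      ...bySeverity('concern'),
    ],
    note: [
      ...staticResults.warn.filter((w) => !w.structural),
      ...bySeverity('note'),
    ],
    filtered,
  };
}
```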
Call the AskUserQuestion tool to present findings. Format:
```
## Verification Results
### Critical (n issues)
- [issue 1]: [location] - [description]
- [issue 2]: [location] - [description]
### Concerns (n issues)
- [issue 1]: [location] - [description]
### Notes (n observations)
- [observation 1]
### Filtered (Claude Code context)
- [filtered issue]: [reason dismissed]
---
Select action:
```

Options to present:
| Decision | Action |
|---|---|
| Fix critical | Show specific fixes, await approval for each |
| Review concerns | Present concerns one by one with dismiss/address options |
| Proceed anyway | Log decision, suggest commit message annotation |
| Cancel | End verification, no changes |
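As a sketch, the decision table might map onto an AskUserQuestion call shaped like this (the exact input schema is an assumption about the tool):

```json
{
  "questions": [
    {
      "header": "Verify",
      "question": "Verification found 1 critical issue and 2 concerns. How to proceed?",
      "multiSelect": false,
      "options": [
        { "label": "Fix critical", "description": "Show specific fixes, await approval for each" },
        { "label": "Review concerns", "description": "Present concerns one by one with dismiss/address options" },
        { "label": "Proceed anyway", "description": "Log decision, suggest commit message annotation" },
        { "label": "Cancel", "description": "End verification, no changes" }
      ]
    }
  ]
}
```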
Commit message annotation (if proceeding with issues):
```
[verify: n critical, m concerns acknowledged]
```
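For example, as a trailer on an otherwise ordinary commit (the message itself is hypothetical):

```
git commit -m "Refine gap detection flow" \
  -m "[verify: 1 critical, 2 concerns acknowledged]"
```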
Consult references/criteria.md for detailed severity definitions and the decision matrix, and references/review-checklists.md for per-perspective review checklists.

Error handling:
| Error | Response |
|---|---|
| Script execution fails | Report error, offer manual check option |
| Subagent timeout | Report partial results, continue with available data |
| No issues found | Confirm clean state, proceed to commit |
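A sketch of the first row, wrapping the static-check script so a failure is reported rather than fatal (the execFile usage and timeout are assumptions about how the script is run):

```js
// Run static-checks.js defensively: report script failures and offer a
// manual-check fallback instead of aborting verification.
const { execFile } = require('child_process');

function runStaticChecks(skillDir, projectRoot, onResult) {
  execFile(
    'node',
    [`${skillDir}/scripts/static-checks.js`, projectRoot],
    { timeout: 30_000 },
    (err, stdout) => {
      if (err) {
        onResult({ error: String(err), fallback: 'manual-check' });
        return;
      }
      onResult(JSON.parse(stdout));
    }
  );
}
```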
Verification may trigger perspective selection if the findings call for a decision about analysis approach.
Critical findings surface as high-stakes gaps. Syneidesis gap detection may augment verification.
Most common pattern: invoke /verify before the /commit command.
Example output, clean state:

```
## Verification Results

All checks passed.
- Static checks: 12 pass, 0 fail, 0 warn
- Type Design review: No issues
- Specification Clarity review: No issues
- Claude Code Ecosystem review: No issues

Ready to commit.
```

Example output, issues found:

```
## Verification Results
### Critical (1 issue)
- State machine totality: prothesis.md:10 - Undefined transition when |perspectives(C)| < 2
### Concerns (2 issues)
- Categorical terminology: prothesis.md:35 - limit/colimit may not match intended semantics
- Directive verb: reflexion/SKILL.md:32 - "Invoke AskUserQuestion" should be "call"
### Notes (1 observation)
- Version: prothesis plugin.json version not bumped since last change
---
How to proceed?
```