Extract abstract mathematical models from imperative code (C, C++, Python, Java, etc.) suitable for formal reasoning in Coq. Use when the user asks to model imperative code in Coq, create Coq specifications from imperative programs, extract mathematical models for verification, or translate imperative algorithms to Coq for formal reasoning and proof.
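To make the target concrete, here is a hypothetical illustration (not taken from the skill itself) of the kind of translation the skill performs: an imperative C-style summation loop modeled as a structurally recursive Gallina function.

```coq
(* Imperative original (C):
     int sum(int n) { int s = 0; for (int i = 1; i <= n; i++) s += i; return s; }
   Abstract model as a pure, structurally recursive function: *)
Fixpoint sum (n : nat) : nat :=
  match n with
  | O => O
  | S m => n + sum m
  end.

(* The model is executable inside Coq, so it can be sanity-checked
   against the imperative code's observed behavior: *)
Example sum_5 : sum 5 = 15.
Proof. reflexivity. Qed.
```

Once the model type-checks, properties of the original algorithm (e.g., a closed form for the sum) can be stated and proved against `sum` rather than against the C code.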
Overall score: 88

- Quality: 82% (Does it follow best practices?)
- Impact: 99% (1.03x average score across 3 eval scenarios)
- Status: Passed, no known issues
Quality
Discovery — 100%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that clearly defines a specialized capability for extracting mathematical models from imperative code for Coq verification. It uses third person voice, provides specific actions, lists supported languages, and includes a comprehensive 'Use when...' clause with multiple natural trigger phrases. The description effectively carves out a distinct niche that would be easily distinguishable from other code analysis or verification skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists specific concrete actions: 'Extract abstract mathematical models from imperative code' with explicit language support '(C, C++, Python, Java, etc.)' and clear purpose 'suitable for formal reasoning in Coq'. | 3 / 3 |
| Completeness | Clearly answers both what ('Extract abstract mathematical models from imperative code suitable for formal reasoning in Coq') and when (explicit 'Use when...' clause with multiple trigger scenarios including modeling, specifications, verification, and translation). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: 'model imperative code in Coq', 'Coq specifications', 'mathematical models', 'verification', 'translate imperative algorithms', 'formal reasoning', 'proof'. Includes both technical terms and action-oriented phrases. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche combining imperative code analysis with Coq formal verification. The specific combination of 'imperative code' + 'Coq' + 'mathematical models' + 'formal reasoning' creates a clear, unique trigger profile unlikely to conflict with general code analysis or other verification skills. | 3 / 3 |
| Total | | 12 / 12 — Passed |
Implementation — 64%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides excellent actionable guidance with concrete, executable Coq code examples covering multiple imperative patterns. However, it's somewhat verbose with sections explaining concepts Claude already knows (key differences, limitations), and the workflow lacks explicit validation checkpoints and error recovery steps that would be important for this type of formal verification work.
Suggestions
- Add explicit validation checkpoints in the workflow: 'If coqc fails, check for: missing imports, type mismatches, termination issues' with specific remediation steps
- Move the 4 detailed examples to a separate EXAMPLES.md file and reference it, keeping only 1-2 examples inline
- Remove or significantly condense the 'Key Differences' and 'Limitations and Considerations' sections; Claude understands these concepts
- Move the Quick Reference table to the top of the document for faster access
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably efficient but includes some unnecessary explanatory content, like the 'Key Differences' and 'Limitations and Considerations' sections, that Claude would already understand. The overview section could be trimmed. | 2 / 3 |
| Actionability | Excellent concrete examples with fully executable Coq code for each pattern. The imperative-to-Coq translations are copy-paste ready with complete, working code snippets including imports and proper syntax. | 3 / 3 |
| Workflow Clarity | The 6-step workflow is clearly sequenced, but validation is weak: Step 5 mentions 'coqc model.v' and comparing outputs but lacks explicit feedback loops for when type checking fails or semantics don't match. No clear error recovery guidance. | 2 / 3 |
| Progressive Disclosure | References extraction_patterns.md appropriately, but the main document is quite long (~350 lines) with content that could be split out (e.g., the 4 detailed examples could be in a separate EXAMPLES.md). The quick reference table is good but buried mid-document. | 2 / 3 |
| Total | | 9 / 12 — Passed |
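As an example of the workflow concern noted above, a `coqc` failure the skill should anticipate is Coq rejecting a loop translation because it cannot infer a decreasing argument. A common remedy is the explicit-fuel pattern; the sketch below is hypothetical and not drawn from the skill's own examples:

```coq
Require Import Coq.Arith.PeanoNat.

(* Imperative pseudocode:
     while (n != 1) { if (n even) n := n / 2; else n := 3*n + 1; }
   Coq rejects a direct Fixpoint on n (no structurally decreasing
   argument), so the model takes explicit "fuel" and returns the step
   count, or None if the fuel runs out before the loop reaches 1: *)
Fixpoint collatz_steps (fuel n : nat) : option nat :=
  match fuel with
  | O => None  (* fuel exhausted: behavior of the loop unknown *)
  | S f =>
      if Nat.eqb n 1 then Some 0
      else if Nat.even n
           then option_map S (collatz_steps f (Nat.div2 n))
           else option_map S (collatz_steps f (3 * n + 1))
  end.

(* Sanity check: 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1 is 8 steps. *)
Example collatz_6 : collatz_steps 100 6 = Some 8.
Proof. reflexivity. Qed.
```

The fuel parameter makes the model total without changing the observable behavior on terminating runs, which is exactly the kind of remediation step an 'If coqc fails' checkpoint could name.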
Validation — 100%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure: no warnings or errors.