Automatically repair code violations of temporal properties using model-checking counterexamples as guidance. Use when working with formal verification results, temporal logic property violations (LTL, CTL), model checking counterexamples, or when asked to repair property violations, fix counterexamples, repair temporal properties, fix model checking violations, or repair code based on counterexamples. Applicable to concurrent systems, state machines, synchronization issues, safety/liveness properties, and resource management violations.
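For illustration only (this example is not part of the skill itself), a minimal sketch of the kind of repair the skill performs: a safety property ("never release an unheld resource"), a counterexample trace from a hypothetical model checker, and a guarded repair validated by replaying that trace. All names here are invented for the sketch.

```python
# Illustrative sketch: replaying a model-checking counterexample against
# buggy and repaired code. Safety property: "never release an unheld resource".

def run(trace, guarded):
    """Replay a trace; return True iff the safety property is violated."""
    held = False
    for step in trace:
        if step == "acquire":
            held = True
        elif step == "release":
            if guarded and not held:
                continue          # repair: refuse the illegal release
            if not held:
                return True       # property violated (double release)
            held = False
    return False

# Counterexample trace as reported by a (hypothetical) model checker:
counterexample = ["acquire", "release", "release"]

assert run(counterexample, guarded=False) is True   # original code violates
assert run(counterexample, guarded=True) is False   # repaired code passes
```

The repair is deliberately minimal: it changes only the behavior exercised by the counterexample, which is the general shape of counterexample-guided repair.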
Install with Tessl CLI
npx tessl i github:ArabelaTso/Skills-4-SE --skill model-guided-code-repair82
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery
100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This skill description performs strongly across all dimensions. It provides specific capabilities, comprehensive trigger terms covering both technical terminology and user-friendly action phrases, explicit 'Use when...' guidance, and targets a highly specialized domain that minimizes conflict risk with other skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'repair code violations of temporal properties', 'using model-checking counterexamples as guidance'. Also specifies applicable domains: concurrent systems, state machines, synchronization issues, safety/liveness properties, resource management violations. | 3 / 3 |
| Completeness | Clearly answers both what ('Automatically repair code violations of temporal properties using model-checking counterexamples') AND when with explicit 'Use when...' clause covering multiple trigger scenarios including formal verification, temporal logic violations, and specific repair requests. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: 'formal verification results', 'temporal logic property violations', 'LTL, CTL', 'model checking counterexamples', 'repair property violations', 'fix counterexamples', 'repair temporal properties', 'fix model checking violations'. Includes both technical terms and action-oriented phrases. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche focused on formal verification and model checking, a specialized domain unlikely to conflict with general code repair or debugging skills. The specific mentions of LTL, CTL, counterexamples, and temporal properties create clear boundaries. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation
62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a well-structured workflow for model-guided code repair with clear sequential steps and validation checkpoints. However, it lacks concrete executable examples: no actual code showing before/after repairs, no sample counterexample traces, and no specific model checker commands. The content is somewhat verbose and could benefit from splitting detailed scenarios into separate reference files.
Suggestions
- Add a concrete worked example showing actual code with a violation, the counterexample trace, and the repaired code with specific line changes
- Include executable commands for at least one model checker (e.g., SPIN or CBMC) showing how to run verification before and after repair
- Move 'Common Scenarios' and 'Integration with Model Checkers' sections to separate reference files to reduce the main skill length
- Remove explanatory phrases like 'Read all inputs carefully' and 'This skill enables' that don't add actionable value
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill contains some unnecessary verbosity, particularly in the overview section which restates what the skill does. The workflow steps are reasonably efficient but include explanatory text that could be trimmed, such as 'Read all inputs carefully' which Claude would naturally do. | 2 / 3 |
| Actionability | The skill provides a clear process and output format, but lacks concrete executable examples. No actual code snippets showing repairs, no specific counterexample formats, and no executable commands for model checkers. The guidance is procedural but abstract. | 2 / 3 |
| Workflow Clarity | The 7-step workflow is clearly sequenced with explicit outputs at each stage. Validation is explicitly addressed in step 6 with two options (re-run model checker or generate tests). The process includes feedback loops ('Iterate if needed') and clear checkpoints. | 3 / 3 |
| Progressive Disclosure | The content is well-organized with clear sections, but it's a monolithic document with no references to external files for detailed scenarios or tool-specific guidance. The 'Common Scenarios' and 'Integration with Model Checkers' sections could be separate reference files. | 2 / 3 |
| Total | | 9 / 12 Passed |
Validation
100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.