
tlaplus-guided-code-repair

TLA+-specific instance of model-guided repair — reads a TLC error trace, identifies the enabling condition that should have been false, strengthens the corresponding action, and maps the fix to source code. Use when TLC reports an invariant violation or deadlock and you have the code-to-TLA+ mapping from extraction.
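The repair flow in the description can be sketched at a high level. A minimal Python sketch, assuming a TLC counterexample trace already parsed into a list of steps; `find_culprit`, the trace shape, and the toy invariant are illustrative assumptions, not part of the skill itself:

```python
def find_culprit(trace, invariant_holds):
    """Walk a parsed TLC counterexample trace and return the action taken
    at the first step whose resulting state violates the invariant; that
    action's enabling condition is the one that was too weak."""
    for i in range(1, len(trace)):
        if not invariant_holds(trace[i]["state"]):
            return trace[i]["action"], trace[i - 1]["state"]
    return None, None

# Toy trace: the Abort action fired from a state where it should
# have been disabled (the resource had already committed).
trace = [
    {"action": "Init",   "state": {"committed": set(),   "aborted": set()}},
    {"action": "Commit", "state": {"committed": {"r1"},  "aborted": set()}},
    {"action": "Abort",  "state": {"committed": {"r1"},  "aborted": {"r1"}}},
]
inv = lambda s: not (s["committed"] & s["aborted"])  # nothing both committed and aborted
action, pre_state = find_culprit(trace, inv)
print(action)  # → Abort
```

The pre-state of the culprit step tells you which conjunct was missing from the action's enabling condition: here, `Abort` should have required that the resource not already be committed.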

Install with Tessl CLI:

npx tessl i github:santosomar/general-secure-coding-agent-skills --skill tlaplus-guided-code-repair

Score: 90

Quality: 88% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run.)


Discovery: 85%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-crafted technical skill description that clearly articulates specific actions and explicit trigger conditions. The description excels at specificity and completeness with a clear 'Use when...' clause. The main weakness is that trigger terms are heavily technical jargon, which may be appropriate for this specialized domain but could miss some natural user phrasings.

Suggestions

Consider adding more natural trigger phrases users might say, such as 'model checking failed', 'TLA+ spec error', or 'fix specification bug' to improve discoverability.

Dimension scores:

- Specificity (3/3): Lists multiple specific, concrete actions: 'reads a TLC error trace', 'identifies the enabling condition that should have been false', 'strengthens the corresponding action', and 'maps the fix to source code'. These are precise, technical operations.

- Completeness (3/3): Clearly answers both what (reads TLC error trace, identifies enabling conditions, strengthens actions, maps to source code) and when ('Use when TLC reports an invariant violation or deadlock and you have the code-to-TLA+ mapping from extraction').

- Trigger Term Quality (2/3): Includes relevant technical terms such as 'TLC error trace', 'invariant violation', 'deadlock', and 'TLA+', but these are domain-specific jargon. Missing more natural variations users might say, like 'model checking error', 'spec violation', or 'fix my TLA+ spec'.

- Distinctiveness / Conflict Risk (3/3): Highly distinctive with a clear niche: TLA+/TLC-specific repair with explicit prerequisites (the code-to-TLA+ mapping from extraction). Very unlikely to conflict with other skills given the specialized domain and explicit dependency chain.

Total: 11 / 12

Passed

Implementation: 92%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a high-quality, specialized skill that provides excellent actionable guidance for TLC-specific model-guided repair. The worked example is particularly strong, showing the complete flow from TLC trace to code fix. The content respects Claude's intelligence while providing the domain-specific knowledge needed for this specialized task.

Dimension scores:

- Conciseness (3/3): The content is lean and efficient, assuming Claude understands TLA+ and TLC fundamentals. No unnecessary explanations of basic concepts; every section provides actionable, domain-specific guidance that Claude wouldn't inherently know.

- Actionability (3/3): Provides concrete TLA+ code examples, a complete worked example with before/after fixes for both the TLA+ and Python code, and a structured output format. The two-phase commit example is fully executable and demonstrates the complete repair workflow.
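The worked example itself is not reproduced on this page. As an illustration of the before/after shape such a repair produces, here is a hypothetical two-phase commit guard fix in Python (the `Coordinator` class and its methods are assumptions for illustration, not the skill's actual example):

```python
class Coordinator:
    def __init__(self, participants):
        self.participants = set(participants)
        self.prepared = set()

    # Before: commit could fire even if some participant never prepared,
    # mirroring a TLA+ action whose enabling condition was too weak.
    def commit_unsafe(self):
        return "COMMIT"

    # After: the strengthened guard maps the added TLA+ conjunct
    # (prepared = participants) back into the source code.
    def commit(self):
        if self.prepared != self.participants:
            raise RuntimeError("cannot commit: not all participants prepared")
        return "COMMIT"

c = Coordinator(["p1", "p2"])
c.prepared.add("p1")
try:
    c.commit()
except RuntimeError:
    print("guard rejected premature commit")
```

The point of the mapping step is exactly this correspondence: each conjunct added to the TLA+ action's enabling condition becomes a concrete precondition check in the code.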

- Workflow Clarity (3/3): Clear multi-step process: read trace → identify culprit → extract missing guard → fix TLA+ → re-check (with explicit pass/fail validation) → map to code. The 'Re-check' step explicitly validates the invariant and deadlock checks, providing the feedback loop needed for this type of operation.
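The read → identify → strengthen → re-check → map loop described above can be sketched as a driver function. A hedged sketch, assuming injected callbacks for the TLC run, the guard strengthening, and the code mapping (all names here are placeholders, not the skill's API):

```python
def repair_loop(spec, code, run_tlc, strengthen, map_to_code, max_iters=5):
    """Repeat: model-check the spec, strengthen the culprit action's
    enabling condition, and re-check until TLC finds no counterexample;
    then map the accumulated fix back to the source code."""
    for _ in range(max_iters):
        counterexample = run_tlc(spec)  # None means invariant + deadlock checks pass
        if counterexample is None:
            return spec, map_to_code(spec, code)
        spec = strengthen(spec, counterexample)
    raise RuntimeError("repair did not converge within max_iters")

# Toy stand-ins: the "spec" passes once the missing guard has been added.
run_tlc = lambda spec: None if "guard" in spec else {"culprit": "Commit"}
strengthen = lambda spec, cex: spec + " /\\ guard"
map_to_code = lambda spec, code: code + "  # guard enforced"
fixed_spec, fixed_code = repair_loop("Commit == ...", "do_commit()",
                                     run_tlc, strengthen, map_to_code)
print(fixed_spec)  # → Commit == ... /\ guard
```

The explicit re-check before mapping is what makes the loop safe: the code fix is only derived from a spec that TLC has actually accepted.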

- Progressive Disclosure (2/3): References the parent skill 'model-guided-code-repair' appropriately, but the content is somewhat monolithic. The table of repair patterns by violation type is helpful, but the worked-example section is lengthy and could be split into a separate file for complex scenarios.

Total: 11 / 12

Passed

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 checks passed

Validation for skill structure: no warnings or errors.

Reviewed

