
assertion-synthesizer

Generate test assertions from existing code implementation. Use when the user has implementation code without tests or incomplete test coverage, and needs assertions synthesized by analyzing the code's behavior, inputs, outputs, and state changes. Supports Python (pytest/unittest), Java (JUnit/AssertJ), and JavaScript/TypeScript (Jest/Chai). Handles equality checks, collections, exceptions, and state verification.
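To illustrate the kinds of assertions the skill describes, here is a minimal pytest sketch. The `Cart` class and its tests are hypothetical examples of synthesizer-style output, not code taken from the skill itself:

```python
import pytest

# Hypothetical implementation under test
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, qty):
        if qty <= 0:
            raise ValueError("qty must be positive")
        self.items.append((name, qty))
        return len(self.items)

# Assertions a synthesizer could derive by analyzing behavior,
# inputs, outputs, and state changes:
def test_add_returns_new_count():
    cart = Cart()
    assert cart.add("apple", 2) == 1        # equality check on return value

def test_add_updates_state():
    cart = Cart()
    cart.add("apple", 2)
    assert ("apple", 2) in cart.items       # collection membership
    assert len(cart.items) == 1             # state verification

def test_add_rejects_non_positive_qty():
    cart = Cart()
    with pytest.raises(ValueError):         # exception check
        cart.add("apple", 0)
```

The three tests map one-to-one onto the assertion categories the description lists: equality, collections, exceptions, and state verification.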

Install with the Tessl CLI:

```
npx tessl i github:ArabelaTso/Skills-4-SE --skill assertion-synthesizer
```

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that clearly articulates what the skill does (generate test assertions by analyzing code behavior), when to use it (incomplete test coverage scenarios), and provides specific technical context (supported languages, frameworks, and assertion types). It uses proper third-person voice throughout and includes natural trigger terms users would actually say.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions: 'Generate test assertions', 'analyzing the code's behavior, inputs, outputs, and state changes', and specifies supported frameworks (pytest/unittest, JUnit/AssertJ, Jest/Chai) and assertion types (equality checks, collections, exceptions, state verification). | 3 / 3 |
| Completeness | Clearly answers both what ('Generate test assertions from existing code implementation') and when ('Use when the user has implementation code without tests or incomplete test coverage') with explicit trigger conditions. | 3 / 3 |
| Trigger Term Quality | Includes natural keywords users would say: 'test assertions', 'tests', 'test coverage', 'implementation code', plus specific framework names (pytest, unittest, JUnit, Jest, Chai) and language names (Python, Java, JavaScript, TypeScript) that users would naturally mention. | 3 / 3 |
| Distinctiveness / Conflict Risk | Has a clear niche focused specifically on generating test assertions from existing code, distinct from general testing skills or code generation skills. The specific focus on assertion synthesis from implementation analysis creates a unique trigger profile. | 3 / 3 |
| **Total** | | 12 / 12 |

Passed

Implementation

64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides strong actionable guidance with excellent multi-language code examples that are immediately executable. However, it's verbose with some unnecessary explanations of basic concepts, and the workflow lacks explicit validation steps for verifying generated assertions work correctly. The content would benefit from being split across multiple files for better progressive disclosure.

Suggestions

- Remove or condense the workflow analysis steps (1-2) that explain concepts Claude already understands, such as identifying function signatures and logic paths
- Add explicit validation steps to the workflow, such as 'Run generated tests to verify they pass' and 'Intentionally break code to confirm assertions catch failures'
- Split language-specific patterns into separate reference files (e.g., PYTHON.md, JAVA.md, JAVASCRIPT.md) and link from the main skill
- Condense the 'Best Practices' section, which contains generic testing advice Claude already knows
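The second suggestion above ("intentionally break code to confirm assertions catch failures") can be sketched as a minimal self-check. The `add` function and its deliberately broken variant below are hypothetical, used only to show the validation pattern:

```python
# Hypothetical implementation and a generated assertion for it
def add(a, b):
    return a + b

def test_add():
    assert add(2, 3) == 5

# Step 1: the generated test passes against the real implementation.
test_add()

# Step 2: a deliberately broken variant should make the same assertion
# fail, confirming it actually constrains behavior.
def broken_add(a, b):
    return a - b

try:
    assert broken_add(2, 3) == 5
except AssertionError:
    pass  # the assertion caught the intentional break, as desired
else:
    raise RuntimeError("assertion failed to catch the bug")
```

A check like this turns the vague 'Verify Coverage' step into something an agent can actually execute.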

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is moderately efficient but includes some unnecessary explanation. The workflow section explains concepts Claude already knows (like what function signatures are), and some sections like 'Best Practices' contain generic testing advice that doesn't add unique value. | 2 / 3 |
| Actionability | Excellent actionability with fully executable code examples across multiple languages (Python pytest/unittest, Java JUnit, JavaScript Jest). All examples are copy-paste ready with clear input/output demonstrations and concrete assertion patterns. | 3 / 3 |
| Workflow Clarity | The workflow steps are listed but lack validation checkpoints. For a skill involving test generation, there's no explicit verification step to confirm generated assertions actually pass or catch intended behaviors. The 'Verify Coverage' step is vague rather than actionable. | 2 / 3 |
| Progressive Disclosure | Content is reasonably organized with clear sections, but the skill is monolithic at ~300 lines. Language-specific patterns and synthesis strategies could be split into separate reference files. No external file references are provided for advanced topics. | 2 / 3 |
| **Total** | | 9 / 12 |

Passed

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.
