
directed-test-input-generator

Generate targeted test inputs to reach specific code paths and hard-to-reach behaviors in Python code. Use when: (1) Targeting uncovered branches or specific execution paths, (2) Need coverage-guided test generation, (3) Want to leverage LLM understanding of code semantics for meaningful test inputs, (4) Testing boundary conditions and edge cases systematically, (5) Combining symbolic reasoning with fuzzing. Provides path analysis, constraint solving, coverage-guided strategies, and LLM-driven semantic generation for comprehensive test input creation.
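To illustrate the coverage-guided strategy the description names, the core idea can be sketched with nothing but the standard library: trace which lines of a target function each input executes, and keep only inputs that reach something new. This is an illustrative sketch, not the skill's actual implementation; `target`, `covered_lines`, and `coverage_guided_inputs` are hypothetical names.

```python
import random
import sys

def target(x: int) -> str:
    # Hypothetical function under test with several branches to reach.
    if x < 0:
        return "negative"
    if x == 0:
        return "zero"
    if x > 1000:
        return "large"
    return "small"

def covered_lines(fn, arg):
    """Record which line numbers inside fn execute for one input."""
    lines = set()

    def tracer(frame, event, _arg):
        if event == "line" and frame.f_code is fn.__code__:
            lines.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        fn(arg)
    finally:
        sys.settrace(None)
    return lines

def coverage_guided_inputs(fn, seeds, rounds=300, seed=0):
    """Keep only inputs that execute lines no earlier input reached."""
    rng = random.Random(seed)
    corpus, seen = [], set()
    queue = list(seeds)
    for _ in range(rounds):
        if queue:
            x = queue.pop(0)
        else:
            # Mutate a known-interesting input to probe nearby paths.
            x = rng.choice(corpus) + rng.randint(-2000, 2000)
        new = covered_lines(fn, x) - seen
        if new:
            seen |= new
            corpus.append(x)
    return corpus

inputs = coverage_guided_inputs(target, seeds=[1])
```

A real directed generator would replace the random mutation with constraint solving or LLM-proposed inputs aimed at specific uncovered branches, but the keep-what-covers-something-new loop is the same.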

Overall score: 91 (1.28x)

Quality: 92%
Does it follow best practices?

Impact: 82% (1.28x)
Average score across 3 eval scenarios.

Security (by Snyk): Passed

No known issues.


Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-crafted skill description that excels across all dimensions. It provides specific concrete actions, includes natural trigger terms that users would actually say, explicitly lists when to use the skill with numbered scenarios, and carves out a distinct niche in test input generation that won't conflict with general testing or code analysis skills.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions: 'Generate targeted test inputs', 'path analysis', 'constraint solving', 'coverage-guided strategies', 'LLM-driven semantic generation'. Clearly describes concrete capabilities for Python code testing. | 3 / 3 |
| Completeness | Clearly answers both what (generate test inputs, path analysis, constraint solving, coverage-guided strategies) AND when, with explicit numbered triggers (targeting uncovered branches, coverage-guided generation, boundary conditions, etc.). | 3 / 3 |
| Trigger Term Quality | Includes natural keywords users would say: 'test inputs', 'code paths', 'uncovered branches', 'coverage', 'edge cases', 'boundary conditions', 'fuzzing', 'Python code'. Good coverage of testing terminology. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clear niche focused on test input generation for code coverage, distinct from general testing skills or code analysis. Specific triggers like 'coverage-guided', 'constraint solving', 'fuzzing' create a unique profile unlikely to conflict with other skills. | 3 / 3 |
| Total | | 12 / 12 |

Passed

Implementation: 85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured skill with excellent actionability and workflow clarity. The code examples are executable and comprehensive, covering multiple use cases. The main weakness is some verbosity in explanations and examples that could be tightened, particularly in the coverage-guided exploration section which walks through iterations in excessive detail.

Suggestions

- Condense the coverage-guided exploration example (Use Case 3): the iteration-by-iteration walkthrough is overly verbose, and a single before/after example would suffice.
- Remove explanatory phrases like 'Use LLM understanding of code semantics'; Claude already knows what LLMs do.
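For illustration, the condensed feedback loop that the first suggestion points toward could be as small as the sketch below. Both `run_with_coverage` and `propose_input` are hypothetical stand-ins for the skill's own helpers, not part of the reviewed skill.

```python
def feedback_loop(run_with_coverage, propose_input, budget=10):
    """Coverage-guided refinement in condensed form: propose an input,
    measure what it newly covers, and stop once coverage plateaus."""
    covered, kept = set(), []
    for _ in range(budget):
        candidate = propose_input(covered)
        newly = run_with_coverage(candidate) - covered
        if not newly:
            break  # no new coverage: the loop has converged
        covered |= newly
        kept.append(candidate)
    return kept, covered
```

Here `run_with_coverage` would return the set of coverage units an input exercises, and `propose_input` is where an LLM or constraint solver would aim at whatever is still uncovered.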

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is reasonably efficient but includes some unnecessary verbosity, such as explaining what LLM semantic understanding means and providing overly detailed iteration-by-iteration examples in Use Case 3 that could be condensed. | 2 / 3 |
| Actionability | Provides fully executable Python code examples throughout, with concrete imports, function calls, and expected outputs. The code is copy-paste ready and demonstrates real usage patterns. | 3 / 3 |
| Workflow Clarity | Multi-step processes are clearly sequenced with numbered steps, the hybrid approach shows explicit ordering, and the coverage-guided testing includes clear iteration patterns with feedback loops for refinement. | 3 / 3 |
| Progressive Disclosure | Well-structured with a clear overview, quick start, and progressive depth. References to detailed documentation (coverage_strategies.md, llm_patterns.md) are clearly signaled and one level deep. | 3 / 3 |
| Total | | 11 / 12 |

Passed

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 checks passed.

Validation for skill structure

No warnings or errors.

Repository: ArabelaTso/Skills-4-SE (Reviewed)
