
testgen

Generate tests with expert routing, framework detection, and auto-TaskCreate. Triggers on: generate tests, write tests, testgen, create test file, add test coverage.

Overall score: 73

Quality: 62% (Does it follow best practices?)

Impact: 87% (1.07x, average score across 3 eval scenarios)

Security (by Snyk): Passed. No known issues.

Optimize this skill with Tessl:

npx tessl skill review --optimize ./data/skills-md/0xdarkmatter/claude-mods/testgen/SKILL.md

Quality

Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is functional with strong trigger terms and clear 'when' guidance via the explicit 'Triggers on:' clause. Its main weakness is that the capability descriptions ('expert routing', 'framework detection', 'auto-TaskCreate') are somewhat jargon-heavy and don't clearly convey concrete actions to the user. More specific action descriptions would improve clarity.

Suggestions

Replace jargon like 'expert routing' and 'auto-TaskCreate' with concrete action descriptions such as 'generates unit and integration tests, detects testing frameworks like Jest/pytest/JUnit, and creates test files matching project conventions'.

Dimension / Reasoning / Score

Specificity

Names the domain (test generation) and mentions some capabilities like 'expert routing', 'framework detection', and 'auto-TaskCreate', but these are somewhat jargon-heavy and don't clearly describe concrete user-facing actions like 'generates unit tests for functions' or 'creates integration test files'.

2 / 3

Completeness

Answers both 'what' (generate tests with expert routing, framework detection, auto-TaskCreate) and 'when' (explicit 'Triggers on:' clause with specific trigger phrases), satisfying the requirement for explicit trigger guidance.

3 / 3

Trigger Term Quality

Explicitly lists natural trigger terms users would say: 'generate tests', 'write tests', 'testgen', 'create test file', 'add test coverage'. These are realistic phrases a user would naturally use when requesting test generation.

3 / 3

Distinctiveness Conflict Risk

Clearly scoped to test generation with distinct trigger terms. Unlikely to conflict with other skills since the triggers are specific to testing workflows and the description focuses narrowly on test creation.

3 / 3

Total: 11 / 12 (Passed)

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill attempts to be comprehensive but suffers from significant verbosity, explaining many patterns and conventions Claude already knows (pytest fixtures, Go table-driven tests, Rust #[test] attributes). The core test generation step—the most important part—is ironically the least concrete, offering only category/depth tables rather than executable examples. The workflow lacks validation checkpoints for verifying generated tests compile and pass.

Suggestions

Cut the 'Expert Routing Details' section entirely or move it to a reference file—Claude already knows basic testing patterns for each language.

Add concrete, executable test generation examples in Step 5 showing actual generated test output for at least 2 languages, rather than just category/depth tables.
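To illustrate what that suggestion asks for, here is a rough sketch of what a Step 5 example of generated output could look like for a Python project. The `slugify` target function is hypothetical and is inlined so the example runs standalone; in real skill output it would be imported from the project under test.

```python
# Hypothetical generated test file: tests/test_slugify.py (pytest-style).
# `slugify` is an illustrative stand-in, not part of the testgen skill.

def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())


def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"


def test_slugify_collapses_extra_whitespace():
    assert slugify("  Generate   Tests  ") == "generate-tests"


def test_slugify_empty_input():
    # Edge case: whitespace-only input yields an empty slug.
    assert slugify("   ") == ""
```

Embedding one or two such blocks per supported language would make Step 5 concrete in the way the category/depth tables currently are not.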

Add a validation step between generation and integration: run the generated tests, check for compilation/import errors, and fix before suggesting next steps.
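As a rough sketch of that checkpoint (the helper names and failure markers are illustrative, not taken from the skill): run the generated tests, and treat import or syntax errors as problems the skill must fix itself rather than surface to the user as "failing tests".

```python
# Sketch of a validate-then-fix checkpoint for generated test files.
import subprocess
import sys


def run_generated_tests(cmd: list[str]) -> tuple[bool, str]:
    """Run the project's test command; return (passed, combined output)."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr


def needs_fix_before_handoff(passed: bool, output: str) -> bool:
    """Flag compile/import problems the skill should repair before
    suggesting next steps, as opposed to genuine assertion failures."""
    markers = ("ImportError", "ModuleNotFoundError", "SyntaxError")
    return not passed and any(m in output for m in markers)


ok, out = run_generated_tests(
    [sys.executable, "-c", "import this_module_does_not_exist"]
)
print(needs_fix_before_handoff(ok, out))  # prints True: broken import detected
```

A loop like this between generation and integration is what the review means by a validate-then-fix checkpoint.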

Consolidate the architecture diagram and execution steps to eliminate duplication—the same information is presented twice in different formats.

Dimension / Reasoning / Score

Conciseness

The skill is extremely verbose at over 250 lines. It explains routing tables, CLI tool fallbacks, and framework detection details that Claude can infer or discover at runtime. The architecture diagram, while visually appealing, duplicates information that reappears in the execution steps. Much of the expert routing details section lists basic language testing patterns Claude already knows (e.g., pytest fixtures, table-driven Go tests, #[test] attributes).

1 / 3

Actionability

The skill provides concrete bash commands for framework detection and file discovery, and references specific tools. However, the actual test generation step (Step 5) is surprisingly vague—it lists categories and depth levels in tables but provides no executable code examples of generated tests. The 'Route to Expert Agent' step references a 'Task tool' with subagent_type but doesn't show a concrete, copy-paste-ready invocation.

2 / 3

Workflow Clarity

The 6-step architecture is clearly sequenced and the flow diagram is helpful. However, there are no validation checkpoints—no step verifies that generated tests actually compile or pass before suggesting next steps. For a skill that generates code files, a validate-then-fix feedback loop is essential but missing, which should cap this at 2.

2 / 3

Progressive Disclosure

The skill references external files (frameworks.md, visual-testing.md) for detailed examples, which is good progressive disclosure. However, the main file itself is a monolithic wall containing extensive expert routing details, CLI tool tables, and test location conventions that could be split into reference files. The inline content is too heavy for an overview document.

2 / 3

Total: 7 / 12 (Passed)

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: NeverSight/skills_feed (Reviewed)

