Comprehensive Test Driven Development guide for engineering subagents with multi-framework support, coverage analysis, and intelligent test generation
Install with Tessl CLI
npx tessl i github:alirezarezvani/claude-code-skill-factory --skill tdd-guide
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery — 32%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies its domain (TDD for engineering subagents) and lists high-level capabilities, but lacks concrete action verbs and completely omits trigger guidance. The absence of a 'Use when...' clause makes it difficult for Claude to know when to select this skill, and the capabilities listed are more like feature categories than specific actions.
Suggestions
Add an explicit 'Use when...' clause with trigger scenarios like 'Use when writing unit tests, implementing TDD workflow, checking test coverage, or when user mentions testing, mocks, or test frameworks'
Replace abstract terms with concrete actions: instead of 'intelligent test generation', use 'generates unit tests, creates test fixtures, writes mock implementations'
Include specific framework names users might mention (Jest, pytest, JUnit, etc.) to improve trigger term coverage
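A revised description incorporating these suggestions might look like the following. This is an illustrative sketch only: the field names follow the common SKILL.md frontmatter convention, and the exact wording and framework list are assumptions, not the skill's actual metadata.

```yaml
---
name: tdd-guide
description: >
  Guides Test Driven Development for engineering subagents: generates unit
  tests, creates test fixtures, writes mock implementations, and runs
  coverage analysis across frameworks such as Jest, pytest, and JUnit.
  Use when writing unit tests, implementing a TDD workflow, checking test
  coverage, or when the user mentions testing, mocks, assertions, or a
  specific test framework.
---
```

Note how the rewrite leads with concrete action verbs, names specific frameworks as trigger terms, and closes with an explicit 'Use when...' clause.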
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (TDD, testing) and mentions some capabilities like 'multi-framework support, coverage analysis, and intelligent test generation', but these are somewhat abstract rather than concrete actions. Missing specific verbs like 'generates unit tests', 'runs coverage reports', 'creates test fixtures'. | 2 / 3 |
| Completeness | Describes what the skill does (a TDD guide with various features) but lacks a 'Use when...' clause or any explicit trigger guidance. No indication of when Claude should select this skill over others. | 1 / 3 |
| Trigger Term Quality | Includes relevant terms like 'Test Driven Development', 'TDD', 'coverage analysis', and 'test generation', but missing common user variations like 'unit tests', 'write tests', 'testing', 'mocks', 'assertions', or specific framework names users might mention. | 2 / 3 |
| Distinctiveness / Conflict Risk | The TDD focus and 'engineering subagents' context provide some distinction, but 'test generation' and 'coverage analysis' could overlap with general coding skills or other testing-related skills. Not clearly scoped to specific languages or scenarios. | 2 / 3 |
| **Total** | | **7 / 12 — Passed** |
Implementation — 35%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is comprehensive in scope but severely over-documented, explaining concepts Claude inherently understands (TDD basics, what coverage means, framework purposes). It provides structure and organization but lacks concrete, executable examples: the usage patterns show invocation syntax without actual working code. The skill would be significantly more effective at 20% of its current length with real code examples.
Suggestions
Cut 70%+ of content by removing explanations of basic concepts (what TDD is, what coverage means, framework descriptions) and keeping only project-specific configurations and patterns
Replace abstract workflow descriptions with concrete, executable code examples showing actual script invocations and expected outputs
Add validation checkpoints to workflows (e.g., 'If coverage report parsing fails, check format with: python format_detector.py report.lcov')
Split detailed framework-specific guides and best practices into separate referenced files, keeping SKILL.md as a concise overview
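The `format_detector.py` checkpoint mentioned above is a suggested, hypothetical helper, not part of the skill today. A minimal sketch of what such a checkpoint could look like, assuming the two most common coverage formats (LCOV and Cobertura XML):

```python
import sys


def detect_format(text: str) -> str:
    """Guess a coverage report's format from its content.

    Returns 'lcov', 'cobertura', or 'unknown'. The heuristics are
    deliberately simple; real-world reports can be messier.
    """
    stripped = text.lstrip()
    # LCOV records use line tags like TN:, SF:, DA: and end with end_of_record.
    if stripped.startswith(("TN:", "SF:")) or "end_of_record" in text:
        return "lcov"
    # Cobertura reports are XML documents with a top-level <coverage> element.
    if stripped.startswith("<?xml") and "<coverage" in text:
        return "cobertura"
    return "unknown"


if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1], encoding="utf-8") as f:
        print(detect_format(f.read()))
```

A workflow step could then gate on this, e.g. "if `python format_detector.py report.lcov` prints `unknown`, stop and ask the user for the report format" — turning an abstract workflow description into a concrete validation checkpoint.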
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose, with extensive explanations of things Claude already knows (what TDD is, what coverage means, framework descriptions). The 'Best Practices' and 'Limitations' sections restate basic testing concepts. Much content could be cut by 70%+ without losing actionable value. | 1 / 3 |
| Actionability | Usage examples show invocation patterns but lack executable code. The 'Scripts' section lists modules without showing how to actually use them. Workflow examples are abstract descriptions rather than concrete commands or code snippets. | 2 / 3 |
| Workflow Clarity | Workflow sections exist but are high-level descriptions without validation checkpoints. The 'Example Workflows' show Input → Process → Output but lack explicit validation steps or error recovery. No feedback loops for when test generation fails or coverage analysis produces unexpected results. | 2 / 3 |
| Progressive Disclosure | Content is organized into sections, but everything is inline in one massive file. References to 'Related Skills' and script modules exist, but there are no actual links to separate documentation. The document would benefit from splitting detailed framework guides and best practices into separate files. | 2 / 3 |
| **Total** | | **7 / 12 — Passed** |
Validation — 100%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.