`tessl i github:alirezarezvani/claude-skills --skill tdd-guide`

Comprehensive Test Driven Development guide for engineering subagents with multi-framework support, coverage analysis, and intelligent test generation
Activation
33%

The description identifies its domain (TDD) and lists high-level capabilities but lacks concrete actions and explicit trigger conditions. The absence of a 'Use when...' clause significantly weakens its utility for skill selection. The language is appropriately third-person but reads more like a feature list than actionable guidance for when to invoke this skill.
Suggestions
Add an explicit 'Use when...' clause with trigger scenarios like 'Use when writing unit tests, implementing TDD workflow, checking test coverage, or when user mentions pytest, jest, or testing frameworks'
Replace abstract terms like 'intelligent test generation' with concrete actions such as 'generates test cases from function signatures, creates mock objects, writes assertion statements'
Include common user phrases and framework names as trigger terms: 'unit tests', 'integration tests', 'pytest', 'jest', 'mocha', 'test suite', 'red-green-refactor'
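As an illustration, the description frontmatter could be rewritten along these lines (a hypothetical sketch combining the three suggestions above; the exact wording and field names are not taken from the skill itself):

```yaml
---
name: tdd-guide
description: >-
  Test Driven Development guide for engineering subagents. Generates test
  cases from function signatures, creates mock objects, and analyzes
  coverage across pytest, jest, and mocha. Use when writing unit or
  integration tests, applying the red-green-refactor workflow, checking
  test coverage, or when the user mentions a test suite or a testing
  framework.
---
```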
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (TDD) and mentions some capabilities like 'multi-framework support, coverage analysis, and intelligent test generation', but these are somewhat abstract rather than concrete actions. Doesn't specify what frameworks or what 'intelligent test generation' actually does. | 2 / 3 |
| Completeness | Describes what it does (TDD guide with various features) but completely lacks any 'Use when...' clause or explicit trigger guidance. There's no indication of when Claude should select this skill. | 1 / 3 |
| Trigger Term Quality | Includes 'Test Driven Development', 'TDD', 'coverage analysis', and 'test generation', which are relevant terms, but misses common variations users might say like 'write tests', 'unit tests', 'testing', 'test coverage', or specific framework names. | 2 / 3 |
| Distinctiveness / Conflict Risk | The TDD focus and 'engineering subagents' mention provide some distinction, but 'test generation' and 'coverage analysis' could overlap with general testing skills or code quality tools. The scope is somewhat specific but not clearly bounded. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation
35%

This skill is comprehensive in scope but severely over-documented, explaining concepts Claude inherently understands (TDD basics, what coverage means, framework purposes). It reads more like documentation for human developers than actionable instructions for Claude. The lack of executable code, concrete validation steps, and excessive verbosity significantly reduces its effectiveness as a skill file.
Suggestions
Cut 70%+ of content by removing explanations of basic concepts (what TDD is, what coverage means, framework descriptions) and keeping only project-specific configurations and commands
Add executable code examples for the listed scripts (test_generator.py, coverage_analyzer.py) showing actual invocation with real inputs and expected outputs
Replace abstract workflow descriptions with concrete step-by-step commands including validation checkpoints (e.g., 'Run pytest --cov and verify output contains X before proceeding')
Split detailed content (Best Practices, Framework Integration, Limitations) into separate reference files and keep SKILL.md as a concise overview with clear links
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with extensive explanations Claude already knows (what TDD is, what coverage means, framework descriptions). The 'Best Practices' and 'Limitations' sections explain basic testing concepts that don't need restating. Much content could be cut by 70%+ without losing actionable value. | 1 / 3 |
| Actionability | Usage examples show invocation patterns but lack executable code. The 'Scripts' section lists modules but provides no actual implementation or commands to run them. Workflow examples are abstract descriptions rather than concrete steps with real commands. | 2 / 3 |
| Workflow Clarity | Workflow sections exist but are high-level descriptions without validation checkpoints. The 'Example Workflows' section shows Input→Process→Output but lacks explicit validation steps or error recovery. No feedback loops for when test generation fails or coverage analysis produces unexpected results. | 2 / 3 |
| Progressive Disclosure | Content is organized into sections but everything is inline in one massive file. References to 'Related Skills' and script modules suggest external files but don't provide clear navigation. The document would benefit from splitting detailed framework guides and best practices into separate files. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
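One way to add the missing validation checkpoint, sketched in Python for illustration: the `--cov` and `--cov-fail-under` flags are standard pytest-cov options, while the helper function itself is hypothetical and not part of the skill under review.

```python
def coverage_checkpoint(package: str, threshold: int) -> str:
    """Build a pytest command that doubles as a validation checkpoint.

    pytest exits non-zero if any test fails or if measured coverage
    drops below `threshold` percent, so a workflow can gate the next
    step on this command succeeding.
    """
    return (
        f"pytest --cov={package} "
        f"--cov-report=term-missing --cov-fail-under={threshold}"
    )


print(coverage_checkpoint("src", 80))
```

Embedding a command like this after each workflow step ("run, verify exit code 0, then proceed") is the kind of explicit checkpoint the Workflow Clarity row finds missing.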
Validation
81%

| Criteria | Description | Result |
|---|---|---|
| description_trigger_hint | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| license_field | 'license' field is missing | Warning |
| Total | 13 / 16 (Passed) | |
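The two metadata warnings could be cleared with a frontmatter fragment along these lines (the field values are illustrative, not taken from the skill):

```yaml
metadata:
  version: "1.0.0"
license: MIT
```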
Reviewed