Generate tests with expert routing, framework detection, and auto-TaskCreate. Triggers on: generate tests, write tests, testgen, create test file, add test coverage.
Generate comprehensive tests with automatic framework detection, expert agent routing, and project convention matching.
testgen <target> [--type] [--focus] [--depth]
│
├─→ Step 1: Analyze Target
│ ├─ File exists? → Read and parse
│ ├─ Function specified? → Extract signature
│ ├─ Directory? → List source files
│ └─ Find existing tests (avoid duplicates)
│
├─→ Step 2: Detect Framework (parallel)
│ ├─ package.json → jest/vitest/mocha/cypress/playwright
│ ├─ pyproject.toml → pytest/unittest
│ ├─ go.mod → go test
│ ├─ Cargo.toml → cargo test
│ ├─ composer.json → phpunit/pest
│ └─ Check existing test patterns
│
├─→ Step 3: Load Project Standards
│ ├─ AGENTS.md, CLAUDE.md conventions
│ ├─ Existing test file structure
│ └─ Naming conventions (*.test.ts vs *.spec.ts)
│
├─→ Step 4: Route to Expert Agent
│ ├─ .ts → typescript-expert
│ ├─ .tsx/.jsx → react-expert
│ ├─ .vue → vue-expert
│ ├─ .py → python-expert
│ ├─ .go → go-expert
│ ├─ .rs → rust-expert
│ ├─ .php → laravel-expert
│ ├─ E2E/Cypress → cypress-expert
│ ├─ Playwright → typescript-expert
│ ├─ --visual → Chrome DevTools MCP
│ └─ Multi-file → parallel expert dispatch
│
├─→ Step 5: Generate Tests
│ ├─ Create test file in correct location
│ ├─ Follow detected conventions
│ └─ Include: happy path, edge cases, error handling
│
└─→ Step 6: Integration
    ├─ Auto-create task (TaskCreate) for verification
    └─ Suggest: run tests, /review, /save

# Check if target exists
test -f "$TARGET" && echo "FILE" || test -d "$TARGET" && echo "DIRECTORY"
# For function-specific: extract signature
command -v ast-grep >/dev/null 2>&1 && ast-grep -p "function $FUNCTION_NAME" "$FILE"
# Fallback to ripgrep
rg "(?:function|const|def|public|private)\s+$FUNCTION_NAME" "$FILE" -A 10

Check for existing tests:

fd -e test.ts -e spec.ts -e test.js -e spec.js | rg "$BASENAME"
fd "test_*.py" | rg "$BASENAME"

JavaScript/TypeScript:
cat package.json 2>/dev/null | jq -r '.devDependencies // {} | keys[]' | grep -E 'jest|vitest|mocha|cypress|playwright|@testing-library'

Python:

grep -E "pytest|unittest|nose" pyproject.toml setup.py requirements*.txt 2>/dev/null

Go:

test -f go.mod && echo "go test available"

Rust:

test -f Cargo.toml && echo "cargo test available"

PHP:

cat composer.json 2>/dev/null | jq -r '.["require-dev"] // {} | keys[]' | grep -E 'phpunit|pest|codeception'

# Claude Code conventions
cat AGENTS.md 2>/dev/null | head -50
cat CLAUDE.md 2>/dev/null | head -50
# Test config files
cat jest.config.* vitest.config.* pytest.ini pyproject.toml 2>/dev/null | head -30

Test location conventions:
# JavaScript
src/utils/helper.ts → src/utils/__tests__/helper.test.ts # __tests__ folder
→ src/utils/helper.test.ts # co-located
→ tests/utils/helper.test.ts # separate tests/
# Python
app/utils/helper.py → tests/test_helper.py # tests/ folder
→ tests/utils/test_helper.py # mirror structure
# Go
pkg/auth/token.go → pkg/auth/token_test.go # co-located (required)
# Rust
src/auth.rs → src/auth.rs (mod tests { ... }) # inline tests
            → tests/auth_test.rs                   # integration tests

| File Pattern | Primary Expert | Secondary |
|---|---|---|
| *.ts | typescript-expert | - |
| *.tsx, *.jsx | react-expert | typescript-expert |
| *.vue | vue-expert | typescript-expert |
| *.py | python-expert | - |
| *.go | go-expert | - |
| *.rs | rust-expert | - |
| *.php | laravel-expert | - |
| *.cy.ts, cypress/* | cypress-expert | - |
| *.spec.ts (Playwright) | typescript-expert | - |
| playwright/*, e2e/* | typescript-expert | - |
| *.sh, *.bash | bash-expert | - |
| (--visual flag) | Chrome DevTools MCP | typescript-expert |
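The routing table above can be sketched as a small shell helper. This is an illustrative sketch, not the skill's actual implementation: the `route_expert` function name is hypothetical, and only the primary-expert column is modeled.

```shell
# Sketch: map a target file to its primary expert agent.
# The route_expert helper is illustrative; expert names follow the table above.
route_expert() {
  case "$1" in
    *.cy.ts|cypress/*) echo "cypress-expert" ;;
    *.tsx|*.jsx)       echo "react-expert" ;;
    *.vue)             echo "vue-expert" ;;
    *.ts)              echo "typescript-expert" ;;
    *.py)              echo "python-expert" ;;
    *.go)              echo "go-expert" ;;
    *.rs)              echo "rust-expert" ;;
    *.php)             echo "laravel-expert" ;;
    *.sh|*.bash)       echo "bash-expert" ;;
    *)                 echo "typescript-expert" ;;  # Playwright/e2e default
  esac
}

route_expert "src/components/Button.tsx"  # → react-expert
```

Pattern order matters: the more specific `*.cy.ts` must be checked before the generic `*.ts`.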
Invoke via Task tool:
Task tool with subagent_type: "[detected]-expert"
Prompt includes:
- Source file content
- Function signatures to test
- Detected framework and conventions
- Requested test type and focus

Test categories based on --focus:
| Focus | What to Generate |
|---|---|
| happy | Normal input, expected output |
| edge | Boundary values, empty inputs, nulls |
| error | Invalid inputs, exceptions, error handling |
| all | All of the above (default) |
Depth levels:
| Depth | Coverage |
|---|---|
| quick | Happy path only, 1-2 tests per function |
| normal | Happy + common edge cases (default) |
| thorough | Comprehensive: all paths, mocking, async |
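A minimal sketch of how the defaults above could be applied when parsing the command's options. The `parse_args` function and variable names are illustrative, not the skill's actual implementation; only the documented defaults (--focus all, --depth normal) come from the tables.

```shell
# Illustrative option parsing: --focus defaults to "all",
# --depth defaults to "normal". parse_args is a hypothetical helper.
parse_args() {
  FOCUS="all"; DEPTH="normal"; TARGET=""
  while [ $# -gt 0 ]; do
    case "$1" in
      --focus) FOCUS="$2"; shift 2 ;;
      --depth) DEPTH="$2"; shift 2 ;;
      *)       TARGET="$1"; shift ;;
    esac
  done
}

parse_args src/auth.ts --focus edge --depth thorough
echo "target=$TARGET focus=$FOCUS depth=$DEPTH"
# → target=src/auth.ts focus=edge depth=thorough
```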
Auto-create task:
TaskCreate:
subject: "Run generated tests for src/auth.ts"
description: "Verify generated tests pass and review edge cases"
activeForm: "Running generated tests for auth.ts"

Suggest next steps:
Tests generated: src/auth.test.ts
Next steps:
1. Run tests: npm test src/auth.test.ts
2. Review and refine edge cases
3. Use /save to persist tasks across sessions

Go specifics:
- Table-driven tests ([]struct pattern)
- testing.T and subtests (t.Run)
- Benchmarks (testing.B)
- Parallel tests (t.Parallel())

Rust specifics:
- #[test] attribute functions
- #[cfg(test)] module organization
- #[should_panic] for error testing

| Tool | Purpose | Fallback |
|---|---|---|
| jq | Parse package.json | Read tool |
| rg | Find existing tests | Grep tool |
| ast-grep | Parse function signatures | ripgrep patterns |
| fd | Find test files | Glob tool |
| Chrome DevTools MCP | Visual testing (--visual) | Playwright/Cypress |
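The Fallback column above follows one pattern: check for the fast CLI tool, then fall back to a slower equivalent. A small sketch of that pattern; the `have` and `find_tests` helper names are illustrative.

```shell
# Sketch: prefer a fast CLI tool when present, otherwise fall back.
# have() and find_tests() are illustrative helper names.
have() { command -v "$1" >/dev/null 2>&1; }

find_tests() {
  if have fd; then
    fd -e test.ts -e spec.ts
  else
    # Fallback: plain find with the same extensions
    find . -name '*.test.ts' -o -name '*.spec.ts'
  fi
}
```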
Graceful degradation:
command -v jq >/dev/null 2>&1 && cat package.json | jq '.devDependencies' || cat package.json

For framework-specific code examples, see:

- frameworks.md - Complete test examples for all supported languages
- visual-testing.md - Chrome DevTools integration for --visual flag

| Command | Relationship |
|---|---|
| /review | Review generated tests before committing |
| /explain | Understand complex code before testing |
| /save | Track test coverage goals |