Master Test-Driven Development with deterministic red-green-refactor workflows, test-first feature delivery, bug reproduction through failing tests, behavior-focused assertions, and refactoring safety; use when implementing new functions, changing APIs, fixing regressions, or restructuring code under test.
Evaluation — 86% (skill-structure validation: does it follow best practices?)
↑ 1.05x agent success when using this tile
Always run your new test and watch it fail before writing implementation code. A test that passes immediately either tests nothing meaningful or the feature already exists.
Incorrect (never seeing red):
// Write test
test('validates email format', () => {
  expect(isValidEmail('user@example.com')).toBe(true)
})
// Immediately write implementation without running test
function isValidEmail(email: string): boolean {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)
}
// Run tests - passes, but did the test ever fail?
// Could be testing the wrong function or have a typo

Correct (verify RED before GREEN):
// Step 1: Write test
test('validates email format', () => {
  expect(isValidEmail('user@example.com')).toBe(true)
})
// Step 2: Run test - see it fail
// Error: isValidEmail is not defined
// This confirms the test is wired up correctly
// Step 3: Write minimal stub
function isValidEmail(email: string): boolean {
  return false
}
// Step 4: Run test - see it fail with correct assertion
// Expected: true, Received: false
// This confirms the assertion is testing the right thing
// Step 5: Implement
function isValidEmail(email: string): boolean {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)
}
// Step 6: Run test - see it pass (GREEN)

Why this matters:
Seeing the test fail first proves the assertion is wired to the code under test and can actually fail; a test that passes on its first run may be exercising the wrong function, or nothing at all.
Reference: Test Driven Development - Martin Fowler
Install with Tessl CLI
npx tessl i pantheon-ai/test-driven-development@0.2.4

evals
scenario-1
scenario-2
scenario-3
scenario-4
scenario-5
references