Universal coding patterns, constraints, TDD workflow, atomic todos
Complexity is the enemy. Every line of code is a liability. The goal is software simple enough that any engineer (or AI) can understand the entire system in one session.
These limits apply to every file created or modified: no more than 200 lines per file and no more than 20 lines per function. Check both before completing ANY file.

If a limit is exceeded during development, report the violation and split the file:

```text
⚠️ FILE SIZE VIOLATION DETECTED
[filename] has [X] lines (limit: 200)
Splitting into:
- [filename-a].ts - [responsibility A]
- [filename-b].ts - [responsibility B]
```

Never defer refactoring. Fix violations immediately, not "later".
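The violation check can be scripted; this is a minimal sketch (the `check_file_size` function name and the hard-coded 200-line limit are illustrative):

```shell
# check_file_size FILE: print a violation message and return 1 when FILE
# exceeds the 200-line limit; return 0 otherwise.
check_file_size() {
  limit=200
  lines=$(wc -l < "$1" | tr -d ' ')
  if [ "$lines" -gt "$limit" ]; then
    echo "FILE SIZE VIOLATION: $1 has $lines lines (limit: $limit)"
    return 1
  fi
}
```

Wire it into whatever pre-commit or CI script the project already uses, e.g. `check_file_size src/app.ts || exit 1`.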
Every project must have clear separation between code docs and project specs:
```text
project/
├── docs/                     # Code documentation
│   ├── architecture.md       # System design decisions
│   ├── api.md                # API reference (if applicable)
│   └── setup.md              # Development setup guide
├── _project_specs/           # Project specifications
│   ├── overview.md           # Project vision and goals
│   ├── features/             # Feature specifications
│   │   ├── feature-a.md
│   │   └── feature-b.md
│   ├── todos/                # Atomic todos tracking
│   │   ├── active.md         # Current sprint/focus
│   │   ├── backlog.md        # Future work
│   │   └── completed.md      # Done items (for reference)
│   ├── session/              # Session state (see session-management.md)
│   │   ├── current-state.md  # Live session state
│   │   ├── decisions.md      # Key decisions log
│   │   ├── code-landmarks.md # Important code locations
│   │   └── archive/          # Past session summaries
│   └── prompts/              # LLM prompt specifications (if AI-first)
└── CLAUDE.md                 # Claude instructions (references skills)
```

| Location | Content |
|---|---|
| docs/ | Technical documentation, API refs, setup guides |
| _project_specs/ | Business logic, features, requirements, todos |
| _project_specs/session/ | Session state, decisions, context for resumability |
| CLAUDE.md | Claude-specific instructions and skill references |
All work is tracked as atomic todos with validation and test criteria.
```markdown
## [TODO-001] Short descriptive title
**Status:** pending | in-progress | blocked | done
**Priority:** high | medium | low
**Estimate:** XS | S | M | L | XL

### Description
One paragraph describing what needs to be done.

### Acceptance Criteria
- [ ] Criterion 1 - specific, measurable
- [ ] Criterion 2 - specific, measurable

### Validation
How to verify this is complete:
- Manual: [steps to manually test]
- Automated: [test file/command that validates this]

### Test Cases
| Input | Expected Output | Notes |
|-------|-----------------|-------|
| ... | ... | ... |

### Dependencies
- Depends on: [TODO-xxx] (if any)
- Blocks: [TODO-yyy] (if any)

### TDD Execution Log
| Phase | Command | Result | Timestamp |
|-------|---------|--------|-----------|
| RED | `[test command]` | - | - |
| GREEN | `[test command]` | - | - |
| VALIDATE | `[lint && typecheck && test --coverage]` | - | - |
| COMPLETE | Moved to completed.md | - | - |
```

Every todo MUST follow this exact workflow. No exceptions.
```text
┌─────────────────────────────────────────────────────────────┐
│ 1. RED: Write Tests First │
│ └─ Create test file(s) based on Test Cases table │
│ └─ Tests should cover all acceptance criteria │
│ └─ Run tests → ALL MUST FAIL (proves tests are valid) │
├─────────────────────────────────────────────────────────────┤
│ 2. GREEN: Implement the Feature │
│ └─ Write minimum code to make tests pass │
│ └─ Follow simplicity rules (20 lines/function, etc.) │
│ └─ Run tests → ALL MUST PASS │
├─────────────────────────────────────────────────────────────┤
│ 3. VALIDATE: Quality Gates │
│ └─ Run linter (auto-fix if possible) │
│ └─ Run type checker (tsc/mypy/pyright) │
│ └─ Run full test suite with coverage │
│ └─ Verify coverage threshold (≥80%) │
├─────────────────────────────────────────────────────────────┤
│ 4. COMPLETE: Mark Done │
│ └─ Only after ALL validations pass │
│ └─ Move todo to completed.md │
│ └─ Checkpoint session state │
└─────────────────────────────────────────────────────────────┘
```

**Node.js/TypeScript:**
```bash
# 1. RED - Run tests (expect failures)
npm test -- --grep "todo-description"

# 2. GREEN - Run tests (expect pass)
npm test -- --grep "todo-description"

# 3. VALIDATE - Full quality check
npm run lint && npm run typecheck && npm test -- --coverage
```

**Python:**

```bash
# 1. RED - Run tests (expect failures)
pytest -k "todo_description" -v

# 2. GREEN - Run tests (expect pass)
pytest -k "todo_description" -v

# 3. VALIDATE - Full quality check
ruff check . && mypy . && pytest --cov --cov-fail-under=80
```

**React/Next.js:**

```bash
# 1. RED - Run tests (expect failures)
npm test -- --testPathPattern="ComponentName"

# 2. GREEN - Run tests (expect pass)
npm test -- --testPathPattern="ComponentName"

# 3. VALIDATE - Full quality check
npm run lint && npm run typecheck && npm test -- --coverage --watchAll=false
```

NEVER mark a todo as complete if:
- Any test fails
- The linter or type checker reports errors
- Coverage is below the 80% threshold
If blocked by failures:

```markdown
## [TODO-042] - BLOCKED
**Blocking Reason:** [Lint error in X / Test failure in Y / Coverage at 75%]
**Action Required:** [Specific fix needed]
```

When a user reports a bug, NEVER jump to fixing it directly.
```text
┌─────────────────────────────────────────────────────────────┐
│ 1. DIAGNOSE: Identify the Test Gap │
│ └─ Run existing tests - do any fail? │
│ └─ If tests pass but bug exists → tests are incomplete │
│ └─ Document: "Test gap: [what was missed]" │
├─────────────────────────────────────────────────────────────┤
│ 2. RED: Write a Failing Test for the Bug │
│ └─ Create test that reproduces the exact bug │
│ └─ Test should FAIL with current code │
│ └─ This proves the test catches the bug │
├─────────────────────────────────────────────────────────────┤
│ 3. GREEN: Fix the Bug │
│ └─ Write minimum code to make the test pass │
│ └─ Run test → must PASS now │
├─────────────────────────────────────────────────────────────┤
│ 4. VALIDATE: Full Quality Check │
│ └─ Run ALL tests (not just the new one) │
│ └─ Run linter and type checker │
│ └─ Verify no regression in coverage │
└─────────────────────────────────────────────────────────────┘
```

```markdown
## [BUG-001] Short description of the bug
**Status:** pending
**Priority:** high
**Reported:** [how user reported it / reproduction steps]

### Bug Description
What is happening vs. what should happen.

### Reproduction Steps
1. Step one
2. Step two
3. Observe: [incorrect behavior]
4. Expected: [correct behavior]

### Test Gap Analysis
- Existing test coverage: [list relevant test files]
- Gap identified: [what the tests missed]
- New test needed: [describe the test to add]

### Test Cases for Bug
| Input | Current (Bug) | Expected (Fixed) |
|-------|---------------|------------------|
| ... | ... | ... |

### TDD Execution Log
| Phase | Command | Result | Timestamp |
|-------|---------|--------|-----------|
| DIAGNOSE | `npm test` | All pass (gap!) | - |
| RED | `npm test -- --grep "bug description"` | 1 test failed ✓ | - |
| GREEN | `npm test -- --grep "bug description"` | 1 test passed ✓ | - |
| VALIDATE | `npm run lint && npm run typecheck && npm test -- --coverage` | Pass ✓ | - |
```

A complete worked example of an atomic todo:

```markdown
## [TODO-042] Add email validation to signup form
**Status:** pending
**Priority:** high
**Estimate:** S

### Description
Validate email format on the signup form before submission. Show inline error if invalid.

### Acceptance Criteria
- [ ] Email field shows error for invalid format
- [ ] Error clears when user fixes the email
- [ ] Form cannot submit with invalid email
- [ ] Valid emails pass through without error

### Validation
- Manual: Enter "notanemail" in signup form, verify error appears
- Automated: `npm test -- --grep "email validation"`

### Test Cases
| Input | Expected Output | Notes |
|-------|-----------------|-------|
| user@example.com | Valid, no error | Standard email |
| user@sub.example.com | Valid, no error | Subdomain |
| notanemail | Invalid, show error | No @ symbol |
| user@ | Invalid, show error | No domain |
| @example.com | Invalid, show error | No local part |

### Dependencies
- Depends on: [TODO-041] Signup form component
- Blocks: [TODO-045] Signup flow integration test

### TDD Execution Log
| Phase | Command | Result | Timestamp |
|-------|---------|--------|-----------|
| RED | `npm test -- --grep "email validation"` | 5 tests failed ✓ | - |
| GREEN | `npm test -- --grep "email validation"` | 5 tests passed ✓ | - |
| VALIDATE | `npm run lint && npm run typecheck && npm test -- --coverage` | Pass, 84% coverage ✓ | - |
| COMPLETE | Moved to completed.md | ✓ | - |
```

When a project needs API keys, always ask the user for their centralized access file first.
1. Ask: "Do you have an access keys file? (e.g., ~/Documents/Access.txt)"
2. Read and parse the file for known key patterns
3. Validate keys are working
4. Create project .env with found keys
5. Report missing keys and where to get them

| Service | Pattern | Env Variable |
|---|---|---|
| OpenAI | sk-proj-* | OPENAI_API_KEY |
| Claude | sk-ant-* | ANTHROPIC_API_KEY |
| Render | rnd_* | RENDER_API_KEY |
| Replicate | r8_* | REPLICATE_API_TOKEN |
| Reddit | client_id + secret | REDDIT_CLIENT_ID, REDDIT_CLIENT_SECRET |
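The parsing step can be sketched with `grep` (the `extract_keys` name and the exact regexes and character classes are illustrative; credentials.md has the authoritative logic):

```shell
# extract_keys FILE: scan FILE for the key patterns in the table above and
# print a .env line for each key found. Only the first match per service is used.
extract_keys() {
  _emit() { # _emit ENV_VAR REGEX FILE
    match=$(grep -oE "$2" "$3" | head -n 1)
    if [ -n "$match" ]; then echo "$1=$match"; fi
  }
  _emit OPENAI_API_KEY      'sk-proj-[A-Za-z0-9_-]+' "$1"
  _emit ANTHROPIC_API_KEY   'sk-ant-[A-Za-z0-9_-]+'  "$1"
  _emit RENDER_API_KEY      'rnd_[A-Za-z0-9]+'       "$1"
  _emit REPLICATE_API_TOKEN 'r8_[A-Za-z0-9]+'        "$1"
}
# Usage: extract_keys ~/Documents/Access.txt > .env
```
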
See credentials.md for full parsing logic and validation commands.
Every project must meet these security requirements. See security.md skill for detailed patterns.
- `.env` in `.gitignore` - Always, no exceptions
- Never put secrets in `VITE_*` or `NEXT_PUBLIC_*` variables - these prefixes are exposed to client-side code
- `.gitignore` includes secrets patterns
- `.env.example` lists all required vars (no values)
- `scripts/security-check.sh` runs as pre-commit validation

Every PR must pass these security checks.
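Two of the checks can be sketched as small shell functions (the function names and regexes are illustrative; security.md and credentials.md carry the full pattern list):

```shell
# env_is_ignored GITIGNORE: succeed only if a ".env" entry is present.
env_is_ignored() {
  grep -qE '(^|/)\.env$' "$1"
}

# has_leaked_key FILE: succeed if FILE contains a known API-key pattern.
has_leaked_key() {
  grep -qE '(sk-proj-|sk-ant-|rnd_|r8_)[A-Za-z0-9_-]+' "$1"
}
# Usage in scripts/security-check.sh:
#   env_is_ignored .gitignore || { echo "FAIL: .env not gitignored"; exit 1; }
#   has_leaked_key some_staged_file && { echo "FAIL: key committed"; exit 1; }
```
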
All projects must have pre-commit hooks that run the same gates as VALIDATE: lint, type check, and the full test suite.

This catches issues before they hit CI, saving time and keeping the main branch clean.
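A hook that runs these gates can be sketched as a tiny runner plus a gate list (the `run_gate` helper is illustrative; husky or the pre-commit framework work just as well):

```shell
# run_gate NAME COMMAND...: run one quality gate, report pass/fail,
# and return 1 on failure so the commit can be aborted.
run_gate() {
  name="$1"; shift
  if "$@"; then
    echo "pre-commit: $name passed"
  else
    echo "pre-commit: $name FAILED - aborting commit" >&2
    return 1
  fi
}
# In .git/hooks/pre-commit (script names as used elsewhere in this document):
#   run_gate lint      npm run lint                  || exit 1
#   run_gate typecheck npm run typecheck             || exit 1
#   run_gate tests     npm test -- --coverage        || exit 1
#   run_gate security  sh scripts/security-check.sh  || exit 1
```
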
Maintain context for resumability. See session-management.md for full details.
After completing any task, ask:

- Was a key decision made? → log it in `_project_specs/session/decisions.md`
- Did project state change? → update `_project_specs/session/current-state.md`
- Did todos change? → update `_project_specs/todos/active.md`
- Is the session ending? → archive `current-state.md` to `session/archive/` and write handoff notes in a fresh `current-state.md`

When implementing features (following TDD), remember:
TDD is non-negotiable. Tests must exist and fail before any implementation begins.
When you notice code violating these rules, stop and refactor before continuing.
The Stop hook in .claude/settings.json runs tests after each response. If tests fail, the failure output is fed back to Claude automatically. No manual intervention needed.
See the iterative-development skill for setup details.
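The registration in `.claude/settings.json` looks roughly like this; treat the field names as a sketch and verify them against Claude Code's hooks documentation:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "npm test -- --watchAll=false" }
        ]
      }
    ]
  }
}
```

With this in place, the test command runs automatically when Claude finishes responding, and failing output is surfaced back into the conversation.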
| Task Type | TDD Loop? |
|---|---|
| New feature | Yes - tests run after each response |
| Bug fix | Yes - write failing test first |
| Refactoring | Yes - existing tests catch regressions |
| Simple question/explanation | No - no code changes |
| One-line fix | No - trivial change |