tessl-labs/skill-discovery

Rules, hooks, and skills that help agents find practice guidance before coding. Gap analysis across practice and technology domains, registry search to fill gaps, adaptive re-checking on a backoff schedule, and verification strategy (tests, type checking, linting) as continuous feedback loops.


skills/verification-strategy/SKILL.md

name: verification-strategy
description: Set up self-verification before building features — test runner, type checking, linting, and feedback loops that let the agent confirm its own work. Use when starting a new project, setting up a codebase for the first time, or when the user asks "how will you test this", "set up testing", "make sure this works", or "verify your work". Run this BEFORE writing feature code.
metadata: {"author":"tessl-labs","version":"0.1.0","tags":["testing","verification","feedback","quality","self-check"]}

Verification Strategy

Set up feedback mechanisms BEFORE building features. Every piece of code you write should be verifiable.

Why This Matters for Agents

Without verification, an agent builds code and hopes it works. With verification, the agent builds code, runs a check, sees whether it worked, and fixes problems immediately. This is the difference between shipping bugs and shipping working software.

Verification is not an afterthought. Set it up first, use it continuously.

Step 1 — Set Up Verification Infrastructure

Before writing any feature code, configure these feedback mechanisms. Each one catches a different class of bug:

Type checking (catches: wrong types, missing fields, undefined values)

# TypeScript
npx tsc --noEmit                    # Check types without building

# Python
pip install mypy
mypy app/                            # Check type annotations

Add to package.json / pyproject.toml:

{ "scripts": { "typecheck": "tsc --noEmit" } }
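To illustrate the bug class a type checker catches, here is a minimal Python sketch; the function and calls are hypothetical examples, not part of this skill:

```python
# Hypothetical example of the bug class a type checker catches.
def order_total(prices: list[float]) -> float:
    """Sum item prices for an order."""
    return sum(prices)

order_total([3.50, 2.25])        # OK
# order_total(["3.50", "2.25"])  # mypy error: list[str] is not list[float]
```

At runtime the bad call would only fail once execution reaches `sum()`; `tsc --noEmit` and `mypy` flag it before the code ever runs.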

Linting (catches: convention violations, unused imports, potential bugs)

# TypeScript
npm install -D eslint
npx eslint --init

# Python
pip install ruff
ruff check app/

Test runner (catches: logic errors, broken endpoints, regressions)

# TypeScript
npm install -D vitest supertest @types/supertest
# Python
pip install pytest httpx pytest-asyncio

Configure the app so tests can import it without it calling .listen():

// Export app separately from server startup
export const app = express();
// ... configure routes ...
if (process.env.NODE_ENV !== 'test') {
  app.listen(PORT);
}
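The Python side follows the same shape. A minimal sketch using only the stdlib `http.server` (a real project would instead expose its framework's app object, e.g. a FastAPI `app`, which tests import directly):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class AppHandler(BaseHTTPRequestHandler):
    """Minimal handler standing in for a real app."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'{"status":"ok"}')

def make_server(port: int = 8000) -> HTTPServer:
    # Tests can call this with port=0 to bind an ephemeral port
    return HTTPServer(("127.0.0.1", port), AppHandler)

# Serve only when run directly -- importing this module starts nothing
if __name__ == "__main__":
    make_server().serve_forever()
```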

Build check (catches: import errors, missing modules, syntax errors)

# TypeScript
npx tsc          # or: npx vite build
# Python
python -c "from app.main import app"   # Verify imports resolve

Do this now. Create the config files, install the packages, run each check once to confirm it works. Only then proceed to features.

Step 2 — Write Tests Alongside Features, Not After

Every feature you build should have a verification step immediately after:

| After building... | Verify with... |
| --- | --- |
| Database schema + seed data | Query the DB and check data exists |
| First API endpoint | Write an API test that calls it and checks the response |
| Order creation flow | Write a test: POST order, GET it back, verify persistence |
| Status update | Write a test: create order, PATCH status, verify it changed |
| Frontend page | Run the build to catch import errors, then manual smoke test |
| Error handling | Write a test: send bad data, verify 400 response format |

Pattern: Build → Test → Fix → Commit

1. Write the database layer
2. Write a test that queries it → run → confirm it passes
3. Write the first endpoint
4. Write a test that calls it → run → confirm it passes
5. Fix anything that fails
6. Commit working code with passing tests

Do NOT build the entire app and then write tests at the end. By then you don't know what's broken.

Step 3 — Run Verification Before Every Commit

Before committing, run all checks:

# TypeScript project
npm run typecheck && npm run lint && npm test && npm run build

# Python project
mypy app/ && ruff check app/ && pytest && python -c "from app.main import app"

If any check fails, fix it before committing. Never commit code you haven't verified.
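The command chain can also live in a small script so verification is one command everywhere. A sketch, with illustrative check commands you would adjust to your project:

```python
"""Hypothetical verify.py: run every check, stop at the first failure."""
import subprocess
import sys

CHECKS = [
    ["mypy", "app/"],
    ["ruff", "check", "app/"],
    ["pytest"],
]

def run_all(checks=CHECKS) -> bool:
    for cmd in checks:
        print("running:", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            return False  # fail fast, mirroring the && chain above
    return True

if __name__ == "__main__":
    sys.exit(0 if run_all() else 1)
```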

Step 4 — Use Verification Results as Feedback

When a test fails, that's information:

  • Type error → you're passing the wrong shape. Fix the type or the caller.
  • Test assertion failure → the endpoint returns something unexpected. Check your logic.
  • Import error → you referenced something that doesn't exist. Fix the import.
  • Lint error → you violated a convention. Fix the style.

Don't just fix the test to make it pass — understand why it failed and fix the root cause.

The 5 Essential API Tests

At minimum, every API should have these 5 tests (they catch the most bugs):

  1. Happy path — GET main endpoint, verify 200 with data
  2. Validation — POST with bad data, verify 400 with error message
  3. Not found — GET nonexistent resource, verify 404 (not 500)
  4. Persistence — POST data, GET it back, verify it persisted
  5. Error format — all errors return consistent {error: {code, message}} shape
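These five can be sketched end to end against a toy handler. Everything below is hypothetical (an in-memory dict standing in for a real framework and test client), but each assertion maps to one numbered test above:

```python
_items: dict = {}

def handle(method, path, body=None):
    """Tiny in-memory 'API' standing in for a real app + test client."""
    if method == "GET" and path == "/items":
        return 200, {"items": list(_items.values())}
    if method == "POST" and path == "/items":
        if not body or "name" not in body:
            return 400, {"error": {"code": "invalid", "message": "name is required"}}
        item_id = str(len(_items) + 1)
        _items[item_id] = {"id": item_id, "name": body["name"]}
        return 201, _items[item_id]
    if method == "GET" and path.startswith("/items/"):
        item = _items.get(path.split("/")[-1])
        if item is None:
            return 404, {"error": {"code": "not_found", "message": "no such item"}}
        return 200, item
    return 404, {"error": {"code": "not_found", "message": "no such route"}}

# 1. Happy path
status, data = handle("GET", "/items")
assert status == 200 and "items" in data
# 2. Validation
status, data = handle("POST", "/items", {})
assert status == 400 and data["error"]["code"] == "invalid"
# 3. Not found (404, not 500)
status, _ = handle("GET", "/items/999")
assert status == 404
# 4. Persistence
_, created = handle("POST", "/items", {"name": "widget"})
_, fetched = handle("GET", f"/items/{created['id']}")
assert fetched == created
# 5. Error format
_, data = handle("GET", "/nope")
assert set(data["error"]) == {"code", "message"}
```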

Search the Tessl registry for framework-specific testing skills that provide concrete patterns:

tessl search --type skills "<your-framework> testing"

Step 5 — Pre-Commit Skill Review

Before your final commit, review every installed skill's checklist. This is the most commonly skipped step — agents install skills, read them, build features, then commit without checking whether they actually applied the guidance.

Go through each installed tile in .tessl/tiles/ and check:

| Skill type | Check this is in your code |
| --- | --- |
| Error handling | Error middleware/exception handlers registered, custom error classes used |
| Security | Helmet/Talisman/headers middleware installed AND configured |
| Logging | Structured logger (pino/structlog) installed AND used in routes — no console.log/print |
| Health checks | GET /health endpoint exists and returns 200 |
| Testing | Test files exist, tests run and pass |
| Accessibility | ARIA attributes on interactive elements, labels on all inputs |
| Database | WAL mode enabled, foreign keys ON, indexes on FK columns |
| Configuration | Port + DB path from env vars, not hardcoded |

If any installed skill's core pattern is missing from your code, add it now. An installed skill that isn't applied is wasted.
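A spot-check like the logging row can even be automated. A sketch, where the path handling and the `print(` heuristic are assumptions (a TypeScript project would scan for `console.log` instead):

```python
from pathlib import Path

def find_raw_prints(root: str) -> list[tuple[str, int]]:
    """Return (file, line) pairs where a bare print( call appears."""
    hits = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if line.strip().startswith("print("):
                hits.append((str(path), lineno))
    return hits
```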

Checklist

Before writing features:

  • Type checking configured and passing on empty project
  • Linter configured
  • Test runner installed and configured
  • App exportable for testing (no .listen() on import)

With every feature:

  • Test written alongside the feature (not after)
  • Test run and passing before moving on
  • Type check still passes

Before every commit (CRITICAL):

  • Every installed skill's core pattern is present in the code
  • No console.log/print if a structured logging skill is installed
  • No missing middleware if error/security skills are installed
  • All tests pass
  • Type check passes
  • Linter passes
  • Build succeeds
