Rules, hooks, and skills that help agents find practice guidance before coding. Gap analysis across practice and technology domains, registry search to fill gaps, adaptive re-checking on a backoff schedule, and verification strategy (tests, type checking, linting) as continuous feedback loops.
Set up feedback mechanisms BEFORE building features. Every piece of code you write should be verifiable.
Without verification, an agent builds code and hopes it works. With verification, the agent builds code, runs a check, sees whether it worked, and fixes problems immediately. This is the difference between shipping bugs and shipping working software.
Verification is not an afterthought. Set it up first, use it continuously.
Before writing any feature code, configure these feedback mechanisms. Each one catches a different class of bug:
**Type checking**

```shell
# TypeScript
npx tsc --noEmit  # Check types without building

# Python
pip install mypy
mypy app/  # Check type annotations
```

Add to package.json / pyproject.toml:

```json
{ "scripts": { "typecheck": "tsc --noEmit" } }
```

**Linting**

```shell
# TypeScript
npm install -D eslint
npx eslint --init

# Python
pip install ruff
ruff check app/
```

**Testing**

```shell
# TypeScript
npm install -D vitest supertest @types/supertest

# Python
pip install pytest httpx pytest-asyncio
```

Configure so the app can be imported by tests without calling `.listen()`:

```typescript
// Export app separately from server startup
export const app = express();
// ... configure routes ...

if (process.env.NODE_ENV !== 'test') {
  app.listen(PORT);
}
```

**Build verification**

```shell
# TypeScript
npx tsc  # or: npx vite build

# Python
python -c "from app.main import app"  # Verify imports resolve
```

Do this now. Create the config files, install the packages, and run each check once to confirm it works. Only then proceed to features.
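The Python counterpart of the export-without-listening pattern is to keep server startup behind a `__main__` guard, so `from app.main import app` in a test never opens a port. A minimal sketch, with a stand-in `App` class rather than a real framework object:

```python
# app/main.py (sketch) -- importing this module must not start a server.
# `App` is a stand-in for a real framework app object (e.g. FastAPI's `app`).

class App:
    """Placeholder for a framework app object."""
    def __init__(self) -> None:
        self.started = False


app = App()


def serve() -> None:
    # Real code would hand `app` to a server, e.g. uvicorn.run(app).
    app.started = True


# Tests can do `from app.main import app` without this branch running:
if __name__ == "__main__":
    serve()
```

With this shape, `python -c "from app.main import app"` doubles as both the build check and proof that importing has no side effects.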
Every feature you build should have a verification step immediately after:
| After building... | Verify with... |
|---|---|
| Database schema + seed data | Query the DB and check data exists |
| First API endpoint | Write an API test that calls it and checks the response |
| Order creation flow | Write a test: POST order, GET it back, verify persistence |
| Status update | Write a test: create order, PATCH status, verify it changed |
| Frontend page | Run the build to catch import errors, then manual smoke test |
| Error handling | Write a test: send bad data, verify 400 response format |
1. Write the database layer
2. Write a test that queries it → run → confirm it passes
3. Write the first endpoint
4. Write a test that calls it → run → confirm it passes
5. Fix anything that fails
6. Commit working code with passing tests

Do NOT build the entire app and then write tests at the end. By then you don't know what's broken.
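Steps 1 and 2 above can be as small as this sqlite3 sketch (the `orders` table and seed row are assumptions, not part of any real schema):

```python
import sqlite3

# Step 1: the database layer -- schema plus seed data.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # matches the DB checklist later on
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT NOT NULL)"
)
conn.execute("INSERT INTO orders (status) VALUES ('pending')")


# Step 2: a test that queries it and confirms the seed data exists.
def test_seed_data_exists() -> None:
    rows = conn.execute("SELECT id, status FROM orders").fetchall()
    assert rows == [(1, "pending")]


test_seed_data_exists()
print("db layer test passed")
```

Running this once before writing the first endpoint tells you the schema and seed script actually work.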
Before committing, run all checks:
```shell
# TypeScript project
npm run typecheck && npm run lint && npm test && npm run build

# Python project
mypy app/ && ruff check app/ && pytest && python -c "from app.main import app"
```

If any check fails, fix it before committing. Never commit code you haven't verified.
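A small helper can run the gates in order and stop at the first failure. A sketch, with harmless demo commands swapped in so it runs anywhere (substitute your project's real checks, e.g. `mypy`, `ruff check`, `pytest`):

```python
import subprocess
import sys


def run_checks(checks: list[list[str]]) -> bool:
    """Run each check in order; stop at the first failure."""
    for cmd in checks:
        print("->", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            print("FAILED:", " ".join(cmd))
            return False
    return True


# Real gate (commands assumed -- substitute your own):
# run_checks([["mypy", "app/"], ["ruff", "check", "app/"], ["pytest"]])

# Demo: first "check" passes, second fails, so the gate blocks the commit.
ok = run_checks([
    [sys.executable, "-c", "print('types ok')"],
    [sys.executable, "-c", "import sys; sys.exit(1)"],
])
print("all checks passed" if ok else "commit blocked")
```

Failing fast keeps the feedback tied to the check that broke, instead of a wall of mixed output.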
When a test fails, that's information. Don't just fix the test to make it pass: understand why it failed and fix the root cause.
At minimum, every API should have these 5 tests (they catch the most bugs):

- Error responses use the `{error: {code, message}}` shape

Search the Tessl registry for framework-specific testing skills that provide concrete patterns:

```shell
tessl search --type skills "<your-framework> testing"
```

Before your final commit, review every installed skill's checklist. This is the most commonly skipped step: agents install skills, read them, build features, then commit without checking whether they actually applied the guidance.
Go through each installed tile in .tessl/tiles/ and check:
| Skill type | Check this is in your code |
|---|---|
| Error handling | Error middleware/exception handlers registered, custom error classes used |
| Security | Helmet/Talisman/headers middleware installed AND configured |
| Logging | Structured logger (pino/structlog) installed AND used in routes — no console.log/print |
| Health checks | GET /health endpoint exists and returns 200 |
| Testing | Test files exist, tests run and pass |
| Accessibility | ARIA attributes on interactive elements, labels on all inputs |
| Database | WAL mode enabled, foreign keys ON, indexes on FK columns |
| Configuration | Port + DB path from env vars, not hardcoded |
If any installed skill's core pattern is missing from your code, add it now. An installed skill that isn't applied is wasted.
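Some of these checks can be mechanized. For example, the logging row ("no console.log/print") can be enforced with a rough scan for stray logging calls (the paths, extensions, and regex here are assumptions; tune them to your codebase):

```python
import re
import tempfile
from pathlib import Path

# Patterns that should have been replaced by a structured logger.
BANNED = re.compile(r"\bconsole\.log\(|\bprint\(")


def find_stray_logging(root: Path, exts: tuple[str, ...] = (".ts", ".js")) -> list[str]:
    """Return file:line hits for banned logging calls under `root`."""
    hits = []
    for path in root.rglob("*"):
        if path.is_file() and path.suffix in exts:
            for lineno, line in enumerate(path.read_text().splitlines(), 1):
                if BANNED.search(line):
                    hits.append(f"{path.name}:{lineno}: {line.strip()}")
    return hits


# Demo on a throwaway file containing a leftover console.log:
tmp = Path(tempfile.mkdtemp())
(tmp / "routes.ts").write_text("console.log('order created')\n")
for hit in find_stray_logging(tmp):
    print(hit)
```

A scan like this makes "the skill was applied" a checkable claim rather than a memory exercise.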
Before writing features:

- Type checking, linting, the test runner, and build verification configured and run once
- App exported so tests can import it (no `.listen()` on import)

With every feature:

- A verification step run immediately after building it: write the test, run it, confirm it passes

Before committing (CRITICAL):

- Every installed skill's checklist reviewed and its core pattern present in the code

Before every commit:

- All checks pass: typecheck, lint, tests, build