Database architecture skills, docs, and rules for high-demand multi-tenant commerce platforms (PostgreSQL source of truth, Neo4j as derived GraphRAG projection, transactional outbox, RLS-based tenant isolation). Includes live schema introspection workflow via Postgres MCP.
These rules govern how the agent responds to "add an attribute / modify entity / model this concept" requests. They are always on.
The user often types something casual ("add a price field to product"). The agent never responds casually.
Every attribute-add request is mapped to exactly one of five placement options.
The agent walks all five options, gives a one-sentence "could fit" and "doesn't fit / best because" for each, and picks one with reasoning. The reasoning is part of the deliverable — never silent.
Default to Option E when the user's request reveals their mental model is misaligned with the canonical model (e.g., "color belongs on variant, not product"; "expiration belongs on lot, not product"; "selling price belongs on pricing rule, not product").
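As an illustration of an Option-E outcome, the agent redirects the attribute to the entity where it canonically belongs rather than where the user asked for it. This sketch assumes hypothetical product and product_variant tables; the names are illustrative, not the canonical schema:

```sql
-- Request: "add a color field to product".
-- Canonical model: color varies per variant, so it lands on product_variant.
-- (Table and column names here are illustrative.)
ALTER TABLE product_variant
  ADD COLUMN color text;  -- nullable first; constraints arrive in a later phase
```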
Before recommending any placement, the agent inspects the target table and its immediate neighbors via Postgres MCP. The cached snapshot is not enough — at the moment of the decision, ground truth is queried.
The agent specifically checks for duplicate columns. A target_sale_price next to a price next to a compare_at_price is a smell that must be surfaced.
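A duplicate-column check of this kind can be sketched against information_schema; the target table name (product) and the price-like pattern are assumptions for illustration:

```sql
-- Surface candidate overlaps before proposing a new price-like column.
SELECT column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_schema = 'public'
  AND table_name   = 'product'
  AND column_name ILIKE '%price%'
ORDER BY ordinal_position;
```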
The agent samples JSONB key distribution on any metadata column to detect prior informal use of the proposed attribute.
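One way to sample that distribution, assuming a JSONB metadata column on a hypothetical product table:

```sql
-- Count how often each top-level key appears in metadata.
-- A frequent ad-hoc key (e.g. "sale_price") signals prior informal use.
SELECT key, count(*) AS occurrences
FROM product, LATERAL jsonb_object_keys(metadata) AS key
WHERE metadata IS NOT NULL
GROUP BY key
ORDER BY occurrences DESC
LIMIT 20;
```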
The agent reads the last 20 entries of schema-change-log.md to find prior decisions about the same entity. Settled decisions are cited, not re-litigated.
Every proposal includes a 13-row blast radius table covering: incoming FKs, outgoing FKs, indexes touched, RLS policies, outbox events, Neo4j projection, tenant uniqueness, existing data shape, module ownership, compliance, read patterns, write patterns, API surface.
Inapplicable rows are marked N/A explicitly — never omitted.
Every schema change is delivered as expand/contract in four phases: Expand → Backfill → Cutover → Contract. Each phase has SQL, lock impact, rollback step, and backward-compatibility note.
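As an illustration, the four phases for promoting a hypothetical sale_price attribute out of JSONB might look like the following. Names are illustrative, and each phase ships separately with its own lock-impact and rollback notes:

```sql
-- Phase 1 (Expand): add the column as nullable; takes only a brief metadata lock.
ALTER TABLE product ADD COLUMN sale_price numeric(12, 2);

-- Phase 2 (Backfill): batched, throttled, idempotent; runs as a separate job, e.g.
-- UPDATE product SET sale_price = (metadata->>'sale_price')::numeric WHERE ...;

-- Phase 3 (Cutover): enforce NOT NULL without a long table scan under lock.
ALTER TABLE product
  ADD CONSTRAINT product_sale_price_not_null
  CHECK (sale_price IS NOT NULL) NOT VALID;
ALTER TABLE product VALIDATE CONSTRAINT product_sale_price_not_null;

-- Phase 4 (Contract): remove the old JSONB key once all readers have cut over.
-- UPDATE product SET metadata = metadata - 'sale_price';
```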
Indexes are created with CREATE INDEX CONCURRENTLY.
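A sketch of the concurrent index build, including the cleanup a failed build requires (index and column names are illustrative):

```sql
-- Must run outside a transaction block.
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_product_tenant_sale_price
  ON product (tenant_id, sale_price);

-- A failed CONCURRENTLY build leaves an INVALID index behind;
-- it must be dropped before retrying:
-- DROP INDEX CONCURRENTLY IF EXISTS idx_product_tenant_sale_price;
```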
Backfills are batched (typically 5,000 rows / batch), throttled, and idempotent.
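One batched-backfill shape, assuming a hypothetical sale_price column seeded from an existing price column; the statement is safe to re-run until it touches zero rows:

```sql
-- Backfill one batch of 5,000 rows; repeat until UPDATE reports 0 rows.
-- SKIP LOCKED keeps the batch from blocking on concurrent writers.
WITH batch AS (
  SELECT id
  FROM product
  WHERE sale_price IS NULL
  ORDER BY id
  LIMIT 5000
  FOR UPDATE SKIP LOCKED
)
UPDATE product p
SET sale_price = p.price
FROM batch b
WHERE p.id = b.id;
```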
The agent never applies a migration. It produces the migration file; the user runs it.
Every change includes test cases grouped by: unit, integration, tenant isolation (RLS positive + negative), migration (apply + rollback), projection sync (outbox → Neo4j), concurrency (relevant race), and idempotency (duplicate-request handling).
Tests are described specifically (which tenant, which RLS case, which race) — not abstractly.
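The RLS positive/negative pair might be sketched like this, assuming policies keyed on a session setting such as app.current_tenant (the setting name and tenant values are illustrative):

```sql
-- Positive case: a tenant sees its own rows.
BEGIN;
SET LOCAL app.current_tenant = 'tenant-a';
SELECT count(*) FROM product;     -- expected: tenant-a's own row count
COMMIT;

-- Negative case: a tenant must see zero rows belonging to another tenant.
BEGIN;
SET LOCAL app.current_tenant = 'tenant-b';
SELECT count(*) FROM product
WHERE tenant_id = 'tenant-a';     -- expected: 0
COMMIT;
```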
If the change affects an entity that emits domain events, the agent proposes the event delta: new event name, payload schema delta, version bump, consumer impact.
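If the outbox table follows the common aggregate/event shape (the column names below are assumptions, not the canonical schema), the event write for a changed attribute might look like:

```sql
-- Written in the same transaction as the schema-affected UPDATE,
-- so the relay can project it to Neo4j without dual-write risk.
INSERT INTO outbox (aggregate_type, aggregate_id, event_type, event_version, payload)
VALUES (
  'product',
  'a1b2c3d4-0000-0000-0000-000000000000',  -- illustrative aggregate id
  'product.sale_price_changed',
  2,                                        -- version bump for the payload delta
  jsonb_build_object('sale_price', 19.99)
);
```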
If the change affects a Neo4j-projected entity, the projection worker delta is described.
Every accepted change produces a drafted entry for schema-change-log.md, shown to the user but not committed; the user adds it after applying the migration.
Architecturally significant changes require a /speckit.adr invocation before the migration SQL is finalized.
Silent JSONB. Choosing JSONB to avoid the migration work, without justifying against § V.
Duplicate column. Adding a new column whose semantics overlap with an existing one, without first surfacing the overlap and proposing a promotion or rename.
Schema drift. Producing SQL that doesn't match current-schema-state.md.
Skipping the snapshot refresh. If Last sync is > 24h or after a known migration, the agent runs /speckit.dbsync first.
One-line answers. A casual question gets the full structured workflow output. Brevity is not kindness here — it's drift waiting to happen.
docs
skills
adr-drafting
commerce-database-architecture
graph-rag-boundary-review
mermaid-diagram-review
outbox-and-eventing-design
postgres-schema-introspection
schema-evolution-workflow