Database architecture skills, docs, and rules for high-demand multi-tenant commerce platforms (PostgreSQL source of truth, Neo4j as derived GraphRAG projection, transactional outbox, RLS-based tenant isolation). Includes live schema introspection workflow via Postgres MCP.
These rules govern every interaction the agent has with a Postgres MCP server. They are always on.
The user's database is real production-shaped data even in development. Mistakes at this layer are not undoable.
Forbidden: @modelcontextprotocol/server-postgres. This package was
archived July 2025 with a known SQL injection vulnerability (Datadog
Security Labs disclosure, May 2025). Even if it appears in old .mcp.json
files, it must be replaced.
Approved:
- postgres-mcp-pro (Crystal DBA) — preferred for general use; supports restricted access mode and explicit read/write configuration
- pgEdge Postgres MCP — read-only by default, full schema introspection, pg_stat_statements exposure

The connection is configured with ?read_only=true or an equivalent flag. The MCP role is read-only. Either:
- GRANT SELECT ON ALL TABLES IN SCHEMA <schemas> and no DML/DDL grants, OR
- the MCP role has SET default_transaction_read_only = on at the role level.

The MCP role has SET statement_timeout = '5s' at the role level.
The MCP role has SET idle_in_transaction_session_timeout = '10s'.
The MCP role has no access to pg_terminate_backend, dblink_*, pg_read_file, or pg_advisory_lock.
Production MCP queries run against a read replica, not the primary. If a connection string targets a primary in production, the agent warns and asks for confirmation.
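The role-level constraints above can be sketched as DDL. This is a minimal sketch, not a complete hardening script; the role name mcp_reader and the schema public are illustrative assumptions:

```sql
-- Hypothetical read-only MCP role; names are illustrative.
CREATE ROLE mcp_reader LOGIN;  -- credential provisioned via secret manager, never committed

-- Read-only access, no DML/DDL grants; cover future tables too.
GRANT USAGE ON SCHEMA public TO mcp_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO mcp_reader;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO mcp_reader;

-- Belt-and-suspenders: force read-only transactions at the role level.
ALTER ROLE mcp_reader SET default_transaction_read_only = on;

-- Bound query cost and connection hoarding.
ALTER ROLE mcp_reader SET statement_timeout = '5s';
ALTER ROLE mcp_reader SET idle_in_transaction_session_timeout = '10s';

-- Lock down one of the forbidden functions as an example; the exact
-- signature varies by Postgres version (pre-14 is just (integer)).
REVOKE EXECUTE ON FUNCTION pg_terminate_backend(integer, bigint) FROM PUBLIC;
```

Revoking from PUBLIC rather than from the role matters here: a role-level REVOKE has no effect on privileges the role inherits through PUBLIC.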
Connection strings are never committed. .mcp.json references environment variables only:

"env": { "DATABASE_URI": "${env:READONLY_DATABASE_URL}" }

The MCP server runs as a separate process under the developer's user — never as root, never inside the application container.
No SELECT * from business tables. Project only the columns the
introspection requires. Tenant data should not enter the LLM context
unless the user explicitly asks.
Every query has a LIMIT when targeting a table over 10,000 rows.
For sampling, use TABLESAMPLE SYSTEM (1).
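A compliant exploratory query against a large table might look like the following sketch; the table orders and its columns are hypothetical:

```sql
-- Project only the columns the introspection needs (no SELECT *),
-- sample roughly 1% of the table's pages, and cap the result set.
SELECT id, status, created_at
FROM orders TABLESAMPLE SYSTEM (1)
LIMIT 100;
```

TABLESAMPLE SYSTEM samples whole pages, so it is fast but block-correlated; that is acceptable for structural inspection, where statistical uniformity is not the goal.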
No DDL or DML through the MCP. Even if the role is misconfigured to allow it, the agent refuses.
No exploratory SELECT against tables with regulated data
(prescription_records, payment_card_metadata) unless the user has
explicitly scoped the request.
MCP queries against production replicas are logged with: timestamp, role, query text, row count, duration, request_id. The log lives in the database operator's observability stack, not in the project repo.
The agent surfaces every MCP call in its response. The user can see which queries ran and why.
If the MCP server is unavailable, the agent uses the cached snapshot in
.specify/memory/current-schema-state.md and clearly states that it's
operating on cached data.
If a query returns suspicious data (e.g., a column type the snapshot didn't have, a missing table), the agent re-runs the introspection rather than reasoning on the discrepancy.
If the agent detects the MCP role has more privileges than declared above (e.g., DML grants exist), it stops and reports a § XV violation before doing any introspection.
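One way to perform that privilege check before introspection is a query run as the MCP role itself (information_schema views only show grants visible to the current role, so running it as the role keeps the check honest):

```sql
-- Run as the MCP role. Any row here beyond SELECT means the role
-- is over-privileged relative to the rules above.
SELECT table_schema, table_name, privilege_type
FROM information_schema.role_table_grants
WHERE grantee = current_user
  AND privilege_type <> 'SELECT';
```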
MCP results are factual for current state, but they are not authoritative for future state. Architectural decisions still flow through the constitution. The fact that a column exists in the database does not retroactively justify it.
The agent never quotes user PII discovered via MCP back into the chat transcript. Counts, types, and structural facts are fine; row contents are not.
docs
skills:
- adr-drafting
- commerce-database-architecture
- graph-rag-boundary-review
- mermaid-diagram-review
- outbox-and-eventing-design
- postgres-schema-introspection
- schema-evolution-workflow