PostgreSQL patterns for Node.js with pg — connection pooling, parameterized queries
Every Node.js application that connects to PostgreSQL needs these defaults from the start. Do not wait to be asked — apply connection pooling with production settings, parameterized queries, proper type handling, and error code handling whenever you create or modify a Node.js service that uses PostgreSQL. These are not optimizations to add later; they are baseline requirements that prevent outages, security vulnerabilities, and subtle bugs.
Patterns for using PostgreSQL directly with the pg client — connection pooling, queries, transactions, type handling, and production hardening.
import { Pool } from 'pg';
const pool = new Pool({
connectionString: process.env.DATABASE_URL,
max: 20, // Max connections in pool
idleTimeoutMillis: 30000, // Close idle clients after 30s
connectionTimeoutMillis: 5000, // Fail if can't connect in 5s
statement_timeout: 30000, // Kill queries running longer than 30s
keepAlive: true, // Prevent infrastructure from killing idle TCP connections
keepAliveInitialDelayMillis: 10000, // Start keepalive probes after 10s idle
ssl: process.env.NODE_ENV === 'production'
? { rejectUnauthorized: true }
: undefined,
});
pool.on('error', (err) => {
console.error('Unexpected pool error:', err);
});
// Use pool.query for single queries (auto-acquires and releases connection)
export async function query(text: string, params?: any[]) {
return pool.query(text, params);
}
export { pool };

- Prefer the shared Pool over standalone Client instances
- pool.query() for single queries — handles connection lifecycle automatically
- pool.connect() only for transactions — must client.release() in finally block
- statement_timeout — prevents runaway queries from exhausting the pool
- keepAlive: true — cloud load balancers and PgBouncer silently kill idle TCP connections (typically after 30-60s); keepalive probes prevent this
- Never set rejectUnauthorized: false in production

async function shutdown() {
await pool.end(); // Drains active clients, disconnects, stops internal timers
process.exit(0);
}
process.on('SIGTERM', shutdown);
process.on('SIGINT', shutdown);

Call pool.end() on process termination. Without it, open connections prevent the Node.js process from exiting.
// SAFE — parameterized ($1, $2, etc.)
const { rows } = await pool.query(
'SELECT * FROM orders WHERE status = $1 AND created_at > $2',
[status, since]
);
// VULNERABLE — never do this
const { rows } = await pool.query(`SELECT * FROM orders WHERE status = '${status}'`);

// WRONG — dynamically building IN placeholders, error-prone and verbose
const placeholders = ids.map((_, i) => `$${i + 1}`).join(',');
await pool.query(`SELECT * FROM items WHERE id IN (${placeholders})`, ids);
// RIGHT — single parameter with ANY and array cast
const { rows } = await pool.query(
'SELECT * FROM items WHERE id = ANY($1::int[])',
[ids]
);
// Works with any type — text[], int[], uuid[], etc.
const { rows: articles } = await pool.query(
'SELECT * FROM articles WHERE status = ANY($1::text[])',
[statuses]
);
// For array columns (e.g. tags text[]), use overlap operator &&
const { rows: tagged } = await pool.query(
'SELECT * FROM articles WHERE tags && $1::text[]',
[tags]
);ANY($1::type[]) accepts a JavaScript array as a single parameter. It is simpler, safer (no dynamic SQL construction), and faster for large lists. For PostgreSQL array columns, use the && (overlap) operator instead of = ANY.
// Insert and return
const { rows: [order] } = await pool.query(
'INSERT INTO orders (customer_name, total_cents) VALUES ($1, $2) RETURNING *',
[customerName, totalCents]
);
// Single row lookup
const { rows: [item] } = await pool.query(
'SELECT * FROM menu_items WHERE id = $1',
[id]
);
if (!item) throw new NotFoundError('MenuItem', id);

PostgreSQL BIGINT (int8) and NUMERIC columns are returned as JavaScript strings, not numbers. This is because JavaScript Number cannot safely represent 64-bit integers or arbitrary-precision decimals.
const { rows: [row] } = await pool.query('SELECT id FROM orders WHERE id = $1', [1]);
typeof row.id; // "string" — not "number"!
row.id === 1; // false — silent bug

import pg from 'pg';
// Parse int8 (OID 20) as JavaScript number — safe if values < Number.MAX_SAFE_INTEGER
pg.types.setTypeParser(20, (val) => parseInt(val, 10));
// Or use native BigInt if you need full 64-bit range
pg.types.setTypeParser(20, (val) => BigInt(val));

// Parse numeric (OID 1700) as float — be aware of precision loss
pg.types.setTypeParser(1700, (val) => parseFloat(val));

Configure type parsers once at startup, before creating the pool. Choose the parser based on your value ranges — parseInt is fine for IDs and counts, but use BigInt or keep as string for values that may exceed Number.MAX_SAFE_INTEGER (2^53 - 1).
async function createOrder(input: CreateOrderInput): Promise<Order> {
const client = await pool.connect();
try {
await client.query('BEGIN');
const { rows: [order] } = await client.query(
'INSERT INTO orders (customer_name, total_cents) VALUES ($1, $2) RETURNING *',
[input.customerName, input.totalCents]
);
for (const item of input.items) {
await client.query(
'INSERT INTO order_items (order_id, menu_item_id, quantity, price_cents) VALUES ($1, $2, $3, $4)',
[order.id, item.menuItemId, item.quantity, item.priceCents]
);
}
await client.query('COMMIT');
return order;
} catch (err) {
await client.query('ROLLBACK');
throw err;
} finally {
client.release(); // CRITICAL — always release in finally
}
}

Always release the client in finally. A leaked connection exhausts the pool.
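The acquire/BEGIN/COMMIT/ROLLBACK/release sequence is easy to get subtly wrong when repeated across the codebase, so it is often centralized in a generic wrapper. A minimal sketch, assuming structural types so it works with pg.Pool (the `withTransaction` name and the interfaces are illustrative, not part of pg's API):

```typescript
// Structural types: anything with connect()/query()/release() works,
// including pg.Pool and pg.PoolClient.
interface TxClient {
  query(text: string, params?: unknown[]): Promise<unknown>;
  release(): void;
}
interface TxPool {
  connect(): Promise<TxClient>;
}

// Runs fn inside BEGIN/COMMIT, rolling back on any error.
// The client is released in finally no matter which path is taken.
async function withTransaction<T>(
  pool: TxPool,
  fn: (client: TxClient) => Promise<T>
): Promise<T> {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    const result = await fn(client);
    await client.query('COMMIT');
    return result;
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release(); // CRITICAL — always release in finally
  }
}
```

With this helper, createOrder reduces to `withTransaction(pool, async (client) => { ... })` and the release discipline lives in one place.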
Avoid inserting rows one at a time in a loop — each INSERT is a network round trip. Use unnest to insert many rows in a single query:
async function insertOrderItems(
client: pg.PoolClient,
orderId: number,
items: { menuItemId: number; quantity: number; priceCents: number }[]
): Promise<void> {
await client.query(
`INSERT INTO order_items (order_id, menu_item_id, quantity, price_cents)
SELECT $1, unnest($2::int[]), unnest($3::int[]), unnest($4::int[])`,
[
orderId,
items.map(i => i.menuItemId),
items.map(i => i.quantity),
items.map(i => i.priceCents),
]
);
}

unnest expands parallel arrays into rows. This is 2x+ faster than individual INSERTs and the advantage grows with wider tables and larger batches.
Handle specific PostgreSQL error codes instead of catching generically. The error object from pg includes a code property with a 5-character SQLSTATE code:
try {
await pool.query('INSERT INTO users (email) VALUES ($1)', [email]);
} catch (err: any) {
switch (err.code) {
case '23505': // unique_violation
throw new ConflictError('Email already exists');
case '23503': // foreign_key_violation
throw new BadRequestError('Referenced record not found');
case '23514': // check_violation
throw new BadRequestError('Value out of allowed range');
default:
throw err;
}
}

| Code | Name | Typical cause |
|---|---|---|
| 23505 | unique_violation | Duplicate key on INSERT/UPDATE |
| 23503 | foreign_key_violation | Referenced row doesn't exist |
| 23514 | check_violation | CHECK constraint failed |
| 23502 | not_null_violation | Required column is NULL |
| 57014 | query_canceled | statement_timeout hit |
| 40001 | serialization_failure | Concurrent transaction conflict |
| 40P01 | deadlock_detected | Two transactions waiting on each other |
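40001 and 40P01 are transient: PostgreSQL aborted one transaction so a conflicting one could proceed, and the standard response is to retry the whole transaction. A minimal retry sketch (the `retryTransient` name and the backoff numbers are illustrative choices, not pg API):

```typescript
// SQLSTATE codes that indicate a transient conflict worth retrying.
const TRANSIENT = new Set(['40001', '40P01']); // serialization_failure, deadlock_detected

// Re-runs fn until it succeeds or attempts are exhausted.
// fn must re-execute the WHOLE transaction, not just the failing statement.
async function retryTransient<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 50
): Promise<T> {
  for (let i = 0; ; i++) {
    try {
      return await fn();
    } catch (err: any) {
      if (!TRANSIENT.has(err?.code) || i >= attempts - 1) throw err;
      // Exponential backoff with jitter so retries don't collide in lockstep.
      const delay = baseDelayMs * 2 ** i * (0.5 + Math.random());
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Usage would look like `await retryTransient(() => createOrder(input))`. Constraint violations such as 23505 are deliberately not retried — they will fail the same way every time.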
Use versioned SQL migration files — never run DDL at application startup:
-- migrations/001_initial.sql
CREATE TABLE orders (
id SERIAL PRIMARY KEY,
customer_name VARCHAR(100) NOT NULL,
status VARCHAR(20) NOT NULL DEFAULT 'received'
CHECK (status IN ('received', 'preparing', 'ready', 'picked_up', 'cancelled')),
total_cents INTEGER NOT NULL CHECK (total_cents >= 0),
created_at TIMESTAMPTZ DEFAULT NOW() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT NOW() NOT NULL
);
CREATE INDEX idx_orders_status ON orders(status);

- TIMESTAMPTZ (not TIMESTAMP) for all time columns
- CHECK constraints for enum-like columns
- Migrations applied by a versioned tool (node-pg-migrate, dbmate, or custom)

Checklist:

- Pool configured with max, idleTimeoutMillis, connectionTimeoutMillis, statement_timeout
- keepAlive: true with keepAliveInitialDelayMillis
- DATABASE_URL from environment variable
- pool.end() on SIGTERM/SIGINT
- ANY($1::type[]), not dynamic IN clauses
- Transactions via pool.connect() with finally { client.release() }
- Batch inserts via unnest or multi-row VALUES, not loops
- RETURNING * on INSERT/UPDATE for immediate result

Reference topics:

- pg package — pool options: max, timeouts, keepAlive, allowExitOnIdle
- How pg converts PostgreSQL types to JavaScript — the setTypeParser API
- Why ANY($1::type[]) is preferred over dynamic IN
- Why pool.end() is needed and how to use it
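The migration approach above assumes some runner applies each versioned file exactly once, in order. A minimal custom runner can be sketched as follows; the `runMigrations` name and the `schema_migrations` bookkeeping table are illustrative assumptions, and a production runner would additionally wrap each file in a transaction and take an advisory lock:

```typescript
import { readdirSync, readFileSync } from 'node:fs';
import { join } from 'node:path';

// Structural type so the runner works with pg.Pool or a compatible stand-in.
interface Queryable {
  query(text: string, params?: unknown[]): Promise<{ rows: any[] }>;
}

// Applies dir/*.sql in filename order, recording each applied file in
// schema_migrations so re-runs are idempotent. Returns newly applied files.
async function runMigrations(db: Queryable, dir: string): Promise<string[]> {
  await db.query(
    'CREATE TABLE IF NOT EXISTS schema_migrations (name TEXT PRIMARY KEY)'
  );
  const { rows } = await db.query('SELECT name FROM schema_migrations');
  const done = new Set(rows.map((r) => r.name));
  const applied: string[] = [];
  const files = readdirSync(dir).filter((f) => f.endsWith('.sql')).sort();
  for (const file of files) {
    if (done.has(file)) continue;
    await db.query(readFileSync(join(dir, file), 'utf8'));
    await db.query('INSERT INTO schema_migrations (name) VALUES ($1)', [file]);
    applied.push(file);
  }
  return applied;
}
```

Sorting by filename is why the numeric prefix convention (001_, 002_, ...) matters: lexicographic order must match the intended apply order.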