name: postgresql-node-best-practices
description: PostgreSQL best practices that must be applied by default in every Node.js application using the pg client. Covers connection pooling with production settings, parameterized queries, transactions, BIGINT/NUMERIC type handling, PostgreSQL error code handling, bulk inserts with unnest, and production hardening (SSL, keepAlive, graceful shutdown). Always apply these patterns when creating or modifying any Node.js service that connects to PostgreSQL.
keywords: postgresql, postgres, pg, node-postgres, connection pool, parameterized queries, postgres transactions, pg pool, postgres node, database connection, postgres typescript, bigint, ssl, statement_timeout, unnest, bulk insert, error codes, graceful shutdown
license: MIT

PostgreSQL + Node.js (pg)

Every Node.js application that connects to PostgreSQL needs these defaults from the start. Do not wait to be asked — apply connection pooling with production settings, parameterized queries, proper type handling, and error code handling whenever you create or modify a Node.js service that uses PostgreSQL. These are not optimizations to add later; they are baseline requirements that prevent outages, security vulnerabilities, and subtle bugs.

Patterns for using PostgreSQL directly with the pg client — connection pooling, queries, transactions, type handling, and production hardening.


Connection Pool

import { Pool } from 'pg';

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 20,                            // Max connections in pool
  idleTimeoutMillis: 30000,           // Close idle clients after 30s
  connectionTimeoutMillis: 5000,      // Fail if can't connect in 5s
  statement_timeout: 30000,           // Kill queries running longer than 30s
  keepAlive: true,                    // Prevent infrastructure from killing idle TCP connections
  keepAliveInitialDelayMillis: 10000, // Start keepalive probes after 10s idle
  ssl: process.env.NODE_ENV === 'production'
    ? { rejectUnauthorized: true }
    : undefined,
});

pool.on('error', (err) => {
  console.error('Unexpected pool error:', err);
});

// Use pool.query for single queries (auto-acquires and releases connection)
export async function query(text: string, params?: any[]) {
  return pool.query(text, params);
}

export { pool };

Key rules

  • Always use the pool, never create individual Client instances
  • pool.query() for single queries — handles connection lifecycle automatically
  • pool.connect() only for transactions — must client.release() in finally block
  • Connection string from env — never hardcode credentials
  • statement_timeout — prevents runaway queries from exhausting the pool
  • keepAlive: true — cloud load balancers and PgBouncer silently kill idle TCP connections (typically after 30-60s); keepalive probes prevent this
  • SSL in production — cloud PostgreSQL (RDS, Cloud SQL, Neon, Supabase) requires SSL; never set rejectUnauthorized: false in production

Graceful shutdown

async function shutdown() {
  await pool.end(); // Drains active clients, disconnects, stops internal timers
  process.exit(0);
}
process.on('SIGTERM', shutdown);
process.on('SIGINT', shutdown);

Call pool.end() on process termination. Without it, open connections prevent the Node.js process from exiting.
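
Signal handlers can fire more than once (for example SIGINT from a terminal followed by SIGTERM from an orchestrator), and pg rejects a second pool.end() call. A minimal once-guard, as a sketch — shutdownOnce is an illustrative helper name, not part of pg:

```typescript
// Guard so the pool is closed at most once, even if several signals arrive.
// `closePool` stands in for `() => pool.end()` from the snippet above.
function shutdownOnce(closePool: () => Promise<void>): () => Promise<void> {
  let closing: Promise<void> | null = null;
  return () => {
    if (!closing) closing = closePool(); // first call starts the drain
    return closing;                      // later calls reuse the same promise
  };
}

// Usage (sketch):
// const shutdown = shutdownOnce(() => pool.end());
// process.on('SIGTERM', () => shutdown().then(() => process.exit(0)));
// process.on('SIGINT', () => shutdown().then(() => process.exit(0)));
```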


Parameterized Queries

// SAFE — parameterized ($1, $2, etc.)
const { rows } = await pool.query(
  'SELECT * FROM orders WHERE status = $1 AND created_at > $2',
  [status, since]
);

// VULNERABLE — never do this (string interpolation enables SQL injection)
await pool.query(`SELECT * FROM orders WHERE status = '${status}'`);

Array parameters — use ANY, not IN

// WRONG — dynamically building IN placeholders, error-prone and verbose
const placeholders = ids.map((_, i) => `$${i + 1}`).join(',');
await pool.query(`SELECT * FROM items WHERE id IN (${placeholders})`, ids);

// RIGHT — single parameter with ANY and array cast
const { rows } = await pool.query(
  'SELECT * FROM items WHERE id = ANY($1::int[])',
  [ids]
);

// Works with any type — text[], int[], uuid[], etc.
const { rows: articles } = await pool.query(
  'SELECT * FROM articles WHERE status = ANY($1::text[])',
  [statuses]
);

// For array columns (e.g. tags text[]), use overlap operator &&
const { rows: tagged } = await pool.query(
  'SELECT * FROM articles WHERE tags && $1::text[]',
  [tags]
);

ANY($1::type[]) accepts a JavaScript array as a single parameter. It is simpler, safer (no dynamic SQL construction), and faster for large lists. For PostgreSQL array columns, use the && (overlap) operator instead of = ANY.

Returning rows

// Insert and return
const { rows: [order] } = await pool.query(
  'INSERT INTO orders (customer_name, total_cents) VALUES ($1, $2) RETURNING *',
  [customerName, totalCents]
);

// Single row lookup
const { rows: [item] } = await pool.query(
  'SELECT * FROM menu_items WHERE id = $1',
  [id]
);
if (!item) throw new NotFoundError('MenuItem', id);
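
The destructure-and-check pattern above repeats in most services and can be factored into a small helper. A sketch — queryOne and the Queryable interface are illustrative names, not part of pg:

```typescript
// Queryable matches the query() shape shared by pg's Pool and PoolClient,
// so the helper works inside and outside transactions.
interface Queryable {
  query(text: string, params?: any[]): Promise<{ rows: any[] }>;
}

// Run a query expected to return at most one row; null means "not found".
async function queryOne<T>(
  db: Queryable,
  text: string,
  params?: any[]
): Promise<T | null> {
  const { rows } = await db.query(text, params);
  return (rows[0] as T) ?? null;
}

// const item = await queryOne<MenuItem>(pool, 'SELECT * FROM menu_items WHERE id = $1', [id]);
// if (!item) throw new NotFoundError('MenuItem', id);
```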

Type Coercion Pitfalls

PostgreSQL BIGINT (int8) and NUMERIC columns are returned as JavaScript strings, not numbers. This is because JavaScript Number cannot safely represent 64-bit integers or arbitrary-precision decimals.

const { rows: [row] } = await pool.query('SELECT id FROM orders WHERE id = $1', [1]);
typeof row.id; // "string" — not "number"!
row.id === 1;  // false — silent bug

Fixing BIGINT parsing

import pg from 'pg';

// Parse int8 (OID 20) as JavaScript number — safe if values < Number.MAX_SAFE_INTEGER
pg.types.setTypeParser(20, (val) => parseInt(val, 10));

// Or use native BigInt if you need full 64-bit range
pg.types.setTypeParser(20, (val) => BigInt(val));

Fixing NUMERIC parsing

// Parse numeric (OID 1700) as float — be aware of precision loss
pg.types.setTypeParser(1700, (val) => parseFloat(val));

Configure type parsers once at startup, before creating the pool. Choose the parser based on your value ranges — parseInt is fine for IDs and counts, but use BigInt or keep as string for values that may exceed Number.MAX_SAFE_INTEGER (2^53 - 1).
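
Combining the two options, a parser can check the range at runtime and fall back to the raw string for values outside Number's safe range. A sketch (parseInt8 is an illustrative name); the mixed return type means callers must handle both:

```typescript
// Parse int8 as a number when exactly representable, otherwise keep the
// original string. Any value whose Number conversion is a safe integer is
// exact, so no precision is lost on the number path.
function parseInt8(val: string): number | string {
  const n = Number(val);
  return Number.isSafeInteger(n) ? n : val;
}

// At startup, before creating the pool:
// pg.types.setTypeParser(20, parseInt8);
```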


Transactions

async function createOrder(input: CreateOrderInput): Promise<Order> {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');

    const { rows: [order] } = await client.query(
      'INSERT INTO orders (customer_name, total_cents) VALUES ($1, $2) RETURNING *',
      [input.customerName, input.totalCents]
    );

    for (const item of input.items) {
      await client.query(
        'INSERT INTO order_items (order_id, menu_item_id, quantity, price_cents) VALUES ($1, $2, $3, $4)',
        [order.id, item.menuItemId, item.quantity, item.priceCents]
      );
    }

    await client.query('COMMIT');
    return order;
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release(); // CRITICAL — always release in finally
  }
}

Always release the client in finally. A leaked connection exhausts the pool.
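
The BEGIN/COMMIT/ROLLBACK/release boilerplate is identical in every transactional function, so it can be factored into a wrapper. A sketch, assuming pg-shaped Pool and PoolClient interfaces (withTransaction is an illustrative name):

```typescript
// Minimal interfaces matching the pg shapes the wrapper relies on.
interface TxClient {
  query(text: string, params?: any[]): Promise<any>;
  release(): void;
}
interface TxPool {
  connect(): Promise<TxClient>;
}

// Run `fn` inside a transaction: COMMIT on success, ROLLBACK on error,
// and always release the client back to the pool.
async function withTransaction<T>(
  pool: TxPool,
  fn: (client: TxClient) => Promise<T>
): Promise<T> {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    const result = await fn(client);
    await client.query('COMMIT');
    return result;
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release(); // runs on every path, including ROLLBACK
  }
}

// const order = await withTransaction(pool, async (client) => { /* inserts */ });
```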


Bulk Inserts with UNNEST

Avoid inserting rows one at a time in a loop — each INSERT is a network round trip. Use unnest to insert many rows in a single query:

async function insertOrderItems(
  client: pg.PoolClient,
  orderId: number,
  items: { menuItemId: number; quantity: number; priceCents: number }[]
): Promise<void> {
  await client.query(
    `INSERT INTO order_items (order_id, menu_item_id, quantity, price_cents)
     SELECT $1, unnest($2::int[]), unnest($3::int[]), unnest($4::int[])`,
    [
      orderId,
      items.map(i => i.menuItemId),
      items.map(i => i.quantity),
      items.map(i => i.priceCents),
    ]
  );
}

unnest expands parallel arrays into rows. This is 2x+ faster than individual INSERTs and the advantage grows with wider tables and larger batches.
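
The other batching option from the checklist is a multi-row VALUES clause, useful when column types make array casts awkward. A sketch of a placeholder builder (buildValuesPlaceholders is an illustrative name):

```typescript
// Build '($1, $2, $3), ($4, $5, $6), ...' for rowCount rows of colCount
// columns each, numbering placeholders left to right, row by row.
function buildValuesPlaceholders(rowCount: number, colCount: number): string {
  const rows: string[] = [];
  for (let r = 0; r < rowCount; r++) {
    const cols: string[] = [];
    for (let c = 0; c < colCount; c++) {
      cols.push(`$${r * colCount + c + 1}`);
    }
    rows.push(`(${cols.join(', ')})`);
  }
  return rows.join(', ');
}

// buildValuesPlaceholders(2, 3) → '($1, $2, $3), ($4, $5, $6)'
// await client.query(
//   `INSERT INTO order_items (order_id, menu_item_id, quantity)
//    VALUES ${buildValuesPlaceholders(items.length, 3)}`,
//   items.flatMap(i => [orderId, i.menuItemId, i.quantity])
// );
```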


PostgreSQL Error Codes

Handle specific PostgreSQL error codes instead of catching generically. The error object from pg includes a code property with a 5-character SQLSTATE code:

try {
  await pool.query('INSERT INTO users (email) VALUES ($1)', [email]);
} catch (err: any) {
  switch (err.code) {
    case '23505': // unique_violation
      throw new ConflictError('Email already exists');
    case '23503': // foreign_key_violation
      throw new BadRequestError('Referenced record not found');
    case '23514': // check_violation
      throw new BadRequestError('Value out of allowed range');
    default:
      throw err;
  }
}

Common codes

Code   Name                    Typical cause
23505  unique_violation        Duplicate key on INSERT/UPDATE
23503  foreign_key_violation   Referenced row doesn't exist
23514  check_violation         CHECK constraint failed
23502  not_null_violation      Required column is NULL
57014  query_canceled          statement_timeout hit
40001  serialization_failure   Concurrent transaction conflict
40P01  deadlock_detected       Two transactions waiting on each other
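
Of the codes above, 40001 and 40P01 indicate transient conflicts that are safe to retry once the failed transaction has rolled back. A retry sketch — withRetry, the attempt count, and the backoff values are illustrative:

```typescript
// Codes from the table above that indicate a transient, retryable conflict.
const RETRYABLE_CODES = new Set(['40001', '40P01']); // serialization_failure, deadlock_detected

// Retry a unit of work (e.g. a whole transaction) on transient errors,
// with simple exponential backoff between attempts.
async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastErr: any;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err: any) {
      if (!RETRYABLE_CODES.has(err?.code)) throw err; // non-transient: rethrow immediately
      lastErr = err;
      await new Promise((r) => setTimeout(r, 50 * 2 ** i)); // 50ms, 100ms, ...
    }
  }
  throw lastErr;
}

// const order = await withRetry(() => createOrder(input));
```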

Migrations

Use versioned SQL migration files — never run DDL at application startup:

-- migrations/001_initial.sql
CREATE TABLE orders (
  id SERIAL PRIMARY KEY,
  customer_name VARCHAR(100) NOT NULL,
  status VARCHAR(20) NOT NULL DEFAULT 'received'
    CHECK (status IN ('received', 'preparing', 'ready', 'picked_up', 'cancelled')),
  total_cents INTEGER NOT NULL CHECK (total_cents >= 0),
  created_at TIMESTAMPTZ DEFAULT NOW() NOT NULL,
  updated_at TIMESTAMPTZ DEFAULT NOW() NOT NULL
);

CREATE INDEX idx_orders_status ON orders(status);

  • Use TIMESTAMPTZ (not TIMESTAMP) for all time columns
  • Add CHECK constraints for enum-like columns
  • Add indexes on foreign keys and columns used in WHERE/JOIN clauses
  • Use a migration runner (node-pg-migrate, dbmate, or custom)
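
Whichever runner you use, its core decision — which files still need to run — can be kept as a pure function and tested without a database. A sketch (pendingMigrations is an illustrative name, assuming NNN_name.sql filenames and an applied-set loaded from a tracking table such as schema_migrations):

```typescript
// Given migration filenames and the set already applied, return the
// pending ones in numeric prefix order (so '010' sorts after '002').
function pendingMigrations(files: string[], applied: Set<string>): string[] {
  return files
    .filter((f) => f.endsWith('.sql') && !applied.has(f))
    .sort((a, b) => parseInt(a, 10) - parseInt(b, 10));
}
```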

Checklist

  • Pool with max, idleTimeoutMillis, connectionTimeoutMillis, statement_timeout
  • keepAlive: true with keepAliveInitialDelayMillis
  • DATABASE_URL from environment variable
  • SSL configured for production
  • Graceful shutdown with pool.end() on SIGTERM/SIGINT
  • All queries parameterized ($1, $2 — no string interpolation)
  • Array parameters use ANY($1::type[]), not dynamic IN clauses
  • BIGINT/NUMERIC type parsers configured if these types are used
  • Transactions use pool.connect() with finally { client.release() }
  • Bulk inserts use unnest or multi-row VALUES, not loops
  • PostgreSQL error codes handled (23505, 23503, etc.)
  • RETURNING * on INSERT/UPDATE for immediate result
  • Pool error handler registered
  • Migrations in version-controlled SQL files
  • Indexes on foreign keys and filtered columns

Verifiers

  • pg-pool-setup — Connection pooling with proper configuration