SQL Builders

Warehouse-specific SQL generation for field quoting, string escaping, and metric aggregations.

Quick Reference

Warehouse    Field Quote  Float Type  Percentile Function    Concat
BigQuery     `            FLOAT64     APPROX_QUANTILES       CONCAT()
ClickHouse   "            FLOAT       quantile() / median()  CONCAT()
Databricks   `            FLOAT       PERCENTILE()           CONCAT()
PostgreSQL   "            FLOAT       PERCENTILE_CONT        ||
Redshift     "            FLOAT       PERCENTILE_CONT        ||
Snowflake    "            FLOAT       PERCENTILE_CONT        CONCAT()
Trino        "            DOUBLE      APPROX_PERCENTILE      CONCAT()

Factory Function

import { warehouseSqlBuilderFromType } from '@lightdash/warehouses';
import { SupportedDbtAdapter, WeekDay } from '@lightdash/common';

const builder = warehouseSqlBuilderFromType(SupportedDbtAdapter.POSTGRES, WeekDay.MONDAY);
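
The returned builder exposes the warehouse-specific values summarized in the Quick Reference table. A minimal sketch (expected outputs follow that table; the week-start argument is passed explicitly since its optionality may vary by version):

const pg = warehouseSqlBuilderFromType(SupportedDbtAdapter.POSTGRES, WeekDay.MONDAY);
pg.getFieldQuoteChar(); // '"'
pg.getFloatingType();   // 'FLOAT'

const bq = warehouseSqlBuilderFromType(SupportedDbtAdapter.BIGQUERY, WeekDay.MONDAY);
bq.getFieldQuoteChar(); // '`'
bq.getFloatingType();   // 'FLOAT64'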

Common Interface

All builders implement the WarehouseSqlBuilder interface:

/**
 * Common SQL builder interface
 * All warehouse-specific builders implement these methods
 * Note: Base class is not exported - use factory function or client instances
 */
interface WarehouseSqlBuilder {
  /**
   * Get configured week start day
   * @returns WeekDay enum value (0=Monday, 6=Sunday), null, or undefined
   */
  getStartOfWeek(): WeekDay | null | undefined;

  /**
   * Get DBT adapter type for this warehouse
   * @returns SupportedDbtAdapter enum value
   */
  getAdapterType(): SupportedDbtAdapter;

  /**
   * Get character used for quoting field/column names
   * @returns Quote character: '"' (most), '`' (BigQuery, Databricks)
   */
  getFieldQuoteChar(): string;

  /**
   * Get character used for quoting string literals
   * @returns Quote character: "'" (single quote for all warehouses)
   */
  getStringQuoteChar(): string;

  /**
   * Get character used for escaping strings
   * @returns Escape character: "\\" or "'" (varies by warehouse)
   */
  getEscapeStringQuoteChar(): string;

  /**
   * Get floating-point type name for this warehouse
   * @returns Type name: 'FLOAT', 'FLOAT64', 'DOUBLE', or 'DOUBLE PRECISION'
   */
  getFloatingType(): string;

  /**
   * Generate SQL for metric aggregation
   * @param sql - Column SQL expression
   * @param metric - Metric definition (type, sql, optional percentile)
   * @returns SQL string for the metric aggregation
   */
  getMetricSql(sql: string, metric: Metric): string;

  /**
   * Generate SQL for string concatenation
   * @param args - String expressions to concatenate
   * @returns SQL concatenation expression
   */
  concatString(...args: string[]): string;

  /**
   * Escape string value for SQL query (prevents SQL injection)
   * @param value - Raw string value
   * @returns Escaped string safe for SQL query
   */
  escapeString(value: string): string;
}
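
Because every builder implements the same interface, warehouse-agnostic helpers can be written against it. A minimal sketch (quoteIdentifier is a hypothetical helper, not part of the package; a structural parameter type is used to avoid assuming where the interface type is exported from):

import { warehouseSqlBuilderFromType } from '@lightdash/warehouses';
import { SupportedDbtAdapter, WeekDay } from '@lightdash/common';

// Hypothetical helper: wrap an identifier in the warehouse's field quote char.
const quoteIdentifier = (
  builder: { getFieldQuoteChar(): string },
  name: string,
): string => {
  const q = builder.getFieldQuoteChar();
  return `${q}${name}${q}`;
};

const pg = warehouseSqlBuilderFromType(SupportedDbtAdapter.POSTGRES, WeekDay.MONDAY);
quoteIdentifier(pg, 'user_id'); // → "user_id"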

Usage via Client

Warehouse clients created from credentials also implement the WarehouseSqlBuilder interface:

const client = warehouseClientFromCredentials(credentials);

// Get quote characters
const quote = client.getFieldQuoteChar();
const sql = `SELECT ${quote}user_id${quote} FROM ${quote}users${quote}`;

// Escape user input
const safe = client.escapeString(userInput);

// Generate metric SQL
const sumSql = client.getMetricSql('revenue', {
  type: MetricType.SUM,
  sql: 'revenue',
  // Note: the full Metric interface in @lightdash/common extends Field and
  // includes additional required properties. SQL builders only read type,
  // sql, and the optional percentile, hence the cast below.
} as Metric);
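
Percentile metrics additionally set percentile (a sketch; expected output per the warehouse-specific mappings below):

const p95Sql = client.getMetricSql('revenue', {
  type: MetricType.PERCENTILE,
  sql: 'revenue',
  percentile: 95, // 0-100
} as Metric);
// PostgreSQL → PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY revenue)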

Metric Interface (simplified for SQL builder usage):

import { type Metric, MetricType, FieldType } from '@lightdash/common';

/**
 * Metric definition for SQL generation
 * Note: The full Metric interface in @lightdash/common extends Field and includes
 * many additional required properties. SQL builders primarily use type, sql, and percentile.
 *
 * For getMetricSql() usage, you typically only need to provide:
 */
interface MetricForSqlBuilder {
  /** Metric aggregation type */
  type: MetricType;
  /** SQL expression for the metric (column name or calculation) */
  sql: string;
  /** Optional percentile value (0-100) for percentile metrics */
  percentile?: number;
}

/**
 * Complete Metric interface (from @lightdash/common)
 * Extends the Field interface with metric-specific properties
 */
interface Metric extends Field {
  /** Field type discriminator - must be FieldType.METRIC */
  fieldType: FieldType.METRIC;
  /** Metric aggregation type */
  type: MetricType;
  /** SQL expression for the metric */
  sql: string;
  /** Unique metric name within the table */
  name: string;
  /** Human-readable label */
  label: string;
  /** Table name this metric belongs to */
  table: string;
  /** Human-readable table label */
  tableLabel: string;
  /** Whether this metric is hidden */
  hidden: boolean;
  /** Optional percentile value (0-100) for percentile metrics */
  percentile?: number;
  /** Optional metric filters */
  filters?: MetricFilterRule[];
  /** Optional format options */
  formatOptions?: CustomFormat;
  /** Optional description */
  description?: string;
  // ... plus additional optional fields from Field interface
}

/**
 * Base Field interface (from @lightdash/common)
 * All metrics, dimensions, and table calculations extend this
 */
interface Field {
  fieldType: FieldType;
  type: string;
  name: string;
  label: string;
  table: string;
  tableLabel: string;
  sql: string;
  description?: string;
  hidden: boolean;
  format?: Format | string;
  groups?: string[];
  urls?: FieldUrl[];
  index?: number;
  tags?: string[];
}

Metric SQL Generation

Standard Metrics (All Warehouses)

MetricType.SUM            → SUM(col)
MetricType.COUNT          → COUNT(col)
MetricType.COUNT_DISTINCT → COUNT(DISTINCT col)
MetricType.MIN            → MIN(col)
MetricType.MAX            → MAX(col)
MetricType.AVERAGE        → AVG(col)
                            // PostgreSQL: AVG(col::DOUBLE PRECISION)
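
For example, rendering two standard metrics through a PostgreSQL builder (a sketch; exact whitespace and casts may differ by version):

import { warehouseSqlBuilderFromType } from '@lightdash/warehouses';
import { SupportedDbtAdapter, MetricType, WeekDay, type Metric } from '@lightdash/common';

const pg = warehouseSqlBuilderFromType(SupportedDbtAdapter.POSTGRES, WeekDay.MONDAY);

pg.getMetricSql('revenue', { type: MetricType.SUM, sql: 'revenue' } as Metric);
// → SUM(revenue)
pg.getMetricSql('revenue', { type: MetricType.AVERAGE, sql: 'revenue' } as Metric);
// → AVG(revenue::DOUBLE PRECISION)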

Percentile Metrics (Warehouse-Specific)

BigQuery (approximate):

MEDIAN     → APPROX_QUANTILES(col, 100)[OFFSET(50)]
PERCENTILE → APPROX_QUANTILES(col, 100)[OFFSET(95)]

ClickHouse:

MEDIAN     → median(col)
PERCENTILE → quantile(0.95)(col)

Databricks:

MEDIAN     → PERCENTILE(col, 0.5)
PERCENTILE → PERCENTILE(col, 0.95)

PostgreSQL/Snowflake (exact):

MEDIAN     → PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY col)
PERCENTILE → PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY col)

Trino (approximate):

MEDIAN     → APPROX_PERCENTILE(col, 0.5)
PERCENTILE → APPROX_PERCENTILE(col, 0.95)
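
Continuing the sketch above, the same median metric rendered through two builders (expected output per the mappings in this section):

const median = { type: MetricType.MEDIAN, sql: 'amount' } as Metric;

warehouseSqlBuilderFromType(SupportedDbtAdapter.TRINO, WeekDay.MONDAY)
  .getMetricSql('amount', median);
// → APPROX_PERCENTILE(amount, 0.5)

warehouseSqlBuilderFromType(SupportedDbtAdapter.POSTGRES, WeekDay.MONDAY)
  .getMetricSql('amount', median);
// → PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY amount)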

String Escaping

All warehouses apply these escaping rules to prevent SQL injection.

Security Implementation: The escapeString() method performs multiple passes to sanitize input:

  1. Normalizes Unicode characters (converts to NFC form)
  2. Removes SQL comments: -- (line comments) and /* */ (block comments)
  3. Removes null bytes: \x00
  4. Applies warehouse-specific quote escaping (see below)
  5. Escapes backslashes for LIKE pattern wildcards where applicable
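
An illustrative pass on a PostgreSQL-style client (a sketch; the exact sanitized output may vary by warehouse and version):

client.escapeString("Robert'); DROP TABLE users; --");
// Comment stripped, single quotes doubled:
// → Robert''); DROP TABLE users;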

Warehouse-specific rules:

PostgreSQL/Redshift/Snowflake/Trino

  • Doubles single quotes: O'Reilly → O''Reilly
  • Escapes backslashes: path\file → path\\file
  • Example: client.escapeString("O'Reilly's path\\file") → O''Reilly''s path\\\\file

BigQuery

  • Backslash escapes single quotes: O'Reilly → O\'Reilly
  • Backslash escapes double quotes: Say "Hi" → Say \"Hi\"
  • Escapes backslashes: path\file → path\\file
  • Example: client.escapeString("O'Reilly's \"path\"") → O\'Reilly\'s \"path\"

ClickHouse

  • Doubles single quotes: O'Reilly → O''Reilly
  • Escapes backslashes: path\file → path\\file
  • Example: client.escapeString("O'Reilly's path\\file") → O''Reilly''s path\\\\file

Databricks

  • Spark SQL escaping rules (similar to PostgreSQL)
  • Doubles single quotes: O'Reilly → O''Reilly

Security Notes:

  • Always use escapeString() for user input in dynamic SQL
  • Prefer parameterized queries (values or queryParams) when available
  • Never concatenate raw user input into SQL queries
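
A safe pattern for dynamic SQL built from the documented methods only (a sketch):

const userInput = "O'Reilly";
const q = client.getStringQuoteChar();       // "'"
const safe = client.escapeString(userInput); // O''Reilly (PostgreSQL-style)
const sql = `SELECT * FROM users WHERE name = ${q}${safe}${q}`;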

String Concatenation

PostgreSQL/Redshift (|| operator):

concatString('"first"', "' '", '"last"')
→ ("first" || ' ' || "last")

Others (CONCAT function):

concatString('"first"', "' '", '"last"')
→ CONCAT("first", ' ', "last")
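
Combined with the field quote character, a portable full-name expression (a sketch):

const fq = client.getFieldQuoteChar();
const fullName = client.concatString(
  `${fq}first${fq}`,
  "' '", // a quoted literal space
  `${fq}last${fq}`,
);
// PostgreSQL/Redshift → ("first" || ' ' || "last")
// Others              → CONCAT("first", ' ', "last")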

Direct Import

import {
  BigquerySqlBuilder,
  // ClickhouseSqlBuilder - NOT exported
  DatabricksSqlBuilder,
  PostgresSqlBuilder,
  RedshiftSqlBuilder,
  SnowflakeSqlBuilder,
  TrinoSqlBuilder,
} from '@lightdash/warehouses';

const builder = new PostgresSqlBuilder();
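
The constructed builder exposes the same interface as factory-created ones (a sketch; constructor arguments, such as an optional week-start day, may vary by version):

builder.getFieldQuoteChar();        // '"'
builder.concatString("'a'", "'b'"); // ('a' || 'b')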

Builder-Specific Notes

BigQuery

  • Backtick field quoting
  • FLOAT64 for floating-point
  • Approximate percentiles

ClickHouse

  • Single quote escape character (not backslash)
  • median() and quantile() functions
  • NOT exported (use factory)

Databricks

  • Backtick field quoting
  • PERCENTILE function

PostgreSQL

  • AVG() casts to DOUBLE PRECISION
  • || operator for concatenation
  • Single quote escape character

Redshift

  • Inherits all PostgreSQL behavior
  • Different adapter type only

Snowflake

  • PERCENTILE_CONT (exact)
  • Standard CONCAT function

Trino

  • DOUBLE for floating-point
  • Approximate percentiles
  • Single quote escape character

Week Start Configuration

const builder = warehouseSqlBuilderFromType(
  SupportedDbtAdapter.SNOWFLAKE,
  WeekDay.MONDAY
);

builder.getStartOfWeek(); // 0 (Monday)
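
Per the interface, getStartOfWeek() may also return null or undefined when no week-start override is configured, so callers should guard (a sketch):

const start = builder.getStartOfWeek();
if (start !== null && start !== undefined) {
  // Apply the override, e.g. when generating week-truncation SQL
}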