Trino distributed SQL query engine client with BasicAuth, timezone session configuration, catalog/schema/table three-part naming, and column name normalization for Snowflake compatibility.
ALL column names in Trino query results are automatically converted to lowercase.
This is a CRITICAL behavior that affects ALL code using Trino query results:
// ❌ WRONG - Will fail with uppercase column references
const result = await client.runQuery('SELECT USER_ID, EMAIL FROM users');
console.log(result.rows[0].USER_ID); // undefined!
console.log(result.rows[0].EMAIL); // undefined!
// ✅ CORRECT - Must use lowercase column names
const result = await client.runQuery('SELECT USER_ID, EMAIL FROM users');
console.log(result.rows[0].user_id); // Works!
console.log(result.rows[0].email); // Works!
Why This Happens:
The client's normalizeColumnName() function lowercases every column name while processing query results. Sources such as Snowflake typically return uppercase identifiers (and QUOTED_IDENTIFIERS_IGNORE_CASE = FALSE can be set on that side), but results returned through this client are always lowercase.
Impact on Your Code:
Every uppercase column (e.g., USER_ID) appears in its lowercase form (user_id) in query results, so reference columns in lowercase whenever you process Trino results.
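When column references come from configuration or user input rather than literals, a small lookup helper keeps the lowercase requirement in one place. A minimal sketch, assuming only that result rows are plain objects keyed by lowercase column names (getColumn is a hypothetical helper, not part of the package):
// Hypothetical helper: resolve a column no matter how the caller cased its name
const getColumn = (row: Record<string, unknown>, column: string) =>
  row[column.toLowerCase()];
const result = await client.runQuery('SELECT USER_ID, EMAIL FROM users');
console.log(getColumn(result.rows[0], 'USER_ID')); // same value as result.rows[0].user_id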
Trino warehouse client implementation with HTTP protocol and BasicAuth.
/**
* Trino warehouse client with BasicAuth and timezone support
*
* IMPORTANT: Column names are automatically normalized to lowercase during query result processing.
* This normalization happens within the TrinoWarehouseClient when streaming query results.
* When using Trino with Snowflake connectors (or other data sources that return uppercase column names),
* all column names in query results will be converted to lowercase (e.g., USER_ID -> user_id).
* Applications MUST reference columns using lowercase names when accessing Trino query results.
*/
class TrinoWarehouseClient implements WarehouseClient {
/** Connection options for Trino connector (from 'trino-client' package) */
connectionOptions: ConnectionOptions;
/**
* Initialize Trino client with credentials
* Uses BasicAuth with username/password over HTTP
* @param credentials - Trino credentials with host, port, catalog, schema, and auth
*/
constructor(credentials: CreateTrinoCredentials);
/**
* Test the database connection
* Executes a simple query (SELECT 1) to verify connectivity and credentials
* @returns Promise that resolves if connection is successful
* @throws {WarehouseConnectionError} If connection fails
* @throws {WarehouseQueryError} If test query fails
*/
test(): Promise<void>;
/**
* Stream query results with timezone session header
* Column names are normalized to lowercase
* @param sql - SQL query string to execute
* @param streamCallback - Callback invoked for each batch of results
* @param options - Query execution options with tags and timezone
* @returns Promise that resolves when streaming completes
* @note Trino does NOT support positional parameters (values) or named query parameters (queryParams)
*/
streamQuery(
sql: string,
streamCallback: (data: WarehouseResults) => void,
options: {
tags?: Record<string, string>;
timezone?: string;
}
): Promise<void>;
/**
* Execute query and return all results
* Column names are normalized to lowercase
* Inherited from WarehouseBaseClient
* @param sql - SQL query string to execute
* @param tags - Optional tags for query tracking
* @param timezone - Optional timezone for query execution
* @returns Promise resolving to query results with fields and rows
* @deprecated Use streamQuery() instead to avoid loading all results into memory
* @note Trino does NOT support positional parameters (values) or named query parameters (queryParams)
*/
runQuery(
sql: string,
tags?: Record<string, string>,
timezone?: string
): Promise<WarehouseResults>;
/**
* Get catalog metadata for specific tables
* Queries information_schema.columns with catalog/schema/table filtering
* @param requests - Array of table identifiers (database=catalog, schema, table)
* @returns Promise resolving to nested catalog structure
*/
getCatalog(
requests: Array<{ database: string; schema: string; table: string }>
): Promise<WarehouseCatalog>;
/**
* Get all BASE TABLE tables across all schemas in the catalog
* Filters out system schemas (information_schema, pg_catalog)
* Uses catalog configured in credentials
* @returns Promise resolving to array of table metadata (database, schema, table)
* @note WarehouseTables is an array of { database: string; schema: string; table: string }[]
*/
getAllTables(): Promise<WarehouseTables>;
/**
* Get tables in specific catalog and schema
* @param schema - Optional schema name (uses credentials schema if omitted)
* @param tags - Optional tags for query tracking
* @returns Promise resolving to catalog with tables in schema
*/
getTables(
schema?: string,
tags?: Record<string, string>
): Promise<WarehouseCatalog>;
/**
* Get field metadata for specific table
* Uses three-part naming: catalog.schema.table
* @param tableName - Name of the table
* @param schema - Optional schema name (uses credentials schema if omitted)
* @param database - Optional catalog name (uses credentials catalog if omitted)
* @param tags - Optional tags for query tracking
* @returns Promise resolving to catalog with table's column definitions
*/
getFields(
tableName: string,
schema?: string,
database?: string,
tags?: Record<string, string>
): Promise<WarehouseCatalog>;
/**
* Parse warehouse errors into WarehouseQueryError
* Uses inherited base implementation from WarehouseBaseClient
* @param error - Error object from Trino
* @returns WarehouseQueryError with enhanced error message
*/
parseError(error: Error): Error;
/**
* Execute query asynchronously and optionally stream results
* Inherited from WarehouseBaseClient
* @param args - Async query execution arguments
* @param resultsStreamCallback - Callback for streaming results
* @returns Promise resolving to async query execution details
* @note Trino implementation uses streamQuery() internally
*/
executeAsyncQuery(
args: WarehouseExecuteAsyncQueryArgs,
resultsStreamCallback: (rows: WarehouseResults['rows'], fields: WarehouseResults['fields']) => void
): Promise<WarehouseExecuteAsyncQuery>;
}
Usage Examples:
import { TrinoWarehouseClient } from '@lightdash/warehouses';
// Create Trino client with BasicAuth
const client = new TrinoWarehouseClient({
type: 'trino',
host: 'trino.example.com',
port: 8080,
user: 'analyst',
password: 'secure-password',
dbname: 'hive',
schema: 'default',
http_scheme: 'https',
startOfWeek: null,
});
// Test connection
await client.test();
// Stream query with timezone
await client.streamQuery(
`
SELECT
customer_id,
SUM(order_amount) as total_spent,
COUNT(*) as order_count
FROM hive.default.orders
WHERE order_date >= DATE '2023-01-01'
GROUP BY customer_id
ORDER BY total_spent DESC
LIMIT 100
`,
(data) => {
console.log('Received batch:', data.rows.length, 'rows');
// Column names are lowercase
data.rows.forEach(row => {
console.log(`Customer ${row.customer_id}: $${row.total_spent} (${row.order_count} orders)`);
});
},
{
timezone: 'America/New_York',
tags: { source: 'lightdash', report: 'top_customers' },
}
);
// Query with three-part naming (catalog.schema.table)
const results = await client.runQuery(
`
SELECT COUNT(*) as total
FROM hive.sales.transactions
WHERE transaction_type = 'purchase'
`,
{ source: 'analytics' }
);
console.log('Total purchases:', results.rows[0].total);
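// Trino does NOT support positional (values) or named (queryParams) parameters,
// so literal values must be inlined into the SQL string. A sketch using the
// inherited escapeString() to double single quotes (customer_name is a hypothetical column)
const customerName = "O'Reilly";
const filtered = await client.runQuery(
  `
    SELECT COUNT(*) as total
    FROM hive.sales.transactions
    WHERE customer_name = '${client.escapeString(customerName)}'
  `
);
console.log('Matching purchases:', filtered.rows[0].total);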
// Get all tables across catalogs
const tables = await client.getAllTables();
tables.forEach(table => {
console.log(`${table.database}.${table.schema}.${table.table}`);
});
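// Get catalog metadata for specific tables (table names here are illustrative)
const tableMetadata = await client.getCatalog([
  { database: 'hive', schema: 'default', table: 'customers' },
  { database: 'hive', schema: 'sales', table: 'transactions' },
]);
// Nested structure: catalog -> schema -> table -> columns
console.log(tableMetadata['hive']['default']['customers']);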
// Get tables in specific catalog and schema
const hiveTables = await client.getTables('default'); // Uses 'hive' catalog from credentials
console.log('Hive tables:', hiveTables['hive']['default']);
// Get fields for specific table
const catalog = await client.getFields('customers', 'default', 'hive');
const columns = catalog['hive']['default']['customers'];
Object.entries(columns).forEach(([col, type]) => {
console.log(`${col}: ${type}`);
});
SQL builder for Trino-specific syntax.
/**
* Trino SQL builder with double-quote quoting and APPROX_PERCENTILE
*/
class TrinoSqlBuilder implements WarehouseSqlBuilder {
/** Warehouse type identifier */
readonly type = WarehouseTypes.TRINO;
/**
* Get DBT adapter type for Trino
* @returns SupportedDbtAdapter.TRINO
*/
getAdapterType(): SupportedDbtAdapter;
/**
* Get escape string quote character for Trino
* @returns Single quote character "'"
*/
getEscapeStringQuoteChar(): string;
/**
* Get field quote character for Trino
* @returns Double quote character '"'
*/
getFieldQuoteChar(): string;
/**
* Get floating-point type name for Trino
* @returns 'DOUBLE'
*/
getFloatingType(): string;
/**
* Generate SQL for metric aggregation
* Uses APPROX_PERCENTILE for percentile/median metrics (approximate)
* @param sql - Column SQL expression
* @param metric - Metric definition
* @returns SQL string for the metric aggregation
*/
getMetricSql(sql: string, metric: Metric): string;
/**
* Escape string value for Trino
* Uses Trino-specific escaping rules
* @param value - Raw string value
* @returns Escaped string safe for Trino query
*/
escapeString(value: string): string;
}
Usage Examples:
import { TrinoSqlBuilder } from '@lightdash/warehouses';
const builder = new TrinoSqlBuilder();
// Get quote characters
const fieldQuote = builder.getFieldQuoteChar(); // '"'
const fieldName = `${fieldQuote}order_id${fieldQuote}`; // "order_id"
// Get floating-point type
const floatType = builder.getFloatingType(); // 'DOUBLE'
// Escape string with single quotes
const escaped = builder.escapeString("O'Reilly");
// Result: 'O''Reilly'
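// Combine identifier quoting and string escaping to build a filter
// (publisher is a hypothetical column name)
const quotedColumn = `${fieldQuote}publisher${fieldQuote}`; // "publisher"
const whereClause = `WHERE ${quotedColumn} = '${builder.escapeString("O'Reilly")}'`;
// Result: WHERE "publisher" = 'O''Reilly'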
// Generate metric SQL (uses approximate percentile)
const medianSql = builder.getMetricSql('price', {
type: MetricType.MEDIAN,
sql: 'price',
});
// Result: 'APPROX_PERCENTILE(price, 0.5)'
const percentileSql = builder.getMetricSql('response_time', {
type: MetricType.PERCENTILE,
sql: 'response_time',
percentile: 95,
});
// Result: 'APPROX_PERCENTILE(response_time, 0.95)'
// Standard aggregations
const sumSql = builder.getMetricSql('revenue', {
type: MetricType.SUM,
sql: 'revenue',
});
// Result: 'SUM(revenue)'
Trino data type constants for field type checking.
// Import from @lightdash/warehouses
import { TrinoTypes } from '@lightdash/warehouses';
/**
* Trino data type enum
* Exported from @lightdash/warehouses
* Covers standard Trino types including complex types
*/
enum TrinoTypes {
// Boolean
BOOLEAN = 'boolean',
// Integer types
TINYINT = 'tinyint',
SMALLINT = 'smallint',
INTEGER = 'integer',
BIGINT = 'bigint',
// Floating-point types
REAL = 'real',
DOUBLE = 'double',
DECIMAL = 'decimal',
// String types
VARCHAR = 'varchar',
CHAR = 'char',
VARBINARY = 'varbinary',
// JSON
JSON = 'json',
// Date/Time types
DATE = 'date',
TIME = 'time',
TIME_TZ = 'time with time zone',
TIMESTAMP = 'timestamp',
TIMESTAMP_TZ = 'timestamp with time zone',
INTERVAL_YEAR_MONTH = 'interval year to month',
INTERVAL_DAY_TIME = 'interval day to second',
// Complex types
ARRAY = 'array',
MAP = 'map',
ROW = 'row',
// Special types
IPADDRESS = 'ipaddress',
UUID = 'uuid',
}
The following methods are inherited from WarehouseBaseClient and WarehouseSqlBuilder and are available on all Trino client instances:
/**
* Execute query and return all results in memory
* @deprecated Use streamQuery() to avoid loading all results into memory
*/
runQuery(
sql: string,
tags?: Record<string, string>,
timezone?: string,
values?: AnyType[],
queryParams?: Record<string, AnyType>
): Promise<WarehouseResults>;
/** Get the warehouse adapter type */
getAdapterType(): SupportedDbtAdapter;
/** Get string quote character (single quote) */
getStringQuoteChar(): string;
/** Get escape character for string quotes (backslash) */
getEscapeStringQuoteChar(): string;
/** Get field/identifier quote character (double quote for Trino) */
getFieldQuoteChar(): string;
/** Get floating-point type name (DOUBLE for Trino) */
getFloatingType(): string;
/**
* Generate SQL for metric aggregation
* @param sql - Column SQL expression
* @param metric - Metric definition with aggregation type
* @returns SQL string for the metric aggregation
*/
getMetricSql(sql: string, metric: Metric): string;
/** Concatenate strings using warehouse-specific SQL */
concatString(...args: string[]): string;
/** Escape string for SQL safety */
escapeString(value: string): string;
/** Get configured week start day */
getStartOfWeek(): WeekDay | null | undefined;
/** Parse catalog rows into nested structure */
parseWarehouseCatalog(
rows: Record<string, AnyType>[],
mapFieldType: (type: string) => DimensionType
): WarehouseCatalog;
/**
* Trino warehouse credentials
*/
interface CreateTrinoCredentials {
/** Warehouse type discriminator */
type: 'trino';
/** Trino coordinator hostname */
host: string;
/** Trino coordinator port (default: 8080) */
port: number;
/** Username for BasicAuth */
user: string;
/** Password for BasicAuth */
password: string;
/** Catalog name (e.g., 'hive', 'postgresql', 'mysql') */
dbname: string;
/** Schema name */
schema: string;
/** HTTP scheme: 'http' or 'https' (default: 'https') */
http_scheme: string;
/** Optional source identifier for query tracking */
source?: string;
/** Week start day configuration (optional) */
startOfWeek?: WeekDay | null;
/** Require user to provide their own credentials (optional, default: false) */
requireUserCredentials?: boolean;
}
/**
* Trino connection options for connector
* From 'trino-client' package
*/
interface ConnectionOptions {
server: string;
catalog: string;
schema: string;
/** BasicAuth instance from trino-client package (not a plain object) */
auth: BasicAuth;
ssl?: boolean;
[key: string]: any;
}
/**
* BasicAuth class from trino-client package
*/
class BasicAuth {
constructor(username: string, password: string);
}
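// Sketch of how CreateTrinoCredentials could map onto the trino-client ConnectionOptions
// (illustrative only; the actual mapping is internal to TrinoWarehouseClient)
import { BasicAuth } from 'trino-client';
const toConnectionOptions = (credentials: CreateTrinoCredentials): ConnectionOptions => ({
  server: `${credentials.http_scheme}://${credentials.host}:${credentials.port}`,
  catalog: credentials.dbname,
  schema: credentials.schema,
  auth: new BasicAuth(credentials.user, credentials.password),
});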
/**
* Warehouse query results structure
* Used by streamQuery() and runQuery() methods
*/
interface WarehouseResults {
/** Field metadata mapping column names to their dimension types */
fields: Record<string, { type: DimensionType }>;
/** Array of result rows, each row is an object with column names as keys */
rows: Array<Record<string, AnyType>>;
}
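// Illustrative WarehouseResults value showing the lowercase column keys described above
// (assumes DimensionType.NUMBER and DimensionType.STRING members from @lightdash/common)
const exampleResults: WarehouseResults = {
  fields: {
    user_id: { type: DimensionType.NUMBER },
    email: { type: DimensionType.STRING },
  },
  rows: [{ user_id: 1, email: 'ada@example.com' }],
};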
/**
* Warehouse tables array
* Used by getAllTables() method
*/
type WarehouseTables = Array<{
/** Catalog name (database) */
database: string;
/** Schema name */
schema: string;
/** Table name */
table: string;
}>;
USER_ID becomes user_id in query results, so applications must reference columns in lowercase when processing Trino results.
Trino (formerly PrestoSQL) is a distributed SQL query engine that can query data from multiple sources, each exposed as a catalog (for example Hive, PostgreSQL, or MySQL).
The client connects to the Trino coordinator via HTTP and uses BasicAuth for authentication. Queries can span multiple catalogs using three-part naming (catalog.schema.table).
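For example, a single query can join tables from two catalogs. A minimal sketch assuming both a 'hive' and a 'postgresql' catalog are configured on the coordinator (table and column names are illustrative):
const crossCatalog = await client.runQuery(
  `
    SELECT o.customer_id, c.segment, SUM(o.order_amount) as total_spent
    FROM hive.default.orders o
    JOIN postgresql.public.customers c ON c.id = o.customer_id
    GROUP BY o.customer_id, c.segment
  `
);
// Column names in the results are lowercase: customer_id, segment, total_spent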