Trino Client

Trino distributed SQL engine client with BasicAuth.

Quick Start

import { TrinoWarehouseClient } from '@lightdash/warehouses';

const client = new TrinoWarehouseClient({
  type: 'trino',
  host: 'trino.example.com',
  port: 8080,
  user: 'analyst',
  password: 'mypassword',
  dbname: 'hive', // Catalog name
  schema: 'default',
  http_scheme: 'https',
});

await client.test();

Key Features

  • Distributed SQL: Query across multiple data sources
  • Three-Part Naming: catalog.schema.table
  • BasicAuth: Simple HTTP authentication
  • Timezone Session Config: Via HTTP headers
  • Column Normalization: Lowercase for consistency

Common Patterns

Query with Three-Part Naming

await client.streamQuery(
  `SELECT
     customer_id,
     SUM(order_amount) as total
   FROM hive.default.orders
   WHERE order_date >= DATE '2023-01-01'
   GROUP BY customer_id`,
  (data) => {
    // Column names are lowercase
    data.rows.forEach(row => {
      console.log(`Customer ${row.customer_id}: $${row.total}`);
    });
  },
  { timezone: 'America/New_York', tags: {} }
);

Query Across Catalogs

await client.streamQuery(
  `SELECT h.user_id, p.email
   FROM hive.sales.users h
   JOIN postgresql.public.profiles p ON h.user_id = p.id`,
  (data) => { /* process */ },
  { tags: {} }
);

SQL Builder

import { TrinoSqlBuilder } from '@lightdash/warehouses';

const builder = new TrinoSqlBuilder();
builder.getFieldQuoteChar(); // '"'
builder.getFloatingType(); // 'DOUBLE'
builder.escapeString("O'Reilly"); // "O''Reilly"

Percentile SQL: Uses APPROX_PERCENTILE, which returns an approximate rather than exact result (see the sketch below)

// Median: APPROX_PERCENTILE(col, 0.5)
// 95th: APPROX_PERCENTILE(col, 0.95)

Field Type Enum

import { TrinoTypes } from '@lightdash/warehouses';

enum TrinoTypes {
  BOOLEAN, TINYINT, SMALLINT, INTEGER, BIGINT,
  REAL, DOUBLE, DECIMAL, VARCHAR, CHAR, VARBINARY, JSON,
  DATE, TIME, TIME_TZ, TIMESTAMP, TIMESTAMP_TZ,
  ARRAY, MAP, ROW, IPADDRESS, UUID,
}
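
The enum can be used to classify column types when processing query results. A minimal sketch, assuming only the member names listed above; the grouping itself is illustrative and not part of the library:

import { TrinoTypes } from '@lightdash/warehouses';

// Illustrative grouping: Trino types treated as numeric for this example
const numericTrinoTypes = [
  TrinoTypes.TINYINT,
  TrinoTypes.SMALLINT,
  TrinoTypes.INTEGER,
  TrinoTypes.BIGINT,
  TrinoTypes.REAL,
  TrinoTypes.DOUBLE,
  TrinoTypes.DECIMAL,
];

const isNumericType = (type: TrinoTypes) => numericTrinoTypes.includes(type);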

Architecture

  • Catalogs: Data sources (hive, postgresql, mysql, etc.), as explored in the sketch after this list
  • Schemas: Namespaces within catalogs
  • Tables: Tables/views within schemas
  • Coordinator: Receives queries (connection endpoint)
  • Workers: Execute query fragments
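
The catalog/schema/table layout can be explored with ordinary queries. A minimal sketch using SHOW CATALOGS, assuming its result streams like any other query; the exact result column name may differ (shown here as row.catalog after lowercasing):

await client.streamQuery(
  'SHOW CATALOGS',
  (data) => {
    // Each row names one configured catalog (e.g. hive, postgresql, system)
    data.rows.forEach(row => console.log(row.catalog));
  },
  { tags: {} }
);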

Timezone Configuration

Set via X-Trino-Session HTTP header:

{ timezone: 'America/New_York' }
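
In practice the timezone is simply passed in the streamQuery options, and the client forwards it to Trino as session configuration. A minimal sketch; the orders table and its columns are illustrative:

await client.streamQuery(
  `SELECT order_id, created_at FROM hive.default.orders LIMIT 10`,
  (data) => {
    // Timestamps are interpreted in the session timezone requested above
    data.rows.forEach(row => console.log(row.order_id, row.created_at));
  },
  { timezone: 'America/New_York', tags: {} }
);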

Column Names

Important: All column names are normalized to lowercase for Snowflake compatibility; see the example below.
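
A minimal sketch of what this means in practice; the table and alias are illustrative:

await client.streamQuery(
  `SELECT customer_id AS "CustomerId" FROM hive.default.orders LIMIT 1`,
  (data) => {
    // The mixed-case alias comes back lowercased: row.customerid, not row.CustomerId
    console.log(data.rows[0].customerid);
  },
  { tags: {} }
);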

Metadata Operations

  • getAllTables(): Filters out information_schema and system schemas (see the sketch after this list)
  • Uses information_schema.columns for catalog queries
  • Three-part identifiers: catalog.schema.table
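
A minimal sketch of listing tables with three-part identifiers, assuming each entry exposes database (catalog), schema, and table fields; see core-types.md for the exact shape:

const tables = await client.getAllTables();

tables.forEach(({ database, schema, table }) => {
  // database corresponds to the Trino catalog
  console.log(`${database}.${schema}.${table}`);
});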

Notes

  • HTTP protocol (not JDBC)
  • Approximate aggregations for performance
  • System schemas automatically filtered
  • ANSI SQL compliant