tessl/maven-org-apache-flink--flink-table-calcite-bridge

Bridge module providing Calcite integration APIs for Apache Flink's Table API planner plugins

  • Workspace: tessl
  • Visibility: Public
  • Describes: mavenpkg:maven/org.apache.flink/flink-table-calcite-bridge@2.1.x

To install, run

npx @tessl/cli install tessl/maven-org-apache-flink--flink-table-calcite-bridge@2.1.0

docs/index.md

Apache Flink Table Calcite Bridge

Bridge module containing Calcite dependencies for writing planner plugins (e.g., SQL dialects) that interact with Calcite APIs. It exposes RelOptCluster, RelBuilder, and other Calcite components from Flink's planner context so that plugins can create RelNode instances, enabling custom SQL dialect development and advanced query planning extensions.

Package Information

  • Package Name: flink-table-calcite-bridge
  • Package Type: Maven
  • Language: Java
  • Installation:
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table-calcite-bridge</artifactId>
        <version>2.1.0</version>
    </dependency>

Core Imports

import org.apache.flink.table.calcite.bridge.CalciteContext;
import org.apache.flink.table.calcite.bridge.PlannerExternalQueryOperation;

Basic Usage

import org.apache.flink.table.calcite.bridge.CalciteContext;
import org.apache.flink.table.calcite.bridge.PlannerExternalQueryOperation;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.tools.RelBuilder;
import org.apache.calcite.plan.RelOptCluster;
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.catalog.Column;
import org.apache.flink.table.catalog.ResolvedSchema;
import org.apache.flink.table.operations.QueryOperationVisitor;

// CalciteContext is provided by Flink's table planner implementation
CalciteContext context = // ... obtained from planner context

// Access core Calcite components
RelOptCluster cluster = context.getCluster();
RelBuilder relBuilder = context.createRelBuilder();

// Build a relational expression programmatically
RelNode relNode = relBuilder
    .scan("MyTable")
    .filter(relBuilder.equals(relBuilder.field("status"), relBuilder.literal("active")))
    .project(relBuilder.field("id"), relBuilder.field("name"))
    .build();

// Create a resolved schema matching the projection (the column types here are illustrative)
ResolvedSchema schema = ResolvedSchema.of(
        Column.physical("id", DataTypes.INT()),
        Column.physical("name", DataTypes.STRING()));

// Wrap in PlannerExternalQueryOperation for Flink integration
PlannerExternalQueryOperation operation = 
    new PlannerExternalQueryOperation(relNode, schema);

// Use the operation in Flink's query operation framework
RelNode calciteTree = operation.getCalciteTree();
ResolvedSchema resolvedSchema = operation.getResolvedSchema();
String summary = operation.asSummaryString();

// Visit the operation using visitor pattern
QueryOperationVisitor<String> visitor = // ... implement visitor
String result = operation.accept(visitor);

Architecture

The module provides two key components that bridge Flink's Table API with Calcite's planning infrastructure:

  • CalciteContext: Interface providing access to Calcite's planning components (RelOptCluster, RelBuilder, type factory, catalog reader) through Flink's table planner
  • PlannerExternalQueryOperation: Wrapper that encapsulates Calcite RelNode instances with Flink's resolved schemas, enabling seamless integration with Flink's query operation model

This design enables plugin developers to leverage Calcite's full relational algebra capabilities while maintaining compatibility with Flink's distributed processing runtime.

Note: Both CalciteContext and PlannerExternalQueryOperation are marked with @Internal annotations, indicating they are intended for internal Flink use and may change without notice between versions.
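
As a sketch of how the pieces fit together, a hypothetical dialect plugin might receive the CalciteContext from the planner, build a RelNode, and return it wrapped in a PlannerExternalQueryOperation. The plugin class and method below are illustrative only, not part of this module:

import org.apache.calcite.rel.RelNode;
import org.apache.calcite.tools.RelBuilder;
import org.apache.flink.table.calcite.bridge.CalciteContext;
import org.apache.flink.table.calcite.bridge.PlannerExternalQueryOperation;
import org.apache.flink.table.catalog.ResolvedSchema;
import org.apache.flink.table.operations.QueryOperation;

public class MyDialectPlugin {

    // Hypothetical entry point: the planner supplies its CalciteContext, and the
    // plugin returns a QueryOperation that Flink's Table API can consume.
    public QueryOperation translate(CalciteContext context, ResolvedSchema outputSchema) {
        RelBuilder relBuilder = context.createRelBuilder();

        // Build whatever relational expression the dialect requires.
        RelNode plan = relBuilder
                .scan("MyTable")
                .build();

        // Wrap the Calcite plan so it plugs into Flink's query operation model.
        return new PlannerExternalQueryOperation(plan, outputSchema);
    }
}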

Capabilities

Calcite Context Access

Provides access to Calcite's core planning infrastructure through Flink's table planner, enabling creation of RelNode instances and access to optimization components.

public interface CalciteContext extends ParserFactory.Context {
    CalciteCatalogReader createCatalogReader(boolean lenientCaseSensitivity);
    RelOptCluster getCluster();
    FrameworkConfig createFrameworkConfig();
    RelDataTypeFactory getTypeFactory();
    RelBuilder createRelBuilder();
    TableConfig getTableConfig();
    ClassLoader getClassLoader();
    FunctionCatalog getFunctionCatalog();
    RelOptTable.ToRelContext createToRelContext();
    CatalogRegistry getCatalogRegistry(); // Inherited from ParserFactory.Context
}

CalciteCatalogReader createCatalogReader(boolean lenientCaseSensitivity)

Creates a Calcite catalog reader for metadata access during planning.

  • Parameters:
    • lenientCaseSensitivity (boolean): Whether to use lenient case sensitivity for identifiers
  • Returns: CalciteCatalogReader - Catalog reader instance for accessing table and function metadata
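
For example, the catalog reader can resolve a table by its fully qualified path. The catalog, database, and table names below are placeholders, and context is the CalciteContext obtained from the planner:

import java.util.Arrays;

import org.apache.calcite.plan.RelOptTable;
import org.apache.calcite.prepare.CalciteCatalogReader;

// Use strict (non-lenient) case sensitivity when resolving identifiers.
CalciteCatalogReader catalogReader = context.createCatalogReader(false);

// Look up table metadata by its fully qualified path.
RelOptTable table = catalogReader.getTable(
        Arrays.asList("my_catalog", "my_database", "my_table"));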

RelOptCluster getCluster()

Returns the optimization cluster containing the planner, cost model, and metadata repository.

  • Returns: RelOptCluster - The optimization cluster used for planning

FrameworkConfig createFrameworkConfig()

Creates Calcite framework configuration with Flink-specific settings.

  • Returns: FrameworkConfig - Configuration object for Calcite framework
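
The configuration can be passed to Calcite's own entry points; a minimal sketch creating a RelBuilder from it is shown below (in most cases context.createRelBuilder() is the more direct route):

import org.apache.calcite.tools.FrameworkConfig;
import org.apache.calcite.tools.RelBuilder;

// Feed the Flink-configured framework settings into Calcite's builder factory.
FrameworkConfig config = context.createFrameworkConfig();
RelBuilder builder = RelBuilder.create(config);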

RelDataTypeFactory getTypeFactory()

Returns Calcite's type factory for creating and managing data types.

  • Returns: RelDataTypeFactory - Type factory for creating Calcite data types
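
For example, the type factory can construct both simple SQL types and row types (the field names and precisions below are illustrative):

import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeFactory;
import org.apache.calcite.sql.type.SqlTypeName;

RelDataTypeFactory typeFactory = context.getTypeFactory();

// A simple scalar type.
RelDataType varcharType = typeFactory.createSqlType(SqlTypeName.VARCHAR, 255);

// A row (struct) type with two fields.
RelDataType rowType = typeFactory.builder()
        .add("id", SqlTypeName.INTEGER)
        .add("name", SqlTypeName.VARCHAR, 255)
        .build();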

RelBuilder createRelBuilder()

Creates a RelBuilder for programmatic construction of relational expressions.

  • Returns: RelBuilder - Builder for creating RelNode instances programmatically

TableConfig getTableConfig()

Returns Flink's table configuration.

  • Returns: TableConfig - Flink table environment configuration

ClassLoader getClassLoader()

Returns the class loader defined in the table environment.

  • Returns: ClassLoader - Class loader for loading user-defined classes

FunctionCatalog getFunctionCatalog()

Returns Flink's function catalog for accessing built-in and user-defined functions.

  • Returns: FunctionCatalog - Catalog containing available functions

RelOptTable.ToRelContext createToRelContext()

Creates a new instance of RelOptTable.ToRelContext provided by Flink's table planner. The ToRelContext is used to convert a table into a relational expression.

  • Returns: RelOptTable.ToRelContext - Context for converting tables to relational expressions
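
Combined with the catalog reader, the ToRelContext lets a resolved table be expanded into a RelNode scan. The table path below is a placeholder, and context is the planner-provided CalciteContext:

import java.util.Arrays;

import org.apache.calcite.plan.RelOptTable;
import org.apache.calcite.rel.RelNode;

// Resolve the table through the catalog reader, then convert it into a
// relational expression using the planner-provided ToRelContext.
RelOptTable table = context.createCatalogReader(false)
        .getTable(Arrays.asList("my_catalog", "my_database", "my_table"));
RelNode scan = table.toRel(context.createToRelContext());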

CatalogRegistry getCatalogRegistry()

Returns the CatalogRegistry defined in the TableEnvironment. This method is inherited from ParserFactory.Context.

  • Returns: CatalogRegistry - Registry for managing catalog instances in the table environment

Planner Query Operation Wrapper

Wrapper for Calcite RelNode instances with resolved schemas, enabling integration with Flink's query operation model.

public class PlannerExternalQueryOperation implements QueryOperation {
    public PlannerExternalQueryOperation(RelNode relNode, ResolvedSchema resolvedSchema);
    public RelNode getCalciteTree();
    public ResolvedSchema getResolvedSchema();
    public List<QueryOperation> getChildren();
    public <T> T accept(QueryOperationVisitor<T> visitor);
    public String asSummaryString();
    public String asSerializableString(); // Inherited default method
    public String asSerializableString(SqlFactory sqlFactory); // Inherited default method
}

Constructor PlannerExternalQueryOperation(RelNode relNode, ResolvedSchema resolvedSchema)

Creates a wrapper for a valid logical plan and resolved schema generated by the planner. It is mainly used by pluggable dialects that generate a Calcite RelNode during the planning phase.

  • Parameters:
    • relNode (RelNode): The Calcite relational expression representing the logical plan
    • resolvedSchema (ResolvedSchema): The resolved schema describing the operation's output structure

RelNode getCalciteTree()

Returns the wrapped Calcite RelNode representing the logical plan.

  • Returns: RelNode - The Calcite relational expression

ResolvedSchema getResolvedSchema()

Returns the resolved schema describing the operation's output structure.

  • Returns: ResolvedSchema - Schema with column names and types

List<QueryOperation> getChildren()

Returns child query operations (always empty for this wrapper).

  • Returns: List<QueryOperation> - Empty list as this is a leaf operation

<T> T accept(QueryOperationVisitor<T> visitor)

Accepts a visitor for traversing the query operation tree.

  • Parameters:
    • visitor (QueryOperationVisitor<T>): Visitor to accept
  • Returns: T - Result of visitor's visit method
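
One convenient way to supply a visitor is to extend Flink's QueryOperationDefaultVisitor, which funnels every operation, including this wrapper, through a single default method. A minimal sketch, assuming that utility class is on the classpath and operation is the PlannerExternalQueryOperation from the Basic Usage example:

import org.apache.flink.table.operations.QueryOperation;
import org.apache.flink.table.operations.QueryOperationVisitor;
import org.apache.flink.table.operations.utils.QueryOperationDefaultVisitor;

QueryOperationVisitor<String> visitor = new QueryOperationDefaultVisitor<String>() {
    @Override
    public String defaultMethod(QueryOperation other) {
        // PlannerExternalQueryOperation is routed here via the catch-all visit method.
        return other.asSummaryString();
    }
};

String summary = operation.accept(visitor);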

String asSummaryString()

Returns a string representation for debugging and logging. The implementation uses OperationUtils.formatWithChildren with the operation name "PlannerCalciteQueryOperation".

  • Returns: String - Summary string representation for debugging

String asSerializableString()

Returns a serializable string representation of the operation. This is a default method inherited from QueryOperation.

  • Returns: String - Serializable string representation

String asSerializableString(SqlFactory sqlFactory)

Returns a serializable string representation of the operation using the provided SQL factory. This is a default method inherited from QueryOperation.

  • Parameters:
    • sqlFactory (SqlFactory): Factory for SQL string generation
  • Returns: String - Serializable string representation using the provided factory

Types

Key Calcite Types

// Calcite core types used in the API
interface RelNode {
    // Represents a relational expression in Calcite
}

interface RelOptCluster {
    // Container for planning environment
}

interface RelDataTypeFactory {
    // Factory for creating Calcite data types
}

interface CalciteCatalogReader {
    // Reader for accessing catalog metadata
}

class RelBuilder {
    // Builder for programmatic RelNode construction
}

interface FrameworkConfig {
    // Configuration for Calcite framework
}

Key Flink Types

// Flink types used in the API
interface QueryOperation {
    // Base interface for query operations in Flink
}

class ResolvedSchema {
    // Schema with resolved column names and types
}

interface QueryOperationVisitor<T> {
    // Visitor pattern for traversing query operations
}

class TableConfig {
    // Configuration for Flink table environment
}

class FunctionCatalog {
    // Catalog of available functions in Flink
}

interface ParserFactory.Context {
    // Context interface for parser factory
}

class CatalogRegistry {
    // Registry for managing catalog instances
}

interface SqlFactory {
    // Factory for SQL string generation
}