Flink Table Planner

The Flink Table Planner connects the Table/SQL API with the runtime, translating and optimizing table programs into Flink pipelines. It is the bridge between Flink's declarative Table/SQL API and the execution engine, using Apache Calcite for query planning, optimization, and code generation in both streaming and batch workloads.

Package Information

  • Package Name: org.apache.flink:flink-table-planner_2.11
  • Package Type: maven
  • Language: Scala/Java
  • Version: 1.14.6
  • Installation: Add dependency to your Maven pom.xml:
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-planner_2.11</artifactId>
    <version>1.14.6</version>
</dependency>

Core Imports

// Scala
import org.apache.flink.table.planner.delegation.{DefaultPlannerFactory, DefaultParserFactory}
import org.apache.flink.table.planner.calcite.CalciteConfig
import org.apache.flink.table.factories.{PlannerFactory, ParserFactory}

// Java
import org.apache.flink.table.planner.delegation.DefaultPlannerFactory;
import org.apache.flink.table.planner.delegation.DefaultParserFactory;
import org.apache.flink.table.planner.calcite.CalciteConfig;

Basic Usage

import org.apache.flink.table.factories.PlannerFactory;
import org.apache.flink.table.planner.delegation.DefaultPlannerFactory;
import org.apache.flink.table.planner.calcite.CalciteConfig;
import org.apache.flink.table.api.*;

// Create planner factory (normally discovered via SPI rather than instantiated directly)
PlannerFactory plannerFactory = new DefaultPlannerFactory();

// Create a custom Calcite configuration
// (customParserConfig is a user-supplied org.apache.calcite.sql.parser.SqlParser.Config)
CalciteConfig calciteConfig = CalciteConfig.createBuilder()
    .replaceSqlParserConfig(customParserConfig)
    .build();

// Use with TableEnvironment (planner creation is typically handled automatically)
TableEnvironment tableEnv = TableEnvironment.create(
    EnvironmentSettings.newInstance()
        .useBlinkPlanner()
        .build()
);
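
Once the TableEnvironment is created, every SQL statement submitted through it is parsed, optimized, and translated by this planner. A minimal sketch of that flow (the table definition and query below are hypothetical examples):

// Hypothetical example: register a datagen-backed table and run an aggregation.
// Parsing, optimization, and translation to a Flink pipeline happen inside the planner.
tableEnv.executeSql(
    "CREATE TABLE Orders (id BIGINT, amount DOUBLE) WITH ('connector' = 'datagen')");
tableEnv.executeSql("SELECT id, SUM(amount) AS total FROM Orders GROUP BY id").print();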

Architecture

The flink-table-planner module consists of several key architectural components:

  • Factory Layer: Service provider implementations for creating planners, parsers, and executors
  • Planning Layer: Core planner implementations for streaming and batch processing modes
  • Configuration Layer: Calcite configuration and planner configuration management
  • Connector Layer: Integration interfaces and utilities for table sources and sinks
  • Execution Layer: ExecNode hierarchy and transformation translators
  • Type System: Bridging between Flink and Calcite type systems
  • Expression System: RexNode expressions and type inference utilities

Capabilities

Factory Classes and Service Providers

Entry points for creating planner components through the Service Provider Interface (SPI).

public final class DefaultPlannerFactory implements PlannerFactory {
    public String factoryIdentifier();
    public Planner create(Context context);
}

public class DefaultParserFactory implements ParserFactory {
    public String factoryIdentifier();
    public Parser create(Context context);
}

public final class DefaultExecutorFactory implements ExecutorFactory {
    public Executor create(Context context);
}
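
These factories are normally discovered through Flink's factory SPI rather than instantiated directly. The sketch below shows how such a lookup could be performed manually with FactoryUtil; the identifier "default" is an assumption about what factoryIdentifier() returns:

import org.apache.flink.table.factories.FactoryUtil;
import org.apache.flink.table.factories.PlannerFactory;

// Look up the planner factory registered in META-INF/services
// ("default" is assumed to be the identifier reported by factoryIdentifier()).
PlannerFactory plannerFactory = FactoryUtil.discoverFactory(
    Thread.currentThread().getContextClassLoader(),
    PlannerFactory.class,
    "default");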

Core Planning Components

Core planner implementations for streaming and batch processing modes with SQL parsing capabilities.

abstract class PlannerBase extends Planner {
  def getTableEnvironment: TableEnvironment
  def getFlinkRelBuilder: FlinkRelBuilder
  def getTypeFactory: FlinkTypeFactory
}

class StreamPlanner extends PlannerBase
class BatchPlanner extends PlannerBase

public class ParserImpl implements Parser {
    public List<Operation> parse(String statement);
    public UnresolvedIdentifier parseIdentifier(String identifier);
    public ResolvedExpression parseSqlExpression(String sqlExpression, RowType inputRowType, LogicalType outputType);
}
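
A parser obtained from the planner turns SQL text into Operation instances that the planner subsequently optimizes. A minimal sketch, assuming a Parser reference is already available (for example from the surrounding planner):

import java.util.List;
import org.apache.flink.table.catalog.UnresolvedIdentifier;
import org.apache.flink.table.delegation.Parser;
import org.apache.flink.table.operations.Operation;

// Parse a statement into logical operations: queries yield query operations,
// DDL statements yield catalog operations, and so on.
List<Operation> operations = parser.parse("SELECT id, amount FROM Orders");
UnresolvedIdentifier identifier = parser.parseIdentifier("my_catalog.my_db.Orders");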

Configuration and Builders

Configuration classes for customizing Calcite behavior and planner settings.

trait CalciteConfig {
  def getSqlParserConfig: Option[SqlParser.Config]
  def getSqlOperatorTable: Option[SqlOperatorTable]
  def getBatchProgram: Option[FlinkChainedProgram[BatchOptimizeContext]]
  def getStreamProgram: Option[FlinkChainedProgram[StreamOptimizeContext]]
}

object CalciteConfig {
  def createBuilder(): CalciteConfigBuilder
}

class CalciteConfigBuilder {
  def replaceSqlParserConfig(sqlParserConfig: SqlParser.Config): CalciteConfigBuilder
  def replaceSqlOperatorTable(sqlOperatorTable: SqlOperatorTable): CalciteConfigBuilder
  def addSqlOperatorTable(sqlOperatorTable: SqlOperatorTable): CalciteConfigBuilder
  def replaceBatchProgram(program: FlinkChainedProgram[BatchOptimizeContext]): CalciteConfigBuilder
  def replaceStreamProgram(program: FlinkChainedProgram[StreamOptimizeContext]): CalciteConfigBuilder
  def build(): CalciteConfig
}
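
A CalciteConfig is typically built once and handed to the table environment's configuration. The sketch below builds one with a case-insensitive Calcite parser configuration and an additional operator table; applying it through TableConfig#setPlannerConfig is an assumption about how the planner picks it up:

import org.apache.calcite.sql.fun.SqlStdOperatorTable;
import org.apache.calcite.sql.parser.SqlParser;
import org.apache.flink.table.planner.calcite.CalciteConfig;

// Build a planner configuration with a custom parser config and an extra operator table.
SqlParser.Config parserConfig = SqlParser.configBuilder()
    .setCaseSensitive(false)
    .build();

CalciteConfig calciteConfig = CalciteConfig.createBuilder()
    .replaceSqlParserConfig(parserConfig)
    .addSqlOperatorTable(SqlStdOperatorTable.instance())
    .build();

// Assumption: the config is picked up via TableConfig#setPlannerConfig.
tableEnv.getConfig().setPlannerConfig(calciteConfig);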

Connector Integration

Interfaces and utilities for integrating custom table sources and sinks with the planner.

public interface TransformationScanProvider extends ScanTableSource.ScanRuntimeProvider {
    Transformation<RowData> createTransformation(Context context);
}

public interface TransformationSinkProvider extends DynamicTableSink.SinkRuntimeProvider {
    Transformation<?> createTransformation(Context context);
}

public final class DynamicSourceUtils {
    public static RelNode convertDataStreamToRel(StreamTableEnvironment tableEnv, DataStream<RowData> dataStream, List<String> fieldNames);
    public static RelNode convertSourceToRel(FlinkOptimizeContext optimizeContext, RelOptTable relOptTable, DynamicTableSource tableSource, FlinkStatistic statistic);
}

Execution Nodes

ExecNode hierarchy and translator interfaces for query execution plan generation.

public interface ExecNode<T> extends ExecNodeTranslator<T> {
    int getId();
    String getDescription();
    LogicalType getOutputType();
    List<InputProperty> getInputProperties();
    List<ExecEdge> getInputEdges();
    void setInputEdges(List<ExecEdge> inputEdges);
    void replaceInputEdge(int index, ExecEdge newInputEdge);
    void accept(ExecNodeVisitor visitor);
}

public interface StreamExecNode<T> extends ExecNode<T> {}

public interface BatchExecNode<T> extends ExecNode<T> {}
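
Because every node exposes its input edges, an ExecNode graph can be walked generically. A minimal sketch (the class name is hypothetical) that counts the nodes reachable from a root, assuming ExecEdge#getSource returns the upstream node:

import org.apache.flink.table.planner.plan.nodes.exec.ExecEdge;
import org.apache.flink.table.planner.plan.nodes.exec.ExecNode;

public final class ExecNodeCountSketch {

    // Recursively count the nodes of an ExecNode DAG (shared inputs are counted once per path).
    public static int countNodes(ExecNode<?> node) {
        int count = 1;
        for (ExecEdge edge : node.getInputEdges()) {
            count += countNodes(edge.getSource()); // assumption: getSource() yields the input node
        }
        return count;
    }
}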

Type System and Expressions

Type system bridging between Flink and Calcite with expression handling utilities.

public final class RexNodeExpression implements ResolvedExpression {
    public RexNode getRexNode();
    public DataType getOutputDataType();
    public List<ResolvedExpression> getResolvedChildren();
    public <R> R accept(ExpressionVisitor<R> visitor);
}

public final class PlannerTypeInferenceUtilImpl implements PlannerTypeInferenceUtil {
    public static final PlannerTypeInferenceUtilImpl INSTANCE;
    public TypeInference runTypeInference(CallExpression callExpression, CallContext callContext);
}

Enums and Constants

Public enums for execution strategies, traits, and configuration constants.

public enum AggregatePhaseStrategy {
    AUTO, ONE_PHASE, TWO_PHASE
}

public enum UpdateKind {
    ONLY_UPDATE_AFTER, BEFORE_AND_AFTER
}

public enum ModifyKind {
    INSERT, UPDATE, DELETE
}

public enum MiniBatchMode {
    ProcTime, RowTime, None
}
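
These enums mostly surface as values of planner configuration options. For example, the aggregate phase strategy can be selected through the table configuration; the option key below is assumed to be the planner's "table.optimizer.agg-phase-strategy":

// Force two-phase (local/global) aggregation instead of the default AUTO strategy.
tableEnv.getConfig().getConfiguration()
    .setString("table.optimizer.agg-phase-strategy", "TWO_PHASE");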

Integration Points

Service Provider Interface (SPI)

The module registers factories via Java SPI in META-INF/services/org.apache.flink.table.factories.Factory:

  • DefaultPlannerFactory for planner creation
  • DefaultParserFactory for parser creation
  • DefaultExecutorFactory for executor creation
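
A registration file equivalent to what this module ships could look as follows (the package of DefaultExecutorFactory is assumed to match the other delegation factories):

# META-INF/services/org.apache.flink.table.factories.Factory
org.apache.flink.table.planner.delegation.DefaultPlannerFactory
org.apache.flink.table.planner.delegation.DefaultParserFactory
org.apache.flink.table.planner.delegation.DefaultExecutorFactory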

Calcite Integration

Extensive use of Apache Calcite for:

  • SQL parsing with customizable parser configurations
  • Query optimization through rule-based and cost-based optimization
  • Relational algebra representation and transformation

Type System Bridge

Seamless integration between Flink's type system and Calcite's type system:

  • FlinkTypeFactory for type creation and conversion
  • RexNodeExpression for expression bridging
  • Type inference utilities for function calls and expressions
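
A typical bridging step converts a Flink LogicalType into a Calcite RelDataType through the planner's type factory. A minimal sketch, assuming access to a PlannerBase instance (see Core Planning Components) and that FlinkTypeFactory exposes createFieldTypeFromLogicalType:

import org.apache.calcite.rel.type.RelDataType;
import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
import org.apache.flink.table.types.logical.IntType;

// Convert a Flink logical type to its Calcite representation
// (planner is an assumed PlannerBase instance).
FlinkTypeFactory typeFactory = planner.getTypeFactory();
RelDataType calciteType = typeFactory.createFieldTypeFromLogicalType(new IntType());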