Apache Flink Table Planner Blink

Apache Flink's Table API Planner Blink module provides a sophisticated SQL and Table API query planning and execution engine that bridges high-level declarative queries with Flink's streaming and batch runtime. Built primarily in Scala with extensive Java integration, it offers advanced query optimization capabilities, including cost-based optimization, code generation for high-performance execution, and comprehensive support for both streaming and batch workloads.

Package Information

  • Package Name: flink-table-planner-blink_2.12
  • Package Type: Maven
  • Group ID: org.apache.flink
  • Version: 1.13.6
  • Language: Scala/Java
  • Installation:
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-table-planner-blink_2.12</artifactId>
      <version>1.13.6</version>
    </dependency>

Core Imports

// Factory for creating planner instances
import org.apache.flink.table.planner.delegation.BlinkPlannerFactory;
import org.apache.flink.table.planner.delegation.BlinkExecutorFactory;

// Main planner classes
import org.apache.flink.table.planner.delegation.StreamPlanner;
import org.apache.flink.table.planner.delegation.BatchPlanner;

Basic Usage

import org.apache.flink.table.api.*;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

// Create table environment with Blink planner for streaming
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
EnvironmentSettings settings = EnvironmentSettings.newInstance()
    .useBlinkPlanner()
    .inStreamingMode()
    .build();
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, settings);

// Execute SQL queries
Table result = tableEnv.sqlQuery("SELECT * FROM source_table WHERE value > 100");
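
The same settings builder targets batch execution. A minimal sketch using the unified TableEnvironment entry point:

// Create a table environment with the Blink planner for batch processing
EnvironmentSettings batchSettings = EnvironmentSettings.newInstance()
    .useBlinkPlanner()
    .inBatchMode()
    .build();
TableEnvironment batchEnv = TableEnvironment.create(batchSettings);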

Architecture

The Flink Table Planner Blink is built around several key components:

  • Planner Factories: Service provider interface (SPI) entry points that create planner instances based on configuration
  • Planner Implementations: StreamPlanner for streaming mode, BatchPlanner for batch mode
  • Query Optimization: Multi-phase optimization pipeline with cost-based optimization using Apache Calcite
  • Code Generation: Dynamic code generation for high-performance execution using Janino compiler
  • Execution Graph: Translation from logical plans to Flink's execution graph representation
  • Apache Calcite Integration: Deep integration with Calcite for SQL parsing, validation, and optimization
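
One way to observe this pipeline end to end is Table.explain(), which prints the abstract syntax tree, the optimized plan, and the physical execution plan for a query (reusing the tableEnv from Basic Usage):

// Inspect how the planner transforms a query through its phases
Table result = tableEnv.sqlQuery("SELECT id, SUM(value) FROM source_table GROUP BY id");
System.out.println(result.explain());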

Capabilities

Planner Factory System

Service provider interface (SPI) factories for creating and configuring planner instances. These factories are the main entry points for integrating the Blink planner with Flink's Table API.

public final class BlinkPlannerFactory implements PlannerFactory {
  public Planner create(Map<String, String> properties, Executor executor, 
                       TableConfig tableConfig, FunctionCatalog functionCatalog, 
                       CatalogManager catalogManager);
}

public final class BlinkExecutorFactory implements ExecutorFactory {
  public Executor create(Map<String, String> properties);
}
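
These factories are not usually instantiated directly; the Table API discovers them via Java's ServiceLoader. A sketch of that lookup, assuming the factories are registered under the standard org.apache.flink.table.factories.TableFactory service file (which is how ComponentFactoryService locates them):

import java.util.ServiceLoader;

import org.apache.flink.table.delegation.PlannerFactory;
import org.apache.flink.table.factories.TableFactory;

// Enumerate TableFactory implementations on the classpath and pick out
// the planner factories contributed by this module
for (TableFactory factory : ServiceLoader.load(TableFactory.class)) {
    if (factory instanceof PlannerFactory) {
        System.out.println("Found planner factory: " + factory.getClass().getName());
    }
}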

Query Planning and Optimization

Core query planning capabilities including logical plan creation, optimization rule application, and cost-based optimization. Supports both streaming and batch execution modes with different optimization strategies.

abstract class PlannerBase extends Planner {
  def translateToRel(modifyOperation: ModifyOperation): RelNode
  def optimize(relNodes: Seq[RelNode]): Seq[RelNode]
  def translateToPlan(execGraph: ExecNodeGraph): util.List[Transformation[_]]
}

class StreamPlanner extends PlannerBase
class BatchPlanner extends PlannerBase
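
The effect of the different optimization strategies can be inspected with explainSql, optionally requesting details such as the optimizer's estimated cost and the derived changelog modes:

import org.apache.flink.table.api.ExplainDetail;

// Ask the planner for the optimized plan plus cost and changelog details
String plan = tableEnv.explainSql(
    "SELECT id, SUM(value) FROM source_table GROUP BY id",
    ExplainDetail.ESTIMATED_COST,
    ExplainDetail.CHANGELOG_MODE);
System.out.println(plan);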

Code Generation

Dynamic code generation system that creates efficient Java code for query execution. Generates specialized code for calculations, aggregations, projections, and other operations to achieve high performance.

object CalcCodeGenerator {
  def generateCalcOperator(
    ctx: CodeGeneratorContext,
    inputTransform: Transformation[RowData],
    outputType: RowType,
    projection: Seq[RexNode],
    condition: Option[RexNode],
    retainHeader: Boolean = false,
    opName: String
  ): CodeGenOperatorFactory[RowData]
}

object ProjectionCodeGenerator {
  def generateProjection(
    ctx: CodeGeneratorContext,
    name: String,
    inType: RowType,
    outType: RowType,
    inputMapping: Array[Int],
    outClass: Class[_ <: RowData] = classOf[BinaryRowData]
  ): GeneratedProjection
}
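
For intuition, here is a hand-written Java approximation of the kind of operator body CalcCodeGenerator emits for a query like SELECT id, value * 2 FROM t WHERE value > 100; the real class is generated as a string and compiled at runtime with Janino, and all names here are illustrative:

import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;

// Illustrative approximation of generated calc code: apply the filter
// condition, then build the projected output row
public final class GeneratedCalcExample {
    // Returns the projected row, or null when the condition rejects it
    public static RowData processRow(RowData in) {
        long value = in.getLong(1);       // field 1: value
        if (!(value > 100L)) {
            return null;                  // condition failed: drop the row
        }
        GenericRowData out = new GenericRowData(2);
        out.setField(0, in.getInt(0));    // field 0: id
        out.setField(1, value * 2L);      // projected expression value * 2
        return out;
    }
}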

Function Integration

Comprehensive support for user-defined functions (UDFs) including scalar functions, aggregate functions, and table functions. Provides utilities for function registration, validation, and SQL integration.

object UserDefinedFunctionUtils {
  def checkForInstantiation(clazz: Class[_]): Unit
  def checkNotSingleton(clazz: Class[_]): Unit
  def getEvalMethodSignature(function: UserDefinedFunction, expectedTypes: Array[LogicalType]): Method
}

class AggSqlFunction extends SqlFunction
class ScalarSqlFunction extends SqlFunction  
class TableSqlFunction extends SqlFunction
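
From the API side, a UDF is registered on the table environment and the planner wraps it in the corresponding SqlFunction subclass during validation. A minimal sketch with a hypothetical scalar function (reusing the tableEnv from Basic Usage):

import org.apache.flink.table.functions.ScalarFunction;

// Hypothetical scalar UDF; the planner validates its eval method via
// UserDefinedFunctionUtils and exposes it to SQL as a ScalarSqlFunction
public class ShoutFunction extends ScalarFunction {
    public String eval(String s) {
        return s == null ? null : s.toUpperCase() + "!";
    }
}

tableEnv.createTemporarySystemFunction("SHOUT", ShoutFunction.class);
Table shouted = tableEnv.sqlQuery("SELECT SHOUT(name) FROM source_table");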

Expression System

Expression handling for SQL expressions and Table API expressions, including validation, type inference, and code generation. Supports complex expressions with nested operations and custom functions.

// Expression conversion and validation
trait ExpressionConverter {
  def convertToRexNode(expression: Expression): RexNode
  def convertToExpression(rexNode: RexNode): Expression
}
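
From the user's perspective these expressions enter through the Table API; the planner converts each one to a Calcite RexNode during planning. A minimal sketch (reusing the tableEnv from Basic Usage; table and column names are illustrative):

import static org.apache.flink.table.api.Expressions.$;

// Each filter and projection expression below becomes a RexNode
Table filtered = tableEnv.from("source_table")
    .filter($("value").isGreater(100))
    .select($("id"), $("value").plus(1).as("incremented"));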

Catalog Integration

Integration with Flink's catalog system for metadata management, table registration, and schema information. Supports multiple catalog types and schema evolution.

// Catalog integration through standard Flink interfaces
// Implemented via CatalogManager and FunctionCatalog parameters
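
A minimal sketch of that integration from the API side, using the built-in GenericInMemoryCatalog (catalog and table names are illustrative):

import org.apache.flink.table.catalog.GenericInMemoryCatalog;

// Register and activate a catalog; the planner resolves table identifiers
// through the CatalogManager it received at creation time
tableEnv.registerCatalog("my_catalog", new GenericInMemoryCatalog("my_catalog"));
tableEnv.useCatalog("my_catalog");
tableEnv.useDatabase("default");
String[] tables = tableEnv.listTables();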

Types

// Core planner configuration
public class EnvironmentSettings {
  public static final String STREAMING_MODE = "streaming-mode";
  public static final String CLASS_NAME = "class-name";
}

// Execution graph representation  
public class ExecNodeGraph {
  public List<ExecNode<?>> getRootNodes();
  public void accept(ExecNodeVisitor visitor);
}

// Transformation representation
public abstract class Transformation<T> {
  public String getName();
  public TypeInformation<T> getOutputType();
}