```
tessl install tessl/maven-org-apache-flink--flink-table-planner_2-12@1.20.0
```

Apache Flink's query planning and optimization module that translates Table/SQL API operations into optimized execution plans.
Apache Flink's Table Planner is a comprehensive query planning and optimization library that bridges Flink's Table/SQL API with the streaming and batch runtime execution engine. This module serves as the core translation layer that converts high-level SQL queries and table operations into optimized Flink execution plans, leveraging Apache Calcite for advanced query optimization techniques.
Important Note: This library is primarily designed for framework integration and advanced customization. Most Flink users should use the high-level Table API (flink-table-api-java or flink-table-api-scala) instead of directly interacting with planner internals.
```xml
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-planner_2.12</artifactId>
    <version>1.20.2</version>
</dependency>
```

```java
import org.apache.flink.table.planner.delegation.DefaultParserFactory;
import org.apache.flink.table.planner.delegation.FlinkSqlParserFactories;
import org.apache.flink.table.planner.typeutils.RowTypeUtils;
import org.apache.flink.table.planner.expressions.ColumnReferenceFinder;
import org.apache.flink.table.planner.operations.DeletePushDownUtils;
```

The Table Planner is typically used indirectly through Flink's Table API, but advanced users can access specific utilities:
```java
import org.apache.flink.table.planner.delegation.DefaultParserFactory;
import org.apache.flink.table.delegation.ParserFactory;
import org.apache.flink.table.api.SqlDialect;

// Create a parser factory for the default SQL dialect
ParserFactory parserFactory = new DefaultParserFactory();
String dialectId = parserFactory.factoryIdentifier(); // "default"
// The factory is used internally by Flink's table environment
```

```java
import org.apache.flink.table.planner.typeutils.RowTypeUtils;
import org.apache.flink.table.types.logical.RowType;

// Project a row type using column indices
RowType originalType = /* ... */;
int[] projection = {0, 2, 4}; // Select columns 0, 2, and 4
RowType projectedType = RowTypeUtils.projectRowType(originalType, projection);
```

The Table Planner consists of several key components:
Factory for creating SQL parsers compatible with Flink's default SQL dialect.
```java
/**
 * Factory for creating SQL parsers using Flink's default SQL dialect.
 */
public class DefaultParserFactory implements ParserFactory {

    /** Returns the factory identifier for the default SQL dialect. */
    public String factoryIdentifier();

    /** Returns required configuration options (empty for the default parser). */
    public Set<ConfigOption<?>> requiredOptions();

    /** Returns optional configuration options (empty for the default parser). */
    public Set<ConfigOption<?>> optionalOptions();

    /** Creates a new Parser instance using the provided context. */
    public Parser create(Context context);
}
```

Utility for creating SqlParserImplFactory instances with specific SQL conformance settings. This is a utility class with a private constructor; all methods are static and the class cannot be instantiated.
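The conformance-based dispatch this utility performs can be sketched in plain Java. The enum and factory type below are hypothetical stand-ins, not Flink's or Calcite's actual classes:

```java
import java.util.function.Supplier;

public class ParserFactorySketch {
    // Stand-in for Calcite's SqlConformance (hypothetical values).
    enum Conformance { DEFAULT, OTHER }

    // Stand-in for SqlParserImplFactory: here just a supplier of a parser tag.
    static Supplier<String> create(Conformance conformance) {
        // The real FlinkSqlParserFactories.create inspects the conformance
        // level and returns a parser factory configured for it.
        if (conformance == Conformance.DEFAULT) {
            return () -> "default-parser";
        }
        return () -> "other-parser";
    }

    public static void main(String[] args) {
        System.out.println(create(Conformance.DEFAULT).get()); // default-parser
    }
}
```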
```java
/**
 * Utility class for creating a SqlParserImplFactory according to SqlConformance.
 */
public class FlinkSqlParserFactories {

    /**
     * Creates a SqlParserImplFactory for the given SQL conformance level.
     *
     * @param conformance SQL conformance settings
     * @return configured SqlParserImplFactory
     */
    public static SqlParserImplFactory create(SqlConformance conformance);
}
```

Utilities for manipulating and deriving row types used in query planning. This is a utility class with a private constructor; all methods are static and the class cannot be instantiated.
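A plain-Java sketch of the name-uniquing behavior that RowTypeUtils.getUniqueName provides. The "_0", "_1" suffix scheme shown here is an assumption for illustration, not necessarily Flink's exact format:

```java
import java.util.ArrayList;
import java.util.List;

public class UniqueNameSketch {
    // Returns oldName unchanged if it is free; otherwise appends "_0", "_1", ...
    // until no clash remains. (Suffix scheme is an assumption for illustration.)
    static String getUniqueName(String oldName, List<String> checklist) {
        String candidate = oldName;
        int i = 0;
        while (checklist.contains(candidate)) {
            candidate = oldName + "_" + i++;
        }
        return candidate;
    }

    // List overload: earlier results are added to the checklist so that later
    // names cannot collide with names generated in the same pass.
    static List<String> getUniqueName(List<String> oldNames, List<String> checklist) {
        List<String> seen = new ArrayList<>(checklist);
        List<String> result = new ArrayList<>();
        for (String name : oldNames) {
            String unique = getUniqueName(name, seen);
            seen.add(unique);
            result.add(unique);
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(getUniqueName("f0", List.of("f0", "f1")));        // f0_0
        System.out.println(getUniqueName(List.of("a", "a"), List.of("a")));  // [a_0, a_1]
    }
}
```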
```java
/**
 * Utilities for deriving row types of RelNodes.
 */
public class RowTypeUtils {

    /**
     * Generates a unique column name that does not conflict with existing names.
     *
     * @param oldName original column name
     * @param checklist existing names to avoid conflicts with
     * @return unique column name
     */
    public static String getUniqueName(String oldName, List<String> checklist);

    /**
     * Generates unique column names for a list of names.
     *
     * @param oldNames original column names
     * @param checklist existing names to avoid conflicts with
     * @return list of unique column names
     */
    public static List<String> getUniqueName(List<String> oldNames, List<String> checklist);

    /**
     * Projects a row type using the specified column indices.
     *
     * @param rowType original row type
     * @param projection array of column indices to include
     * @return projected row type containing only the specified columns
     */
    public static RowType projectRowType(@Nonnull RowType rowType, @Nonnull int[] projection);
}
```

Utilities for analyzing column references in resolved expressions. This is a utility class with a private constructor; all methods are static and the class cannot be instantiated.
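The reference-finding idea can be sketched in plain Java with a hypothetical mini expression model (this is not Flink's ResolvedExpression API; it only illustrates the tree walk that collects referenced column names):

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class ReferenceFinderSketch {
    // Hypothetical mini expression model (not Flink's ResolvedExpression API).
    interface Expr {}
    record FieldRef(String name) implements Expr {}
    record Call(String function, List<Expr> args) implements Expr {}

    // Walks the expression tree and collects every referenced column name.
    static Set<String> findReferencedColumns(Expr expr) {
        Set<String> refs = new LinkedHashSet<>();
        collect(expr, refs);
        return refs;
    }

    private static void collect(Expr expr, Set<String> refs) {
        if (expr instanceof FieldRef f) {
            refs.add(f.name());
        } else if (expr instanceof Call c) {
            for (Expr arg : c.args()) collect(arg, refs);
        }
    }

    public static void main(String[] args) {
        // Watermark-style expression: ts - INTERVAL '5' SECOND
        Expr watermark = new Call("minus",
                List.of(new FieldRef("ts"), new Call("interval5s", List.of())));
        System.out.println(findReferencedColumns(watermark)); // [ts]
    }
}
```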
```java
/**
 * Finder for analyzing referenced column names in ResolvedExpressions.
 */
public class ColumnReferenceFinder {

    /**
     * Finds all columns referenced by a specific column in the schema.
     *
     * @param columnName name of the column to analyze
     * @param schema resolved schema containing column definitions
     * @return set of referenced column names
     */
    public static Set<String> findReferencedColumn(String columnName, ResolvedSchema schema);

    /**
     * Finds all columns referenced in watermark expressions within the schema.
     *
     * @param schema resolved schema containing watermark definitions
     * @return set of column names referenced in watermark expressions
     */
    public static Set<String> findWatermarkReferencedColumn(ResolvedSchema schema);
}
```

Utilities for delete push-down operations to optimize delete performance. This is a utility class with a private constructor; all methods are static and the class cannot be instantiated.
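The documented contract of getResolvedFilterExpressions — returning an empty Optional when the DELETE's WHERE clause contains a sub-query — can be sketched in plain Java. The Predicate record here is a hypothetical stand-in for the planner's predicate model:

```java
import java.util.List;
import java.util.Optional;

public class DeleteFilterSketch {
    // Hypothetical stand-in for a resolved WHERE-clause predicate.
    record Predicate(String expression, boolean containsSubQuery) {}

    // Mirrors the documented contract: if any predicate in the DELETE's
    // WHERE clause contains a sub-query, push-down is impossible and
    // Optional.empty() is returned; otherwise the filters are resolved.
    static Optional<List<String>> resolveFilters(List<Predicate> where) {
        for (Predicate p : where) {
            if (p.containsSubQuery()) {
                return Optional.empty();
            }
        }
        return Optional.of(where.stream().map(Predicate::expression).toList());
    }

    public static void main(String[] args) {
        var simple = List.of(new Predicate("id = 42", false));
        var nested = List.of(new Predicate("id IN (SELECT id FROM t)", true));
        System.out.println(resolveFilters(simple)); // Optional[[id = 42]]
        System.out.println(resolveFilters(nested)); // Optional.empty
    }
}
```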
```java
/**
 * Utility class for delete push-down operations.
 */
public class DeletePushDownUtils {

    /**
     * Retrieves the DynamicTableSink for delete push-down operations.
     *
     * @param contextResolvedTable resolved table context
     * @param tableModify logical table modification operation
     * @param catalogManager catalog manager for table resolution
     * @return optional DynamicTableSink if push-down is supported
     */
    public static Optional<DynamicTableSink> getDynamicTableSink(
            ContextResolvedTable contextResolvedTable,
            LogicalTableModify tableModify,
            CatalogManager catalogManager);

    /**
     * Gets resolved filter expressions from a DELETE statement's WHERE clause.
     *
     * @param tableModify logical table modification operation containing the WHERE clause
     * @return optional list of resolved expressions, empty if the WHERE clause contains sub-queries
     */
    public static Optional<List<ResolvedExpression>> getResolvedFilterExpressions(
            LogicalTableModify tableModify);
}
```

Visitor pattern implementation for extracting a SqlAggFunction from a CallExpression. This visitor converts Flink Table API aggregate function calls into Apache Calcite SqlAggFunction instances for query planning.
```java
/**
 * Visitor to extract a SqlAggFunction from a CallExpression.
 */
public class SqlAggFunctionVisitor extends ExpressionDefaultVisitor<SqlAggFunction> {

    /**
     * Creates a new SqlAggFunctionVisitor with the specified RelBuilder.
     *
     * @param relBuilder RelBuilder instance for creating SQL aggregate functions
     */
    public SqlAggFunctionVisitor(RelBuilder relBuilder);

    /**
     * Visits a CallExpression and extracts the corresponding SqlAggFunction.
     *
     * @param call CallExpression representing an aggregate function call
     * @return SqlAggFunction if the call represents a supported aggregate function
     * @throws TableException if the expression is not a supported aggregate function
     */
    public SqlAggFunction visit(CallExpression call);

    /**
     * Default method for unsupported expressions.
     *
     * @param expression expression to visit
     * @return never returns normally
     * @throws TableException always thrown for unsupported expressions
     */
    public SqlAggFunction defaultMethod(Expression expression);
}
```

Supported Built-in Aggregate Functions:
The visitor maintains an internal mapping (AGG_DEF_SQL_OPERATOR_MAPPING) that maps Flink's built-in aggregate function definitions to their corresponding Apache Calcite SqlAggFunction implementations.
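The shape of that mapping and the visitor's fail-fast behavior can be sketched in plain Java. The function and operator names below are illustrative stand-ins, not the actual contents of AGG_DEF_SQL_OPERATOR_MAPPING:

```java
import java.util.Map;

public class AggFunctionMappingSketch {
    // Hypothetical stand-in for AGG_DEF_SQL_OPERATOR_MAPPING: maps a Table API
    // aggregate function name to a Calcite-style operator name.
    private static final Map<String, String> AGG_MAPPING = Map.of(
            "sum", "SUM",
            "count", "COUNT",
            "avg", "AVG");

    // Mirrors the visitor contract: known aggregates map to an operator,
    // anything else fails fast (the real visitor throws TableException).
    static String visit(String functionName) {
        String op = AGG_MAPPING.get(functionName);
        if (op == null) {
            throw new IllegalStateException("Unsupported aggregate function: " + functionName);
        }
        return op;
    }

    public static void main(String[] args) {
        System.out.println(visit("sum"));   // SUM
        System.out.println(visit("count")); // COUNT
    }
}
```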
Supporting types referenced above include:

```java
/** Context interface for parser factory creation */
interface ParserFactory.Context {
    // Context methods for accessing catalog and planner information
}

/** Configuration option interface for factory configuration */
interface ConfigOption<T> {
    // Configuration option methods
}

/** Schema resolution interface */
interface ResolvedSchema {
    // Schema methods for column and type information
}

/** Table modification operation interface */
interface LogicalTableModify {
    // Logical operation methods
}

/** Dynamic table sink interface for optimized operations */
interface DynamicTableSink {
    // Sink methods for data output
}

/** Catalog manager interface */
interface CatalogManager {
    // Catalog management methods
}

/** Resolved table context interface */
interface ContextResolvedTable {
    // Table context methods
}

/** Row type interface representing table structure */
interface RowType {
    // Row type methods for field access
}

/** SQL conformance settings */
enum SqlConformance {
    // SQL conformance levels
}

/** Expression interface for query expressions */
interface Expression {
    // Expression methods
}

/** Resolved expression interface for analyzed expressions */
interface ResolvedExpression extends Expression {
    // Resolved expression methods for type-checked expressions
}
```

A common error scenario is a TableException, which the planner throws when it encounters unsupported operations; for example, SqlAggFunctionVisitor throws it for expressions that are not supported aggregate functions.

This library is intended for framework integration and advanced customization of the planner.
For typical Flink applications, use the high-level Table API classes instead:
- `org.apache.flink.table.api.TableEnvironment`
- `org.apache.flink.table.api.Table`
- `org.apache.flink.table.api.Schema`