Apache Flink's flink-table-planner-blink module provides a sophisticated SQL and Table API query planning and execution engine, with advanced query optimization, code generation, and comprehensive support for both streaming and batch workloads.
---
The planner factory system provides Service Provider Interface (SPI) entry points for creating and configuring planner instances within Flink's Table API ecosystem. These factories serve as the main integration points between the Blink planner and Flink's table environment.
Main factory for creating Blink planner instances, supporting both streaming and batch execution modes based on configuration properties.
```java
/**
 * Factory to construct a BatchPlanner or StreamPlanner based on configuration.
 */
@Internal
public final class BlinkPlannerFactory implements PlannerFactory {

    /**
     * Creates a planner instance based on properties and dependencies.
     *
     * @param properties      Configuration properties, including the streaming-mode setting
     * @param executor        Executor instance for job submission
     * @param tableConfig     Table configuration settings
     * @param functionCatalog Function catalog for UDF management
     * @param catalogManager  Catalog manager for metadata access
     * @return StreamPlanner in streaming mode, BatchPlanner in batch mode
     */
    public Planner create(
            Map<String, String> properties,
            Executor executor,
            TableConfig tableConfig,
            FunctionCatalog functionCatalog,
            CatalogManager catalogManager);

    /**
     * Returns optional context properties for this factory.
     *
     * @return Map containing the factory class name
     */
    public Map<String, String> optionalContext();

    /**
     * Returns required context properties (empty for the Blink factory).
     *
     * @return Empty properties map
     */
    public Map<String, String> requiredContext();

    /**
     * Lists configuration properties supported by this factory.
     *
     * @return List containing the STREAMING_MODE and CLASS_NAME properties
     */
    public List<String> supportedProperties();
}
```

Usage Example:
```java
import org.apache.flink.table.planner.delegation.BlinkPlannerFactory;
import org.apache.flink.table.api.EnvironmentSettings;

// The factory is typically used internally by EnvironmentSettings.
EnvironmentSettings settings = EnvironmentSettings.newInstance()
    .useBlinkPlanner() // This selects BlinkPlannerFactory as the planner factory
    .inStreamingMode()
    .build();

// Manual factory usage (advanced)
BlinkPlannerFactory factory = new BlinkPlannerFactory();
Map<String, String> properties = new HashMap<>();
properties.put(EnvironmentSettings.STREAMING_MODE, "true");
Planner planner = factory.create(properties, executor, tableConfig,
    functionCatalog, catalogManager);
```

Factory for creating executor instances that handle job submission and execution coordination.
```java
/**
 * Factory for creating executors in the Blink planner environment.
 */
@Internal
public final class BlinkExecutorFactory implements ExecutorFactory {

    /**
     * Creates an executor instance based on configuration properties.
     *
     * @param properties Configuration properties for executor creation
     * @return Executor instance for job submission
     */
    public Executor create(Map<String, String> properties);

    /**
     * Returns optional context properties for this factory.
     *
     * @return Map containing factory configuration
     */
    public Map<String, String> optionalContext();

    /**
     * Returns required context properties for executor creation.
     *
     * @return Required properties map
     */
    public Map<String, String> requiredContext();

    /**
     * Lists configuration properties supported by this factory.
     *
     * @return List of supported property keys
     */
    public List<String> supportedProperties();
}
```

Factory for creating SQL parser instances used for parsing and validating SQL statements.
```java
/**
 * Factory for creating SQL parser instances.
 */
@Internal
public final class DefaultParserFactory implements ParserFactory {

    /**
     * Creates a parser instance for SQL statement processing.
     *
     * @param context Parser creation context and configuration
     * @return Parser instance for SQL parsing and validation
     */
    public Parser create(ParserContext context);

    /**
     * Returns optional context properties for parser creation.
     *
     * @return Optional context properties
     */
    public Map<String, String> optionalContext();

    /**
     * Returns required context properties for parser creation.
     *
     * @return Required context properties
     */
    public Map<String, String> requiredContext();

    /**
     * Lists configuration properties supported by this parser factory.
     *
     * @return List of supported property keys
     */
    public List<String> supportedProperties();
}
```

Controls whether the planner operates in streaming or batch mode:
```java
// From EnvironmentSettings
public static final String STREAMING_MODE = "table.exec.mode.streaming";

// Values:
// "true"  - Creates StreamPlanner for streaming execution
// "false" - Creates BatchPlanner for batch execution
```

Specifies which planner factory to use:
```java
// From EnvironmentSettings
public static final String CLASS_NAME = "table.exec.planner-factory";

// Value: "org.apache.flink.table.planner.delegation.BlinkPlannerFactory"
```

The factories are registered via the Java SPI mechanism in:
```
# META-INF/services/org.apache.flink.table.factories.TableFactory
org.apache.flink.table.planner.delegation.BlinkPlannerFactory
org.apache.flink.table.planner.delegation.BlinkExecutorFactory
org.apache.flink.table.planner.delegation.DefaultParserFactory
```

This registration allows Flink's table environment to automatically discover and use the Blink planner when configured appropriately.
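Conceptually, discovery matches each registered factory's required context and supported properties against the configured property map, and keeps the factory that fits. The sketch below illustrates that matching rule with plain Java; the stub class and the matching logic are simplified assumptions for illustration, not Flink's actual TableFactoryService implementation.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified, stdlib-only sketch of SPI-style factory matching
// (hypothetical stub types, not Flink's real discovery code).
public class FactoryMatching {

    interface FactoryStub {
        Map<String, String> requiredContext();  // entries that must match exactly
        List<String> supportedProperties();     // additional keys the factory accepts
    }

    // Stand-in for BlinkPlannerFactory: empty required context, two supported keys.
    static class BlinkPlannerFactoryStub implements FactoryStub {
        public Map<String, String> requiredContext() {
            return Collections.emptyMap();
        }
        public List<String> supportedProperties() {
            return Arrays.asList("table.exec.mode.streaming", "table.exec.planner-factory");
        }
    }

    // A factory is chosen when every required-context entry is present with the
    // same value, and every remaining property key is among its supported keys.
    static boolean matches(FactoryStub factory, Map<String, String> properties) {
        for (Map.Entry<String, String> required : factory.requiredContext().entrySet()) {
            if (!required.getValue().equals(properties.get(required.getKey()))) {
                return false;
            }
        }
        for (String key : properties.keySet()) {
            if (!factory.requiredContext().containsKey(key)
                    && !factory.supportedProperties().contains(key)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Map<String, String> properties = new HashMap<>();
        properties.put("table.exec.mode.streaming", "true");
        // The streaming-mode key is supported, so the stub factory matches.
        System.out.println(matches(new BlinkPlannerFactoryStub(), properties)); // prints "true"
    }
}
```

An unrecognized property key causes the match to fail, which is why factories enumerate their supported properties explicitly.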
The factory system integrates seamlessly with Flink's EnvironmentSettings:
```java
// Streaming mode with the Blink planner
EnvironmentSettings streamSettings = EnvironmentSettings.newInstance()
    .useBlinkPlanner()
    .inStreamingMode()
    .build();

// Batch mode with the Blink planner
EnvironmentSettings batchSettings = EnvironmentSettings.newInstance()
    .useBlinkPlanner()
    .inBatchMode()
    .build();

// Create the table environment
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, streamSettings);
```

Install with Tessl CLI:

```shell
npx tessl i tessl/maven-org-apache-flink--flink-table-planner-blink-2-12
```