
# Factory Classes and Service Providers

Factory classes provide the entry points for creating planner components through the Service Provider Interface (SPI). These factories are discovered and instantiated automatically by Flink's factory system.

## Package Information

```java
import org.apache.flink.table.planner.delegation.DefaultPlannerFactory;
import org.apache.flink.table.planner.delegation.DefaultParserFactory;
import org.apache.flink.table.planner.delegation.DefaultExecutorFactory;
import org.apache.flink.table.factories.PlannerFactory;
import org.apache.flink.table.factories.ParserFactory;
import org.apache.flink.table.factories.ExecutorFactory;
```

## Capabilities

### Default Planner Factory

Creates the default `Planner` implementation based on the runtime mode (streaming or batch).

```java { .api }
public final class DefaultPlannerFactory implements PlannerFactory {
    public String factoryIdentifier();
    public Planner create(Context context);
    public Set<ConfigOption<?>> requiredOptions();
    public Set<ConfigOption<?>> optionalOptions();
}
```

The `DefaultPlannerFactory` is the primary factory for creating planner instances. It automatically determines whether to create a `StreamPlanner` or a `BatchPlanner` based on the runtime mode specified in the configuration.

**Key Methods:**

- `factoryIdentifier()`: Returns `PlannerFactory.DEFAULT_IDENTIFIER` to identify this factory
- `create(Context context)`: Creates either a `StreamPlanner` (streaming mode) or a `BatchPlanner` (batch mode)
- `requiredOptions()`: Returns an empty set; there are no required configuration options
- `optionalOptions()`: Returns an empty set; there are no optional configuration options

**Usage Example:**

```java
import org.apache.flink.table.planner.delegation.DefaultPlannerFactory;
import org.apache.flink.table.factories.PlannerFactory;

// The factory is typically discovered automatically via SPI;
// direct instantiation is shown here for illustration only.
PlannerFactory factory = new DefaultPlannerFactory();
String identifier = factory.factoryIdentifier(); // Returns "default"

// Create a planner with a factory context
Planner planner = factory.create(context);
```

### Default Parser Factory

Creates `ParserImpl` instances for SQL parsing using Apache Calcite.

```java { .api }
public class DefaultParserFactory implements ParserFactory {
    public String factoryIdentifier();
    public Parser create(Context context);
    public Set<ConfigOption<?>> requiredOptions();
    public Set<ConfigOption<?>> optionalOptions();
}
```

The `DefaultParserFactory` creates instances of `ParserImpl`, which handle parsing of SQL statements, identifiers, and SQL expressions.

**Key Methods:**

- `factoryIdentifier()`: Returns the default SQL dialect name in lowercase
- `create(Context context)`: Creates a new `ParserImpl` instance
- `requiredOptions()`: Returns an empty set; there are no required configuration options
- `optionalOptions()`: Returns an empty set; there are no optional configuration options

**Usage Example:**

```java
import org.apache.flink.table.planner.delegation.DefaultParserFactory;
import org.apache.flink.table.factories.ParserFactory;

// Create a parser factory
ParserFactory parserFactory = new DefaultParserFactory();
String identifier = parserFactory.factoryIdentifier(); // Returns "default"

// Create a parser with a factory context
Parser parser = parserFactory.create(context);

// Parse a SQL statement into operations
List<Operation> operations = parser.parse("SELECT * FROM my_table");
```

### Default Executor Factory

Creates `DefaultExecutor` instances for executing table programs.

```java { .api }
public final class DefaultExecutorFactory implements ExecutorFactory {
    public Executor create(Context context);
    public String factoryIdentifier();
    public Set<ConfigOption<?>> requiredOptions();
    public Set<ConfigOption<?>> optionalOptions();
}
```

The `DefaultExecutorFactory` creates instances of `DefaultExecutor`, which execute the table programs produced by the planner.

**Key Methods:**

- `factoryIdentifier()`: Returns the factory identifier for this executor
- `create(Context context)`: Creates a new `DefaultExecutor` instance
- `requiredOptions()`: Returns an empty set; there are no required configuration options
- `optionalOptions()`: Returns an empty set; there are no optional configuration options

**Usage Example:**

```java
import org.apache.flink.table.planner.delegation.DefaultExecutorFactory;
import org.apache.flink.table.factories.ExecutorFactory;

// Create an executor factory
ExecutorFactory executorFactory = new DefaultExecutorFactory();
String identifier = executorFactory.factoryIdentifier();

// Create an executor with a factory context
Executor executor = executorFactory.create(context);

// Execute the table program's transformations
JobExecutionResult result = executor.execute(transformations);
```

## Service Provider Interface (SPI) Registration

These factories are registered automatically through Java's Service Provider Interface mechanism. The registration is defined in:

```
META-INF/services/org.apache.flink.table.factories.Factory
```

This file contains the fully qualified class names:

- `org.apache.flink.table.planner.delegation.DefaultPlannerFactory`
- `org.apache.flink.table.planner.delegation.DefaultParserFactory`
- `org.apache.flink.table.planner.delegation.DefaultExecutorFactory`
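The mechanics behind this registration can be illustrated with a small JDK-only sketch. This is a toy model, not Flink's actual code (Flink delegates to `java.util.ServiceLoader` internally); the `Factory` interface and factory classes below are invented stand-ins:

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of SPI-style registration: a services file lists fully
// qualified class names, one per line; the loader instantiates each
// via reflection. All names here are invented for illustration.
public class SpiSketch {

    // Stand-in for org.apache.flink.table.factories.Factory
    public interface Factory {
        String factoryIdentifier();
    }

    public static class ToyPlannerFactory implements Factory {
        public String factoryIdentifier() { return "default"; }
    }

    public static class ToyParserFactory implements Factory {
        public String factoryIdentifier() { return "default"; }
    }

    // Mimics what java.util.ServiceLoader does with the contents of
    // META-INF/services/<interface-name>: skip comments and blank
    // lines, then reflectively instantiate each listed class.
    public static List<Factory> loadFactories(String servicesFileContent) throws Exception {
        List<Factory> factories = new ArrayList<>();
        for (String line : servicesFileContent.split("\n")) {
            String name = line.trim();
            if (name.isEmpty() || name.startsWith("#")) {
                continue;
            }
            factories.add((Factory) Class.forName(name).getDeclaredConstructor().newInstance());
        }
        return factories;
    }

    public static void main(String[] args) throws Exception {
        String services =
            "# registered factories\n"
            + "SpiSketch$ToyPlannerFactory\n"
            + "SpiSketch$ToyParserFactory\n";
        List<Factory> factories = loadFactories(services);
        System.out.println(factories.size());                      // 2
        System.out.println(factories.get(0).factoryIdentifier());  // default
    }
}
```

Because registration is just a text file naming classes, any jar on the classpath can contribute additional factories without code changes in Flink itself.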

## Factory Context

All factories receive a `Context` object that provides access to:

```java { .api }
public interface Context {
    Configuration getConfiguration();
    ClassLoader getClassLoader();
    TableEnvironment getTableEnvironment();
}
```

The context allows factories to:

- Access configuration settings for customizing behavior
- Use the appropriate class loader for loading resources
- Access the table environment for integration with existing components
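As a rough sketch of how a factory might consume its context, the toy below models a planner factory choosing its product from a configuration option, the way `DefaultPlannerFactory` picks streaming vs. batch. The `Context` stand-in and the map-based configuration are simplifications; only the option key `execution.runtime-mode` mirrors a real Flink setting:

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of a factory reading settings from its context.
// The Context interface and configuration type here are simplified
// stand-ins for illustration only.
public class ContextSketch {

    // Minimal stand-in for the factory Context interface
    public interface Context {
        Map<String, String> getConfiguration();
        ClassLoader getClassLoader();
    }

    // The factory customizes what it creates based on a config option,
    // analogous to DefaultPlannerFactory choosing StreamPlanner or
    // BatchPlanner from the runtime mode.
    public static String createPlannerKind(Context context) {
        return context.getConfiguration().getOrDefault("execution.runtime-mode", "streaming");
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("execution.runtime-mode", "batch");
        Context ctx = new Context() {
            public Map<String, String> getConfiguration() { return conf; }
            public ClassLoader getClassLoader() { return ContextSketch.class.getClassLoader(); }
        };
        System.out.println(createPlannerKind(ctx)); // batch
    }
}
```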

## Integration with Table Environment

The factories are typically used indirectly when creating a `TableEnvironment`:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// The planner factory is used internally when creating the environment;
// DefaultPlannerFactory is selected automatically and picks the
// StreamPlanner here because of the streaming runtime mode.
TableEnvironment tableEnv = TableEnvironment.create(
    EnvironmentSettings.newInstance()
        .inStreamingMode()
        .build()
);
```

## Factory Discovery Process

Flink's factory system uses the following discovery process:

1. **SPI Discovery**: Scan the classpath for `META-INF/services/org.apache.flink.table.factories.Factory` files
2. **Factory Instantiation**: Create instances of all discovered factory classes
3. **Identifier Matching**: Match factory identifiers with the requested components
4. **Context Creation**: Create appropriate context objects with configuration
5. **Component Creation**: Call factory methods to create the actual components

This design enables a pluggable architecture in which different planner implementations can be provided by separate modules or third-party libraries.
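The identifier-matching step of this process can be sketched with a small toy model; this is not Flink's actual matching code, and the `Factory` interface and identifier values below are invented for illustration:

```java
import java.util.List;
import java.util.Optional;

// Toy sketch of step 3 of the discovery process: match a requested
// identifier against the identifiers of all discovered factories.
public class DiscoverySketch {

    // Stand-in for a discovered factory
    public interface Factory {
        String factoryIdentifier();
    }

    // Pick the first discovered factory whose identifier matches
    public static Optional<Factory> match(List<Factory> discovered, String requested) {
        return discovered.stream()
                .filter(f -> f.factoryIdentifier().equals(requested))
                .findFirst();
    }

    public static void main(String[] args) {
        Factory defaultPlanner = () -> "default";
        Factory customPlanner = () -> "my-planner";
        List<Factory> discovered = List.of(defaultPlanner, customPlanner);

        System.out.println(match(discovered, "default").isPresent());  // true
        System.out.println(match(discovered, "missing").isPresent());  // false
    }
}
```

A third-party module would simply register a factory with its own identifier; requesting that identifier then routes component creation to the new implementation.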