# TCP Transport

Core TCP transport implementation for internal Elasticsearch cluster communication. Provides connection management, message handling, and network optimization for node-to-node communication using Netty 3.

## Capabilities

### Netty3Transport Class

Main TCP transport implementation that handles all internal cluster communication between Elasticsearch nodes.

```java { .api }
/**
 * Netty 3-based TCP transport implementation for Elasticsearch cluster communication.
 * Supports 4 types of connections per node: low/med/high/ping.
 * - Low: for batch-oriented APIs (like recovery) with high payload
 * - Med: for typical search/single-doc index operations
 * - High: for cluster state operations
 * - Ping: for sending ping requests to other nodes
 */
public class Netty3Transport extends TcpTransport<Channel> {

    /**
     * Constructor for creating a new Netty3Transport instance.
     * @param settings Elasticsearch configuration settings
     * @param threadPool Thread pool for async operations
     * @param networkService Network utility service
     * @param bigArrays Memory management for large arrays
     * @param namedWriteableRegistry Registry for serializable objects
     * @param circuitBreakerService Circuit breaker for memory protection
     */
    public Netty3Transport(Settings settings, ThreadPool threadPool,
                           NetworkService networkService, BigArrays bigArrays,
                           NamedWriteableRegistry namedWriteableRegistry,
                           CircuitBreakerService circuitBreakerService);

    /**
     * Returns the number of currently open server channels.
     * @return Number of open server channels
     */
    public long serverOpen();

    /**
     * Configures the channel pipeline factory for client connections.
     * @return ChannelPipelineFactory for client channels
     */
    public ChannelPipelineFactory configureClientChannelPipelineFactory();

    /**
     * Configures the channel pipeline factory for server connections.
     * @param name Profile name for the server
     * @param settings Settings specific to this server profile
     * @return ChannelPipelineFactory for server channels
     */
    public ChannelPipelineFactory configureServerChannelPipelineFactory(String name, Settings settings);
}
```

### Configuration Settings

Comprehensive configuration settings for tuning TCP transport performance and behavior.

```java { .api }
/**
 * Number of worker threads for handling I/O operations.
 * Default: 2 * number of available processors
 */
public static final Setting<Integer> WORKER_COUNT =
    new Setting<>("transport.netty.worker_count", ...);

/**
 * Maximum capacity for cumulation buffers, to prevent memory issues.
 * Default: -1 (unlimited)
 */
public static final Setting<ByteSizeValue> NETTY_MAX_CUMULATION_BUFFER_CAPACITY =
    byteSizeSetting("transport.netty.max_cumulation_buffer_capacity", ...);

/**
 * Maximum number of components in composite buffers.
 * Default: -1 (unlimited)
 */
public static final Setting<Integer> NETTY_MAX_COMPOSITE_BUFFER_COMPONENTS =
    intSetting("transport.netty.max_composite_buffer_components", ...);

/**
 * Size for the receive buffer size predictor, for optimal buffer allocation.
 * Default: 64kb
 */
public static final Setting<ByteSizeValue> NETTY_RECEIVE_PREDICTOR_SIZE =
    byteSizeSetting("transport.netty.receive_predictor_size", ...);

/**
 * Minimum size for the receive buffer size predictor.
 * Default: 64b
 */
public static final Setting<ByteSizeValue> NETTY_RECEIVE_PREDICTOR_MIN =
    byteSizeSetting("transport.netty.receive_predictor_min", ...);

/**
 * Maximum size for the receive buffer size predictor.
 * Default: 64kb
 */
public static final Setting<ByteSizeValue> NETTY_RECEIVE_PREDICTOR_MAX =
    byteSizeSetting("transport.netty.receive_predictor_max", ...);

/**
 * Number of boss threads for accepting connections.
 * Default: 1
 */
public static final Setting<Integer> NETTY_BOSS_COUNT =
    intSetting("transport.netty.boss_count", ...);
```

**Usage Example:**

```java
import org.elasticsearch.transport.netty3.Netty3Transport;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.common.network.NetworkService;

// Configure transport with custom settings
Settings settings = Settings.builder()
    .put("transport.netty.worker_count", 4)
    .put("transport.netty.max_cumulation_buffer_capacity", "128mb")
    .put("transport.netty.receive_predictor_size", "128kb")
    .put("transport.netty.boss_count", 1)
    .build();

// Create transport instance (typically done by Elasticsearch internally)
Netty3Transport transport = new Netty3Transport(
    settings,
    threadPool,
    networkService,
    bigArrays,
    namedWriteableRegistry,
    circuitBreakerService
);

// Check server status
long openChannels = transport.serverOpen();
System.out.println("Open server channels: " + openChannels);
```

### Channel Pipeline Configuration

The transport configures different pipeline factories for client and server channels.

```java { .api }
/**
 * Client channel pipeline includes:
 * - Size header frame decoder for message framing
 * - Message channel handler for processing transport messages
 * - Connection tracking and management
 */
public ChannelPipelineFactory configureClientChannelPipelineFactory() {
    return new ChannelPipelineFactory() {
        @Override
        public ChannelPipeline getPipeline() throws Exception {
            ChannelPipeline pipeline = Channels.pipeline();
            pipeline.addLast("size", new Netty3SizeHeaderFrameDecoder());
            pipeline.addLast("dispatcher", new Netty3MessageChannelHandler(Netty3Transport.this, logger));
            return pipeline;
        }
    };
}

/**
 * Server channel pipeline includes:
 * - Open channel tracking
 * - Size header frame decoder
 * - Message handler for incoming requests
 * - Profile-specific configuration
 */
public ChannelPipelineFactory configureServerChannelPipelineFactory(String name, Settings settings) {
    return new ChannelPipelineFactory() {
        @Override
        public ChannelPipeline getPipeline() throws Exception {
            ChannelPipeline pipeline = Channels.pipeline();
            pipeline.addLast("open_channels", openChannelsHandler);
            pipeline.addLast("size", new Netty3SizeHeaderFrameDecoder());
            pipeline.addLast("dispatcher", new Netty3MessageChannelHandler(Netty3Transport.this, logger, name));
            return pipeline;
        }
    };
}
```
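
The size header frame decoder is what turns the raw inbound byte stream into complete transport messages before they reach the dispatcher. The self-contained sketch below illustrates the general length-prefixed framing technique; the `FrameSketch` class is hypothetical, and the exact header layout (an `'E'`/`'S'` marker followed by a four-byte length) is a simplified assumption, not the real decoder's wire format.

```java
import java.nio.ByteBuffer;

// Illustrative sketch of length-prefixed framing, similar in spirit to what
// a size-header frame decoder does. The 'E'/'S' marker and four-byte
// big-endian length prefix are assumptions made for this example.
class FrameSketch {
    static final int HEADER_SIZE = 6; // 'E' + 'S' + 4-byte length

    /**
     * Returns the payload of the next complete frame, or null if more bytes
     * are needed. On success the buffer's position is advanced past the frame;
     * on a partial frame the position is left untouched.
     */
    static byte[] nextFrame(ByteBuffer in) {
        if (in.remaining() < HEADER_SIZE) {
            return null; // header not yet complete
        }
        in.mark();
        byte e = in.get();
        byte s = in.get();
        if (e != 'E' || s != 'S') {
            in.reset();
            throw new IllegalStateException("invalid frame marker");
        }
        int length = in.getInt(); // big-endian payload length
        if (in.remaining() < length) {
            in.reset(); // wait until the rest of the payload arrives
            return null;
        }
        byte[] payload = new byte[length];
        in.get(payload);
        return payload;
    }
}
```

In the real pipeline this buffering-until-complete step is exactly why the decoder sits before the dispatcher: the message handler only ever sees whole messages.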

### Buffer Management

The transport uses Netty 3's buffer management system with configurable sizing and optimization:

- **Cumulation Buffers**: Used for assembling fragmented messages, with configurable maximum capacity
- **Composite Buffers**: Enable zero-copy operations by composing multiple buffer segments
- **Receive Predictors**: Dynamically adjust buffer sizes based on actual network traffic patterns
- **Worker Thread Pool**: Handles I/O operations with configurable thread count based on system capabilities
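
The receive predictor behavior can be sketched as a simple feedback loop: grow the next buffer after a read that filled it, shrink after repeated small reads. The `ReceivePredictorSketch` class below is a hypothetical, self-contained model; the doubling/halving policy and thresholds are assumptions for illustration, not Netty 3's actual algorithm (which steps through a table of sizes).

```java
// Simplified model of an adaptive receive-buffer-size predictor.
// The grow/shrink policy here is an illustrative assumption.
class ReceivePredictorSketch {
    private final int min;
    private final int max;
    private int next;
    private int smallReads; // consecutive reads well under the current size

    ReceivePredictorSketch(int min, int initial, int max) {
        this.min = min;
        this.max = max;
        this.next = initial;
    }

    /** Size to use for the next receive buffer allocation. */
    int nextReceiveBufferSize() {
        return next;
    }

    /** Record how many bytes the last read actually produced. */
    void previousReceiveBufferSize(int actualBytes) {
        if (actualBytes >= next) {
            // Buffer was filled: grow so large messages need fewer reads.
            next = Math.min(max, next * 2);
            smallReads = 0;
        } else if (actualBytes < next / 2) {
            // Shrink only after two consecutive small reads, to avoid thrashing.
            if (++smallReads >= 2) {
                next = Math.max(min, next / 2);
                smallReads = 0;
            }
        } else {
            smallReads = 0;
        }
    }
}
```

The `receive_predictor_min`/`max` settings above bound exactly this kind of adjustment, so a burst of large messages cannot grow buffers without limit.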

### Connection Types

The transport supports multiple connection types for different traffic patterns:

- **Low Priority**: Batch operations like recovery, whose high payload could block regular requests
- **Medium Priority**: Standard operations like search queries and single-document indexing
- **High Priority**: Critical cluster operations like cluster state updates
- **Ping**: Dedicated connections for node health checks and discovery
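
Conceptually this is a mapping from request kind to connection class, so heavy traffic never starves light traffic. The sketch below is a hypothetical model of that routing decision: the `ConnectionClassSketch` enum, the per-class channel counts, and the example action names are all illustrative assumptions, not the transport's actual values or API.

```java
// Illustrative model of the four connection classes described above.
// Channel counts and action-name prefixes are example assumptions.
enum ConnectionClassSketch {
    LOW(2),   // batch/recovery traffic with large payloads
    MED(6),   // regular search and indexing requests
    HIGH(1),  // cluster state and other coordination traffic
    PING(1);  // node liveness checks

    final int channels; // how many sockets this class gets per node

    ConnectionClassSketch(int channels) {
        this.channels = channels;
    }

    /** Pick the class for a request so bulk traffic cannot block light traffic. */
    static ConnectionClassSketch forAction(String action) {
        if (action.startsWith("internal:recovery")) return LOW;
        if (action.startsWith("cluster:")) return HIGH;
        if (action.startsWith("internal:ping")) return PING;
        return MED;
    }
}
```

Because each class owns its own sockets, a long-running recovery transfer on a LOW channel leaves the MED and HIGH channels free for searches and cluster state updates.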

### Error Handling

The transport includes comprehensive error handling for network failures, connection timeouts, and message corruption, with automatic retry mechanisms and circuit breaker integration for system protection.
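
As a rough illustration of that pattern (and explicitly not the transport's actual code), a retry loop guarded by a failure-counting breaker can look like the hypothetical `RetrySketch` below; the class name, threshold, and reset policy are all assumptions for this sketch.

```java
import java.util.concurrent.Callable;

// Illustrative retry-with-breaker pattern: retry transient failures, but
// stop calling the operation entirely once too many failures accumulate.
class RetrySketch {
    private final int breakerThreshold;
    private int consecutiveFailures;

    RetrySketch(int breakerThreshold) {
        this.breakerThreshold = breakerThreshold;
    }

    <T> T call(Callable<T> op, int maxRetries) throws Exception {
        if (consecutiveFailures >= breakerThreshold) {
            // Breaker is open: fail fast instead of piling on a sick peer.
            throw new IllegalStateException("breaker open: too many failures");
        }
        Exception last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                T result = op.call();
                consecutiveFailures = 0; // success closes the breaker again
                return result;
            } catch (Exception e) {
                last = e;
                consecutiveFailures++;
            }
        }
        throw last; // retries exhausted; surface the final failure
    }
}
```

The real circuit breaker integration additionally accounts for memory (via `CircuitBreakerService`), tripping on estimated byte usage rather than only on failure counts.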