Netty 4 based transport implementation plugin for Elasticsearch, providing the high-performance networking layer for HTTP and node-to-node communication.
The TCP transport layer provides high-performance, non-blocking I/O for node-to-node communication in Elasticsearch clusters. Built on Netty 4, it handles cluster coordination, data replication, search requests, and all internal messaging between cluster nodes.
Core TCP transport implementation extending Elasticsearch's TcpTransport base class with Netty4-specific networking.
/**
* Netty4 implementation of TCP transport for node-to-node communication
* Handles cluster messaging, data replication, and search coordination
*/
public class Netty4Transport extends TcpTransport {
// Configuration settings for performance tuning
public static final Setting<Integer> WORKER_COUNT;
public static final Setting<ByteSizeValue> NETTY_RECEIVE_PREDICTOR_SIZE;
public static final Setting<ByteSizeValue> NETTY_RECEIVE_PREDICTOR_MIN;
public static final Setting<ByteSizeValue> NETTY_RECEIVE_PREDICTOR_MAX;
public static final Setting<Integer> NETTY_BOSS_COUNT;
}

Performance and resource configuration settings for optimal cluster communication.
/**
* Number of worker threads for handling network I/O operations
* Default: number of available processors
*/
public static final Setting<Integer> WORKER_COUNT = new Setting<>(
"transport.netty.worker_count",
(s) -> Integer.toString(EsExecutors.allocatedProcessors(s)),
(s) -> Setting.parseInt(s, 1, "transport.netty.worker_count"),
Property.NodeScope
);
/**
* Initial buffer size for receive buffer prediction
* Netty adaptively adjusts buffer sizes based on actual data patterns
*/
public static final Setting<ByteSizeValue> NETTY_RECEIVE_PREDICTOR_SIZE = Setting.byteSizeSetting(
"transport.netty.receive_predictor_size",
new ByteSizeValue(64, ByteSizeUnit.KB),
Property.NodeScope
);
/**
* Minimum receive buffer size for adaptive allocation
*/
public static final Setting<ByteSizeValue> NETTY_RECEIVE_PREDICTOR_MIN = byteSizeSetting(
"transport.netty.receive_predictor_min",
NETTY_RECEIVE_PREDICTOR_SIZE,
Property.NodeScope
);
/**
* Maximum receive buffer size for adaptive allocation
*/
public static final Setting<ByteSizeValue> NETTY_RECEIVE_PREDICTOR_MAX = byteSizeSetting(
"transport.netty.receive_predictor_max",
NETTY_RECEIVE_PREDICTOR_SIZE,
Property.NodeScope
);
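The three receive-predictor settings above feed Netty's adaptive receive-buffer allocation: buffers start at `receive_predictor_size` and are adjusted between `receive_predictor_min` and `receive_predictor_max` based on observed read sizes. As a rough illustration of the idea only (a simplified sketch, not Netty's actual allocator algorithm):

```java
// Toy illustration (not Netty's actual algorithm) of adaptive receive-buffer
// prediction: the next buffer size grows when reads fill the buffer and
// shrinks when reads stay small, bounded by the configured min and max.
class AdaptiveBufferPredictor {
    private final int min;   // transport.netty.receive_predictor_min
    private final int max;   // transport.netty.receive_predictor_max
    private int next;        // starts at transport.netty.receive_predictor_size

    AdaptiveBufferPredictor(int min, int initial, int max) {
        this.min = min;
        this.max = max;
        this.next = initial;
    }

    int nextBufferSize() {
        return next;
    }

    // Record how many bytes the last read actually produced.
    void recordBytesRead(int bytesRead) {
        if (bytesRead >= next) {
            next = Math.min(max, next * 2);  // buffer was full: grow
        } else if (bytesRead < next / 2) {
            next = Math.max(min, next / 2);  // mostly empty: shrink
        }
    }
}
```

Keeping the buffer sized to actual traffic avoids over-allocating memory for many mostly idle connections while still handling bursts efficiently.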
/**
* Number of boss threads for accepting connections
* Usually 1 is sufficient for most deployments
*/
public static final Setting<Integer> NETTY_BOSS_COUNT = intSetting(
"transport.netty.boss_count",
1,
1,
Property.NodeScope
);

Configuration Example:
Settings transportSettings = Settings.builder()
.put("transport.netty.worker_count", 4)
.put("transport.netty.receive_predictor_size", "128kb")
.put("transport.netty.receive_predictor_min", "64kb")
.put("transport.netty.receive_predictor_max", "256kb")
.put("transport.netty.boss_count", 1)
.build();

Client-side TCP channel implementation for outbound connections to other cluster nodes.
/**
* Netty4 implementation of TCP channel for node-to-node communication
* Handles connection lifecycle and message transmission
*/
public class Netty4TcpChannel implements TcpChannel {
/**
* Adds listener to be notified when channel closes
* @param listener Callback invoked on channel close
*/
void addCloseListener(ActionListener<Void> listener);
/**
* Checks if channel is currently open and active
* @return true if channel is open, false otherwise
*/
boolean isOpen();
/**
* Gets local socket address for this channel
* @return Local InetSocketAddress
*/
InetSocketAddress getLocalAddress();
/**
* Gets remote socket address for this channel
* @return Remote InetSocketAddress of connected peer
*/
InetSocketAddress getRemoteAddress();
/**
* Sends message bytes to remote peer asynchronously
* @param reference Message bytes to send
* @param listener Callback for send completion or failure
*/
void sendMessage(BytesReference reference, ActionListener<Void> listener);
/**
* Closes the channel and releases resources
*/
void close();
}

Usage Example:
// Channel created by transport layer
Netty4TcpChannel channel = // obtained from transport
// Send message to remote node
BytesReference messageData = // serialized message
channel.sendMessage(messageData, new ActionListener<Void>() {
@Override
public void onResponse(Void response) {
// Message sent successfully
}
@Override
public void onFailure(Exception e) {
// Handle send failure
}
});
// Monitor channel lifecycle
channel.addCloseListener(ActionListener.wrap(
    v -> logger.info("Channel closed"),
    e -> logger.error("Channel close error", e)
));

Server-side TCP channel implementation for accepting inbound connections from other cluster nodes.
/**
* Netty4 server channel implementation for accepting TCP connections
* Binds to local address and accepts connections from cluster peers
*/
public class Netty4TcpServerChannel implements TcpServerChannel {
/**
* Adds listener to be notified when server channel closes
* @param listener Callback invoked on channel close
*/
void addCloseListener(ActionListener<Void> listener);
/**
* Checks if server channel is currently open and accepting connections
* @return true if channel is open, false otherwise
*/
boolean isOpen();
/**
* Gets local socket address this server is bound to
* @return Local InetSocketAddress for bound server socket
*/
InetSocketAddress getLocalAddress();
/**
* Closes the server channel and stops accepting new connections
*/
void close();
}

Usage Example:
// Server channel created by transport layer
Netty4TcpServerChannel serverChannel = // obtained from transport
// Monitor server channel lifecycle
serverChannel.addCloseListener(ActionListener.wrap(
    v -> logger.info("Server channel closed"),
    e -> logger.error("Server channel close error", e)
));
// Check if server is accepting connections
if (serverChannel.isOpen()) {
InetSocketAddress boundAddress = serverChannel.getLocalAddress();
logger.info("Transport server listening on {}", boundAddress);
}

Netty pipeline handler for processing transport messages and managing channel lifecycle.
/**
* Netty channel handler for processing transport protocol messages
* Handles message framing, deserialization, and dispatch to transport layer
*/
public class Netty4MessageChannelHandler // extends Netty's channel handler

The TCP transport layer provides several key features:
Message Framing: Transport messages are framed with length headers for reliable message boundaries
Compression: Optional message compression to reduce network bandwidth usage
Connection Pooling: Reuses connections between nodes to minimize connection overhead
Adaptive Buffering: Netty's adaptive receive buffer allocation optimizes memory usage based on traffic patterns
Flow Control: Back-pressure mechanisms prevent overwhelming slower nodes
Connection Management: Automatic connection retry and failure detection for resilient cluster communication
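The message-framing feature listed above means every transport message is prefixed with a length header so the receiver can recover exact message boundaries from a continuous byte stream. A hypothetical stand-alone sketch of that idea in plain Java (an illustration, not the transport's actual wire format):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical illustration of length-prefixed framing: each message is
// written as a 4-byte big-endian length followed by the payload, so the
// reader always knows where one message ends and the next begins.
final class LengthPrefixedFraming {

    static byte[] frame(byte[] payload) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DataOutputStream data = new DataOutputStream(out);
        data.writeInt(payload.length); // length header
        data.write(payload);           // message body
        return out.toByteArray();
    }

    static byte[] unframe(DataInputStream in) throws IOException {
        int length = in.readInt();     // read the header first
        byte[] payload = new byte[length];
        in.readFully(payload);         // then exactly `length` body bytes
        return payload;
    }
}
```

Because TCP is a stream protocol with no built-in message boundaries, framing like this is what lets a pipeline handler such as Netty4MessageChannelHandler dispatch complete messages to the transport layer.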
The TCP transport integrates with Elasticsearch through several mechanisms:
Service Discovery: Integrates with Elasticsearch's discovery system to find cluster nodes
Message Serialization: Uses Elasticsearch's StreamInput/StreamOutput for efficient message serialization
Thread Pool Integration: Leverages Elasticsearch's thread pools for message processing
Circuit Breaker Integration: Respects circuit breakers to prevent resource exhaustion
Metrics and Monitoring: Provides transport statistics and performance metrics
Security Integration: Supports TLS encryption and authentication when configured
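On the serialization point above: StreamOutput keeps messages compact in part by writing integers in a variable-length encoding (writeVInt), where small values occupy fewer bytes on the wire. A simplified stand-alone sketch of that encoding idea (not the actual StreamInput/StreamOutput implementation):

```java
import java.io.ByteArrayOutputStream;

// Simplified sketch of variable-length integer (vint) encoding, the idea
// behind StreamOutput.writeVInt: each byte carries 7 bits of the value,
// and the high bit marks "more bytes follow".
final class VInt {

    static byte[] encode(int value) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((value & ~0x7F) != 0) {
            out.write((value & 0x7F) | 0x80); // 7 payload bits + continuation bit
            value >>>= 7;
        }
        out.write(value); // final byte, continuation bit clear
        return out.toByteArray();
    }

    static int decode(byte[] bytes) {
        int value = 0;
        int shift = 0;
        for (byte b : bytes) {
            value |= (b & 0x7F) << shift;
            shift += 7;
            if ((b & 0x80) == 0) {
                break; // continuation bit clear: last byte
            }
        }
        return value;
    }
}
```

Values below 128 fit in a single byte, which pays off for the many small counts and lengths inside cluster messages.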
The transport layer is the foundation for all cluster communication, including cluster coordination, data replication, search requests, and internal messaging between nodes.
Install with Tessl CLI
npx tessl i tessl/maven-org-elasticsearch-plugin--transport-netty4-client