tessl/maven-com-alibaba-csp--sentinel-cluster-client-default

Default cluster client implementation for Sentinel cluster flow control


Network Transport

The network transport layer provides high-performance, Netty-based communication with cluster servers, including connection management, reconnection logic, and request/response processing.

Capabilities

NettyTransportClient

Low-level transport client that handles the actual network communication with cluster servers using Netty.

/**
 * Netty-based transport client for cluster server communication
 */
public class NettyTransportClient implements ClusterTransportClient {
    /**
     * Create transport client for specified server
     * @param host server hostname or IP address (must not be blank)
     * @param port server port number (must be > 0)
     */
    public NettyTransportClient(String host, int port);
    
    /**
     * Start the transport client and establish connection
     * @throws Exception if connection setup fails
     */
    public void start() throws Exception;
    
    /**
     * Stop the transport client and close connections
     * @throws Exception if shutdown fails
     */
    public void stop() throws Exception;
    
    /**
     * Check if transport client is ready to send requests
     * @return true if connected and ready, false otherwise
     */
    public boolean isReady();
    
    /**
     * Send cluster request and wait for response
     * @param request cluster request to send (must not be null with valid type)
     * @return cluster response from server
     * @throws Exception if request fails or times out
     */
    public ClusterResponse sendRequest(ClusterRequest request) throws Exception;
}

Usage Examples:

import com.alibaba.csp.sentinel.cluster.client.NettyTransportClient;
import com.alibaba.csp.sentinel.cluster.request.ClusterRequest;
import com.alibaba.csp.sentinel.cluster.response.ClusterResponse;
import com.alibaba.csp.sentinel.cluster.ClusterConstants;

// Create and start transport client
NettyTransportClient client = new NettyTransportClient("cluster-server", 8719);
try {
    client.start();
    
    // Wait for connection to be established
    while (!client.isReady()) {
        Thread.sleep(100);
    }
    
    // Send ping request
    ClusterRequest pingRequest = new ClusterRequest<>(ClusterConstants.MSG_TYPE_PING, null);
    ClusterResponse response = client.sendRequest(pingRequest);
    
    if (response.getStatus() == ClusterConstants.RESPONSE_STATUS_OK) {
        System.out.println("Ping successful");
    }
    
} finally {
    client.stop();
}

Connection Management

The transport client automatically handles connection lifecycle and recovery.

Connection States:

  • CLIENT_STATUS_OFF: Client is stopped or not connected
  • CLIENT_STATUS_PENDING: Client is attempting to connect (used internally)
  • CLIENT_STATUS_STARTED: Client is connected and ready

Note: The DefaultClusterTokenClient's getState() method simplifies the transport client state to only OFF or STARTED, but internally the transport client uses PENDING during connection attempts.
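
The state machine described above can be modeled in a few lines of plain Java. The enum values mirror the three status constants; the transition rules are an illustrative assumption based on the description, not the library's actual internals.

```java
// Illustrative sketch of the client status lifecycle described above.
// Transition rules are assumptions for illustration, not the library's code.
class ClientStateSketch {
    enum Status { OFF, PENDING, STARTED }

    // start() moves OFF -> PENDING while the connection is being established
    static Status onStart(Status s) { return s == Status.OFF ? Status.PENDING : s; }

    // A successful connect moves PENDING -> STARTED
    static Status onConnected(Status s) { return s == Status.PENDING ? Status.STARTED : s; }

    // stop() (or a dropped connection) returns the client to OFF from any state
    static Status onStop(Status s) { return Status.OFF; }

    // isReady() only reports true once the client has reached STARTED
    static boolean isReady(Status s) { return s == Status.STARTED; }
}
```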

Connection Features:

// Automatic reconnection with linearly increasing backoff
public static final int RECONNECT_DELAY_MS = 2000; // Base reconnection delay

// Connection monitoring
if (client.isReady()) {
    // Safe to send requests
    ClusterResponse response = client.sendRequest(request);
} else {
    // Client not ready - connection may be establishing
    System.out.println("Client not ready for requests");
}

Reconnection Behavior:

// The client automatically reconnects with increasing delays:
// First failure: reconnect after 2 seconds
// Second failure: reconnect after 4 seconds  
// Third failure: reconnect after 6 seconds
// etc.

// Reconnection can be controlled by stopping the client
client.stop(); // Disables automatic reconnection
client.start(); // Re-enables reconnection
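
The delay schedule listed above (2 s, 4 s, 6 s, ...) grows by a fixed base per consecutive failure. A minimal sketch of that computation, assuming the delay is simply the base multiplied by the failure count (which matches the listed values, though the library's exact formula may differ):

```java
// Sketch of the backoff schedule described above: base delay times the number
// of consecutive failures. An assumption matching the documented delays.
class ReconnectBackoff {
    static final int RECONNECT_DELAY_MS = 2000; // base delay between attempts

    // nth consecutive failure (1-based) -> delay in ms before the next attempt
    static long delayMs(int consecutiveFailures) {
        return (long) RECONNECT_DELAY_MS * consecutiveFailures;
    }
}
```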

Request Processing

The transport client handles synchronous request/response communication with timeout support.

Request Flow:

  1. Validate request parameters
  2. Generate unique request ID
  3. Send request over network
  4. Wait for response with timeout
  5. Return response or throw timeout exception
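
The five steps above can be sketched with JDK primitives alone: assign an ID, register a pending future, and block with a timeout until a "response handler" completes it. The class and method names here are illustrative assumptions, not the transport client's real internals (which use Netty promises); the network write in step 3 is stubbed out.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// JDK-only sketch of the request flow above (names are illustrative).
class RequestFlowSketch {
    static final AtomicInteger ID_GEN = new AtomicInteger();
    static final ConcurrentMap<Integer, CompletableFuture<String>> PENDING =
        new ConcurrentHashMap<>();

    static String send(String payload, long timeoutMs) {
        // Step 1: validate request parameters
        if (payload == null) throw new IllegalArgumentException("BAD_REQUEST");
        // Step 2: generate a unique request ID
        int xid = ID_GEN.incrementAndGet();
        // Step 3: register a pending future, then write to the network (stubbed here)
        CompletableFuture<String> pending = new CompletableFuture<>();
        PENDING.put(xid, pending);
        try {
            // Steps 4-5: wait for the response, or fail on timeout
            return pending.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (Exception e) {
            throw new RuntimeException("REQUEST_TIME_OUT", e);
        } finally {
            PENDING.remove(xid); // always clean up the correlation entry
        }
    }

    // Invoked by the response handler when a reply carrying this ID arrives
    static boolean complete(int xid, String response) {
        CompletableFuture<String> f = PENDING.get(xid);
        return f != null && f.complete(response);
    }
}
```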

Error Handling:

try {
    ClusterResponse response = client.sendRequest(request);
    // Process successful response
} catch (SentinelClusterException e) {
    if (e.getMessage().contains("REQUEST_TIME_OUT")) {
        System.err.println("Request timed out");
    } else if (e.getMessage().contains("CLIENT_NOT_READY")) {
        System.err.println("Client not connected");
    } else if (e.getMessage().contains("BAD_REQUEST")) {
        System.err.println("Invalid request format");
    }
} catch (Exception e) {
    System.err.println("Network error: " + e.getMessage());
}

Network Configuration

The transport client uses optimized Netty settings for cluster communication.

Netty Configuration:

  • Channel Type: NioSocketChannel for non-blocking I/O
  • TCP_NODELAY: Enabled for low latency
  • Connection Pooling: PooledByteBufAllocator for efficient memory management
  • Frame Encoding: Length-field based framing for reliable message boundaries

Timeout Configuration:

import com.alibaba.csp.sentinel.cluster.client.config.ClusterClientConfigManager;

// Connection timeout (from config)
int connectTimeout = ClusterClientConfigManager.getConnectTimeout();

// Request timeout (from config)  
int requestTimeout = ClusterClientConfigManager.getRequestTimeout();

// These timeouts are automatically applied by the transport client

Pipeline Configuration

The Netty pipeline includes several handlers for request/response processing:

Pipeline Components:

  1. LengthFieldBasedFrameDecoder: Handles message framing
  2. NettyResponseDecoder: Decodes incoming responses
  3. LengthFieldPrepender: Adds length headers to outgoing messages
  4. NettyRequestEncoder: Encodes outgoing requests
  5. TokenClientHandler: Manages request/response correlation
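
The framing performed by handlers 1 and 3 can be illustrated without Netty: each frame is a length header followed by the payload, so the decoder can tell a complete message from a partial one. The 2-byte header width here is an assumption for illustration; the actual pipeline's field width is configured in the client.

```java
import java.nio.ByteBuffer;

// Plain-JDK sketch of length-field framing, the technique used by
// LengthFieldPrepender / LengthFieldBasedFrameDecoder in the pipeline above.
class LengthFieldFraming {
    // Prepend a 2-byte big-endian length header (LengthFieldPrepender's role)
    static byte[] frame(byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(2 + payload.length);
        buf.putShort((short) payload.length).put(payload);
        return buf.array();
    }

    // Extract one complete frame, or return null if the buffer does not yet
    // hold a whole message (LengthFieldBasedFrameDecoder's role)
    static byte[] deframe(ByteBuffer in) {
        if (in.remaining() < 2) return null;     // header not fully received
        in.mark();
        int len = in.getShort() & 0xFFFF;
        if (in.remaining() < len) {              // payload not fully received
            in.reset();
            return null;
        }
        byte[] payload = new byte[len];
        in.get(payload);
        return payload;
    }
}
```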

Request Correlation Management

The transport client uses a promise-based system to correlate asynchronous requests and responses.

/**
 * Utility for managing request/response correlation using Netty promises
 */
public final class TokenClientPromiseHolder {
    /**
     * Store a promise for a request ID
     * @param xid unique request ID
     * @param promise Netty channel promise for the request
     */
    public static void putPromise(int xid, ChannelPromise promise);
    
    /**
     * Get promise entry for a request ID
     * @param xid unique request ID
     * @return entry containing promise and response, or null if not found
     */
    public static SimpleEntry<ChannelPromise, ClusterResponse> getEntry(int xid);
    
    /**
     * Remove promise entry for a request ID
     * @param xid unique request ID to remove
     */
    public static void remove(int xid);
    
    /**
     * Complete a promise with response data
     * @param xid unique request ID
     * @param response cluster response to associate with promise
     * @return true if promise was completed successfully
     */
    public static <T> boolean completePromise(int xid, ClusterResponse<T> response);
}
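
A JDK-only analogue of this holder can clarify the put/get/complete/remove lifecycle, using `CompletableFuture` in place of Netty's `ChannelPromise`. The method shapes mirror the API above; the map-based implementation is an illustrative assumption.

```java
import java.util.concurrent.*;

// Sketch of the promise-holder pattern above, with CompletableFuture standing
// in for Netty's ChannelPromise.
final class PromiseHolderSketch {
    private static final ConcurrentMap<Integer, CompletableFuture<Object>> MAP =
        new ConcurrentHashMap<>();

    // Register a pending promise under the request's unique ID
    static void putPromise(int xid, CompletableFuture<Object> promise) {
        MAP.put(xid, promise);
    }

    // Look up the pending promise, or null if the ID is unknown
    static CompletableFuture<Object> getPromise(int xid) {
        return MAP.get(xid);
    }

    // Drop the entry once the request has completed or timed out
    static void remove(int xid) {
        MAP.remove(xid);
    }

    // Complete the waiting promise with the decoded response;
    // false if no request with this ID is pending
    static boolean completePromise(int xid, Object response) {
        CompletableFuture<Object> p = MAP.get(xid);
        return p != null && p.complete(response);
    }
}
```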

Thread Safety

The NettyTransportClient is thread-safe for concurrent request sending, but each client instance should only be started and stopped from a single thread.

Safe Concurrent Usage:

NettyTransportClient client = new NettyTransportClient("server", 8719);
client.start();

// Multiple threads can safely send requests concurrently
ExecutorService executor = Executors.newFixedThreadPool(10);
for (int i = 0; i < 100; i++) {
    final int requestId = i;
    executor.submit(() -> {
        try {
            ClusterRequest request = createRequest(requestId);
            ClusterResponse response = client.sendRequest(request);
            processResponse(response);
        } catch (Exception e) {
            handleError(e);
        }
    });
}

executor.shutdown();
client.stop();

Resource Management

The transport client properly manages network resources and cleanup.

Resource Cleanup:

  • Netty EventLoopGroup shutdown
  • Channel closure and cleanup
  • Connection pool resource release
  • Request correlation cleanup

Proper Shutdown:

NettyTransportClient client = new NettyTransportClient("server", 8719);
try {
    client.start();
    // Use client for requests
} finally {
    // Always stop client to release resources
    client.stop();
}

// Or with try-with-resources pattern (if implementing AutoCloseable)
// Currently NettyTransportClient does not implement AutoCloseable

Performance Considerations

Connection Reuse:

  • Create one transport client per target server
  • Reuse client instances across multiple requests
  • Avoid creating new clients for each request

Request Batching:

  • Send multiple requests concurrently rather than sequentially
  • Use connection pooling for high-throughput scenarios
  • Monitor connection state before sending requests

Memory Management:

  • Uses pooled ByteBuf allocators for efficient memory usage
  • Automatic cleanup of request correlation data
  • Proper resource disposal on client shutdown

Install with Tessl CLI

npx tessl i tessl/maven-com-alibaba-csp--sentinel-cluster-client-default
