# Configuration Management

The configuration management API provides a flexible and extensible system for managing transport layer settings and network parameters in Apache Spark. It uses a provider pattern to abstract configuration sources and includes comprehensive settings for performance tuning, security, and network behavior.

## Capabilities

### TransportConf

Central configuration management class for the transport layer, providing access to all network-related settings.

```java { .api }
/**
 * Create a transport configuration with module name and config provider
 * @param module - String identifier for the configuration module (e.g., "spark", "shuffle")
 * @param conf - ConfigProvider instance supplying configuration values
 */
public TransportConf(String module, ConfigProvider conf);

/**
 * Get the module name for this configuration
 * @return String representing the module identifier
 */
public String getModule();

/**
 * Get the underlying configuration provider
 * @return ConfigProvider instance used by this configuration
 */
public ConfigProvider getConfigProvider();
```

### I/O Configuration

Settings for controlling low-level I/O behavior and performance characteristics.

```java { .api }
/**
 * Get the I/O mode for network operations
 * @return String representing the I/O mode ("NIO" or "EPOLL")
 */
public String ioMode();

/**
 * Check if direct ByteBuffers should be preferred for network operations
 * @return boolean indicating preference for direct buffers
 */
public boolean preferDirectBufs();

/**
 * Get the size of receive buffers for network operations
 * @return int representing buffer size in bytes
 */
public int receiveBuf();

/**
 * Get the size of send buffers for network operations
 * @return int representing buffer size in bytes
 */
public int sendBuf();

/**
 * Check if TCP keep-alive (SO_KEEPALIVE) should be enabled
 * @return boolean indicating if TCP keep-alive probes are enabled
 */
public boolean enableTcpKeepAlive();

/**
 * Check if SO_REUSEADDR should be enabled
 * @return boolean indicating if address reuse should be enabled
 */
public boolean enableTcpReuseAddr();
```

### Connection Management

Configuration for connection timeouts, pooling, and lifecycle management.

```java { .api }
/**
 * Get the connection timeout in milliseconds
 * @return int representing timeout for establishing connections
 */
public int connectionTimeoutMs();

/**
 * Get the connection creation timeout in milliseconds
 * @return int representing timeout for creating new connections
 */
public int connectionCreationTimeoutMs();

/**
 * Get the number of connections per peer
 * @return int representing maximum connections to maintain per remote peer
 */
public int numConnectionsPerPeer();

/**
 * Get the maximum number of retries for connection attempts
 * @return int representing retry count for failed connections
 */
public int maxRetries();

/**
 * Get the retry wait time in milliseconds
 * @return int representing wait time between connection retry attempts
 */
public int retryWaitMs();

/**
 * Get the idle timeout for connections in milliseconds
 * @return int representing timeout for idle connection cleanup
 */
public int connectionIdleTimeoutMs();
```
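The retry settings are easiest to understand in terms of the loop that consumes them. Below is a hedged, self-contained sketch: the local `maxRetries` and `retryWaitMs` variables stand in for values that would come from `maxRetries()` and `retryWaitMs()`, and `tryConnect` is a hypothetical stand-in for a real connection attempt.

```java
// Sketch only: maxRetries and retryWaitMs stand in for values that would
// come from TransportConf.maxRetries() and TransportConf.retryWaitMs().
public class RetrySketch {
  static int attemptsMade = 0;

  // Simulated connection attempt that fails twice, then succeeds.
  static boolean tryConnect() {
    attemptsMade++;
    return attemptsMade >= 3;
  }

  public static void main(String[] args) throws InterruptedException {
    int maxRetries = 3;    // would be conf.maxRetries()
    long retryWaitMs = 10; // would be conf.retryWaitMs(); shortened for the demo

    boolean connected = false;
    // One initial attempt plus up to maxRetries retries.
    for (int attempt = 0; attempt <= maxRetries && !connected; attempt++) {
      connected = tryConnect();
      if (!connected && attempt < maxRetries) {
        Thread.sleep(retryWaitMs); // fixed wait between attempts
      }
    }
    System.out.println("connected=" + connected + " after " + attemptsMade + " attempts");
  }
}
```

The wait here is a fixed delay between attempts; a real client might layer backoff on top of it.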
### Threading Configuration

Settings for controlling thread pool sizes and concurrency behavior.

```java { .api }
/**
 * Get the number of server threads for handling connections
 * @return int representing server thread pool size
 */
public int serverThreads();

/**
 * Get the number of client threads for handling connections
 * @return int representing client thread pool size
 */
public int clientThreads();

/**
 * Get the number of threads for the shared client factory
 * @return int representing shared thread pool size
 */
public int sharedClientFactoryThreads();

/**
 * Check if client factory should be shared across contexts
 * @return boolean indicating if client factory sharing is enabled
 */
public boolean sharedClientFactory();
```
### Security Configuration

Settings for encryption, authentication, and security-related features.

```java { .api }
/**
 * Check if transport layer encryption is enabled
 * @return boolean indicating if encryption should be used
 */
public boolean encryptionEnabled();

/**
 * Check if SASL encryption is enabled
 * @return boolean indicating if SASL encryption should be used
 */
public boolean saslEncryption();

/**
 * Get the encryption key length in bits
 * @return int representing key length (128, 192, or 256)
 */
public int encryptionKeyLength();

/**
 * Get the cipher transformation for encryption
 * @return String representing the cipher transformation (e.g., "AES/CTR/NoPadding")
 */
public String cipherTransformation();

/**
 * Check if authentication is required
 * @return boolean indicating if client authentication is mandatory
 */
public boolean authenticationEnabled();

/**
 * Get the SASL authentication timeout in milliseconds
 * @return int representing timeout for SASL authentication
 */
public int saslTimeoutMs();
```
### Memory Management

Configuration for memory usage, buffer management, and garbage collection optimization.

```java { .api }
/**
 * Get the maximum size for in-memory shuffle blocks
 * @return long representing maximum block size in bytes
 */
public long maxInMemoryShuffleBlockSize();

/**
 * Get the memory fraction for off-heap storage
 * @return double representing fraction of available memory for off-heap use
 */
public double memoryFraction();

/**
 * Check if off-heap memory should be used
 * @return boolean indicating if off-heap storage is enabled
 */
public boolean offHeapEnabled();

/**
 * Get the memory map threshold for file operations
 * @return long representing minimum file size for memory mapping
 */
public long memoryMapThreshold();
```
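Size-valued properties like these are often set as suffixed strings such as "2MB" or "64MB" rather than raw byte counts, while the getters return plain byte values. As a rough illustration of that mapping, here is a simplified, self-contained parser sketch; it is not Spark's actual parsing code, which supports more suffixes and stricter validation.

```java
// Simplified stand-in for size-string parsing, written for illustration only.
public class SizeParseSketch {
  static long parseBytes(String s) {
    String v = s.trim().toLowerCase();
    long multiplier = 1;
    if (v.endsWith("kb")) {
      multiplier = 1024L;
      v = v.substring(0, v.length() - 2);
    } else if (v.endsWith("mb")) {
      multiplier = 1024L * 1024;
      v = v.substring(0, v.length() - 2);
    } else if (v.endsWith("gb")) {
      multiplier = 1024L * 1024 * 1024;
      v = v.substring(0, v.length() - 2);
    }
    // A bare number is taken as a byte count.
    return Long.parseLong(v.trim()) * multiplier;
  }

  public static void main(String[] args) {
    System.out.println(parseBytes("2MB"));   // 2097152
    System.out.println(parseBytes("64MB"));  // 67108864
    System.out.println(parseBytes("65536")); // 65536
  }
}
```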
### Transfer and Streaming Settings

Configuration for data transfer behavior, streaming operations, and chunk management.

```java { .api }
/**
 * Get the maximum number of chunks allowed to be transferred at the same time
 * @return long representing the limit on concurrently transferred chunks
 */
public long maxChunksBeingTransferred();

/**
 * Get the maximum size of messages that can be sent
 * @return long representing maximum message size in bytes
 */
public long maxMessageSize();

/**
 * Get the chunk fetch buffer size
 * @return int representing buffer size for chunk fetching operations
 */
public int chunkFetchBufferSize();

/**
 * Get the timeout for individual chunk fetch operations
 * @return int representing timeout in milliseconds
 */
public int chunkFetchTimeoutMs();

/**
 * Check if zero-copy streaming is enabled
 * @return boolean indicating if zero-copy operations should be used
 */
public boolean zeroCopyStreaming();
```
## Configuration Provider

### ConfigProvider (Abstract Class)

Abstract base class for providing configuration values from various sources.

```java { .api }
/**
 * Get a configuration value by name
 * @param name - String key for the configuration property
 * @return String value of the property, or null if not found
 */
public abstract String get(String name);

/**
 * Get a configuration value with a default fallback
 * @param name - String key for the configuration property
 * @param defaultValue - String default value if property is not found
 * @return String value of the property, or defaultValue if not found
 */
public String get(String name, String defaultValue);

/**
 * Get a boolean configuration value with default fallback
 * @param name - String key for the configuration property
 * @param defaultValue - boolean default value if property is not found
 * @return boolean value of the property, or defaultValue if not found or invalid
 */
public boolean getBoolean(String name, boolean defaultValue);

/**
 * Get an integer configuration value with default fallback
 * @param name - String key for the configuration property
 * @param defaultValue - int default value if property is not found
 * @return int value of the property, or defaultValue if not found or invalid
 */
public int getInt(String name, int defaultValue);

/**
 * Get a long configuration value with default fallback
 * @param name - String key for the configuration property
 * @param defaultValue - long default value if property is not found
 * @return long value of the property, or defaultValue if not found or invalid
 */
public long getLong(String name, long defaultValue);

/**
 * Get a double configuration value with default fallback
 * @param name - String key for the configuration property
 * @param defaultValue - double default value if property is not found
 * @return double value of the property, or defaultValue if not found or invalid
 */
public double getDouble(String name, double defaultValue);

/**
 * Get all configuration properties as a map
 * @return Map<String, String> containing all configuration key-value pairs
 */
public Map<String, String> getAll();
```
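To make the provider pattern concrete, here is a hedged, self-contained sketch of a provider backed by JVM system properties. `ConfigProviderSketch` and `SystemPropsProvider` are simplified stand-ins written for this illustration so the snippet compiles without Spark on the classpath; they are not the real Spark classes.

```java
// Minimal stand-in for the abstract provider: subclasses supply get(name),
// and the typed accessors layer defaults and parsing on top of it.
abstract class ConfigProviderSketch {
  public abstract String get(String name);

  public String get(String name, String defaultValue) {
    String value = get(name);
    return value != null ? value : defaultValue;
  }

  public int getInt(String name, int defaultValue) {
    String value = get(name);
    return value != null ? Integer.parseInt(value) : defaultValue;
  }
}

// Provider backed by JVM system properties (e.g. -Dspark.network.io.clientThreads=4).
class SystemPropsProvider extends ConfigProviderSketch {
  @Override
  public String get(String name) {
    return System.getProperty(name);
  }
}

public class ProviderPatternDemo {
  public static void main(String[] args) {
    System.setProperty("spark.network.io.clientThreads", "4");
    ConfigProviderSketch provider = new SystemPropsProvider();
    System.out.println(provider.getInt("spark.network.io.clientThreads", 0)); // 4
    System.out.println(provider.get("spark.network.io.mode", "NIO")); // default applies
  }
}
```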
### MapConfigProvider

Map-based implementation of ConfigProvider for simple configuration scenarios.

```java { .api }
/**
 * Create a configuration provider backed by a Map
 * @param properties - Map<String, String> containing configuration properties
 */
public MapConfigProvider(Map<String, String> properties);

@Override
public String get(String name);

@Override
public Map<String, String> getAll();

/**
 * Add or update a configuration property
 * @param key - String property name
 * @param value - String property value
 */
public void set(String key, String value);

/**
 * Remove a configuration property
 * @param key - String property name to remove
 */
public void remove(String key);

/**
 * Clear all configuration properties
 */
public void clear();
```
## I/O Mode Enumeration

### IOMode

Enumeration for selecting the underlying I/O implementation.

```java { .api }
public enum IOMode {
  /**
   * Standard NIO implementation (default, works on all platforms)
   */
  NIO,

  /**
   * Linux epoll implementation (higher performance on Linux)
   */
  EPOLL;

  /**
   * Parse IOMode from string representation
   * @param value - String value ("NIO" or "EPOLL")
   * @return IOMode corresponding to the string
   * @throws IllegalArgumentException if value is not recognized
   */
  public static IOMode parse(String value);
}
```
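A small, self-contained sketch of the parsing contract described above. The enum is redeclared locally as `IOModeSketch` so the snippet runs without Spark on the classpath; the real type lives in `org.apache.spark.network.util`, and the case-insensitive `valueOf` lookup here is one plausible implementation of that contract, not necessarily Spark's.

```java
// Local copy of the enum, for illustration only.
enum IOModeSketch { NIO, EPOLL }

public class IOModeDemo {
  // Case-insensitive lookup; Enum.valueOf throws IllegalArgumentException
  // for unknown values, matching the documented behavior.
  static IOModeSketch parse(String value) {
    return IOModeSketch.valueOf(value.toUpperCase());
  }

  public static void main(String[] args) {
    System.out.println(parse("nio"));   // NIO
    System.out.println(parse("EPOLL")); // EPOLL
    try {
      parse("uring");
    } catch (IllegalArgumentException e) {
      System.out.println("unrecognized I/O mode");
    }
  }
}
```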
## Usage Examples

### Basic Configuration Setup

```java
import org.apache.spark.network.util.*;
import java.util.HashMap;
import java.util.Map;

// Create configuration properties
Map<String, String> properties = new HashMap<>();
properties.put("spark.network.timeout", "120s");
properties.put("spark.network.io.mode", "NIO");
properties.put("spark.network.io.preferDirectBufs", "true");
properties.put("spark.network.io.connectionTimeout", "30s");
properties.put("spark.network.io.numConnectionsPerPeer", "1");
properties.put("spark.network.io.serverThreads", "0"); // 0 means use default
properties.put("spark.network.io.clientThreads", "0");

// Create config provider
ConfigProvider configProvider = new MapConfigProvider(properties);

// Create transport configuration
TransportConf conf = new TransportConf("spark", configProvider);

// Access configuration values
System.out.println("I/O Mode: " + conf.ioMode());
System.out.println("Prefer Direct Buffers: " + conf.preferDirectBufs());
System.out.println("Connection Timeout: " + conf.connectionTimeoutMs() + "ms");
System.out.println("Connections Per Peer: " + conf.numConnectionsPerPeer());
System.out.println("Server Threads: " + conf.serverThreads());
System.out.println("Client Threads: " + conf.clientThreads());
```
### Security Configuration

```java
// Configure transport security
Map<String, String> securityProperties = new HashMap<>();
securityProperties.put("spark.network.crypto.enabled", "true");
securityProperties.put("spark.network.crypto.keyLength", "256");
securityProperties.put("spark.network.crypto.cipherTransformation", "AES/GCM/NoPadding");
securityProperties.put("spark.network.sasl.encryption", "true");
securityProperties.put("spark.network.sasl.timeout", "30s");
securityProperties.put("spark.authenticate", "true");

ConfigProvider securityProvider = new MapConfigProvider(securityProperties);
TransportConf securityConf = new TransportConf("secure-spark", securityProvider);

// Check security settings
System.out.println("Encryption Enabled: " + securityConf.encryptionEnabled());
System.out.println("SASL Encryption: " + securityConf.saslEncryption());
System.out.println("Key Length: " + securityConf.encryptionKeyLength() + " bits");
System.out.println("Cipher: " + securityConf.cipherTransformation());
System.out.println("Authentication Required: " + securityConf.authenticationEnabled());
System.out.println("SASL Timeout: " + securityConf.saslTimeoutMs() + "ms");
```
### Performance Tuning Configuration

```java
// Configure for high-performance scenarios
Map<String, String> performanceProperties = new HashMap<>();
performanceProperties.put("spark.network.io.mode", "EPOLL"); // Linux only
performanceProperties.put("spark.network.io.preferDirectBufs", "true");
performanceProperties.put("spark.network.io.serverThreads", "8");
performanceProperties.put("spark.network.io.clientThreads", "8");
performanceProperties.put("spark.network.io.numConnectionsPerPeer", "2");
performanceProperties.put("spark.network.io.receiveBuf", "65536"); // 64KB
performanceProperties.put("spark.network.io.sendBuf", "65536"); // 64KB
performanceProperties.put("spark.network.maxMessageSize", "128MB");
performanceProperties.put("spark.network.zeroCopy", "true");

ConfigProvider perfProvider = new MapConfigProvider(performanceProperties);
TransportConf perfConf = new TransportConf("high-perf", perfProvider);

// Validate performance settings
System.out.println("Performance Configuration:");
System.out.println(" I/O Mode: " + perfConf.ioMode());
System.out.println(" Direct Buffers: " + perfConf.preferDirectBufs());
System.out.println(" Server Threads: " + perfConf.serverThreads());
System.out.println(" Client Threads: " + perfConf.clientThreads());
System.out.println(" Connections Per Peer: " + perfConf.numConnectionsPerPeer());
System.out.println(" Receive Buffer: " + perfConf.receiveBuf() + " bytes");
System.out.println(" Send Buffer: " + perfConf.sendBuf() + " bytes");
System.out.println(" Max Message Size: " + perfConf.maxMessageSize() + " bytes");
System.out.println(" Zero Copy: " + perfConf.zeroCopyStreaming());
```
### Memory Management Configuration

```java
// Configure memory settings for large-scale operations
Map<String, String> memoryProperties = new HashMap<>();
memoryProperties.put("spark.network.memory.fraction", "0.8");
memoryProperties.put("spark.network.memory.offHeap.enabled", "true");
memoryProperties.put("spark.network.memory.memoryMapThreshold", "2MB");
memoryProperties.put("spark.shuffle.file.buffer", "1MB");
memoryProperties.put("spark.network.maxInMemoryShuffleBlockSize", "64MB");

ConfigProvider memoryProvider = new MapConfigProvider(memoryProperties);
TransportConf memoryConf = new TransportConf("memory-optimized", memoryProvider);

System.out.println("Memory Configuration:");
System.out.println(" Memory Fraction: " + memoryConf.memoryFraction());
System.out.println(" Off-Heap Enabled: " + memoryConf.offHeapEnabled());
System.out.println(" Memory Map Threshold: " + memoryConf.memoryMapThreshold() + " bytes");
System.out.println(" Max In-Memory Block Size: " + memoryConf.maxInMemoryShuffleBlockSize() + " bytes");
```
### Dynamic Configuration Updates

```java
// Create mutable configuration that can be updated at runtime
MapConfigProvider dynamicProvider = new MapConfigProvider(new HashMap<>());

// Initial configuration
dynamicProvider.set("spark.network.timeout", "60s");
dynamicProvider.set("spark.network.io.connectionTimeout", "15s");

TransportConf dynamicConf = new TransportConf("dynamic", dynamicProvider);
System.out.println("Initial timeout: " + dynamicConf.connectionTimeoutMs() + "ms");

// Update configuration at runtime
dynamicProvider.set("spark.network.io.connectionTimeout", "30s");
System.out.println("Updated timeout: " + dynamicConf.connectionTimeoutMs() + "ms");

// Add new configuration
dynamicProvider.set("spark.network.io.numConnectionsPerPeer", "3");
System.out.println("Connections per peer: " + dynamicConf.numConnectionsPerPeer());

// Remove configuration (falls back to default)
dynamicProvider.remove("spark.network.io.numConnectionsPerPeer");
System.out.println("Default connections per peer: " + dynamicConf.numConnectionsPerPeer());
```
### Custom Configuration Provider

```java
import java.util.*;

// Custom configuration provider that loads from multiple sources
public class HierarchicalConfigProvider extends ConfigProvider {
  private final List<ConfigProvider> providers;

  public HierarchicalConfigProvider(ConfigProvider... providers) {
    this.providers = Arrays.asList(providers);
  }

  @Override
  public String get(String name) {
    // Check providers from last to first, so later providers take priority
    // (consistent with the merge order in getAll below)
    for (int i = providers.size() - 1; i >= 0; i--) {
      String value = providers.get(i).get(name);
      if (value != null) {
        return value;
      }
    }
    return null;
  }

  @Override
  public Map<String, String> getAll() {
    Map<String, String> result = new HashMap<>();
    // Merge all providers, with later providers overriding earlier ones
    for (ConfigProvider provider : providers) {
      result.putAll(provider.getAll());
    }
    return result;
  }
}

// Usage: system properties override environment, which overrides defaults
Map<String, String> defaults = new HashMap<>();
defaults.put("spark.network.timeout", "120s");
defaults.put("spark.network.io.mode", "NIO");

Map<String, String> environment = new HashMap<>();
environment.put("spark.network.timeout", "180s"); // Override default

Map<String, String> systemProps = new HashMap<>();
systemProps.put("spark.network.io.mode", "EPOLL"); // Override environment

HierarchicalConfigProvider hierarchical = new HierarchicalConfigProvider(
    new MapConfigProvider(defaults),
    new MapConfigProvider(environment),
    new MapConfigProvider(systemProps)
);

TransportConf hierarchicalConf = new TransportConf("hierarchical", hierarchical);
System.out.println("Final timeout: " + hierarchicalConf.connectionTimeoutMs() + "ms"); // 180s from environment
System.out.println("Final I/O mode: " + hierarchicalConf.ioMode()); // EPOLL from system props
```
### Configuration Validation and Debugging

```java
// Utility methods for configuration validation and debugging
public class ConfigValidator {

  public static void validateConfiguration(TransportConf conf) {
    System.out.println("=== Transport Configuration Validation ===");

    // Validate I/O settings
    if (conf.ioMode().equals("EPOLL") && !isLinux()) {
      System.out.println("WARNING: EPOLL mode is only supported on Linux");
    }

    // Validate thread counts
    if (conf.serverThreads() < 0) {
      System.out.println("WARNING: Invalid server thread count: " + conf.serverThreads());
    }

    // Validate timeouts
    if (conf.connectionTimeoutMs() <= 0) {
      System.out.println("WARNING: Invalid connection timeout: " + conf.connectionTimeoutMs());
    }

    // Validate security settings
    if (conf.encryptionEnabled() && conf.encryptionKeyLength() < 128) {
      System.out.println("WARNING: Weak encryption key length: " + conf.encryptionKeyLength());
    }

    System.out.println("Configuration validation completed");
  }

  public static void printConfiguration(TransportConf conf) {
    System.out.println("=== Transport Configuration Summary ===");
    System.out.println("Module: " + conf.getModule());

    System.out.println("\nI/O Settings:");
    System.out.println(" Mode: " + conf.ioMode());
    System.out.println(" Prefer Direct Buffers: " + conf.preferDirectBufs());
    System.out.println(" Receive Buffer: " + conf.receiveBuf() + " bytes");
    System.out.println(" Send Buffer: " + conf.sendBuf() + " bytes");

    System.out.println("\nConnection Settings:");
    System.out.println(" Timeout: " + conf.connectionTimeoutMs() + "ms");
    System.out.println(" Creation Timeout: " + conf.connectionCreationTimeoutMs() + "ms");
    System.out.println(" Connections Per Peer: " + conf.numConnectionsPerPeer());
    System.out.println(" Max Retries: " + conf.maxRetries());

    System.out.println("\nThread Settings:");
    System.out.println(" Server Threads: " + conf.serverThreads());
    System.out.println(" Client Threads: " + conf.clientThreads());

    System.out.println("\nSecurity Settings:");
    System.out.println(" Encryption Enabled: " + conf.encryptionEnabled());
    System.out.println(" SASL Encryption: " + conf.saslEncryption());
    System.out.println(" Authentication Required: " + conf.authenticationEnabled());

    if (conf.encryptionEnabled()) {
      System.out.println(" Key Length: " + conf.encryptionKeyLength() + " bits");
      System.out.println(" Cipher: " + conf.cipherTransformation());
    }

    System.out.println("\nMemory Settings:");
    System.out.println(" Off-Heap Enabled: " + conf.offHeapEnabled());
    System.out.println(" Memory Fraction: " + conf.memoryFraction());
    System.out.println(" Memory Map Threshold: " + conf.memoryMapThreshold() + " bytes");

    System.out.println("=== End Configuration Summary ===");
  }

  private static boolean isLinux() {
    return System.getProperty("os.name").toLowerCase().contains("linux");
  }
}

// Usage
TransportConf conf = new TransportConf("validation-test", configProvider);
ConfigValidator.validateConfiguration(conf);
ConfigValidator.printConfiguration(conf);
```
### Configuration Profiles

```java
// Predefined configuration profiles for common scenarios
public class ConfigProfiles {

  public static TransportConf createDevelopmentConfig() {
    Map<String, String> devProps = new HashMap<>();
    devProps.put("spark.network.timeout", "30s");
    devProps.put("spark.network.io.connectionTimeout", "10s");
    devProps.put("spark.network.io.numConnectionsPerPeer", "1");
    devProps.put("spark.network.io.serverThreads", "2");
    devProps.put("spark.network.io.clientThreads", "2");
    devProps.put("spark.authenticate", "false");
    devProps.put("spark.network.crypto.enabled", "false");

    return new TransportConf("development", new MapConfigProvider(devProps));
  }

  public static TransportConf createProductionConfig() {
    Map<String, String> prodProps = new HashMap<>();
    prodProps.put("spark.network.timeout", "300s");
    prodProps.put("spark.network.io.connectionTimeout", "60s");
    prodProps.put("spark.network.io.numConnectionsPerPeer", "2");
    prodProps.put("spark.network.io.serverThreads", "0"); // Use defaults
    prodProps.put("spark.network.io.clientThreads", "0");
    prodProps.put("spark.authenticate", "true");
    prodProps.put("spark.network.crypto.enabled", "true");
    prodProps.put("spark.network.crypto.keyLength", "256");
    prodProps.put("spark.network.sasl.encryption", "true");

    return new TransportConf("production", new MapConfigProvider(prodProps));
  }

  public static TransportConf createHighThroughputConfig() {
    Map<String, String> htProps = new HashMap<>();
    htProps.put("spark.network.io.mode", "EPOLL");
    htProps.put("spark.network.io.preferDirectBufs", "true");
    htProps.put("spark.network.io.numConnectionsPerPeer", "4");
    htProps.put("spark.network.io.receiveBuf", "131072"); // 128KB
    htProps.put("spark.network.io.sendBuf", "131072"); // 128KB
    htProps.put("spark.network.maxMessageSize", "268435456"); // 256MB
    htProps.put("spark.network.zeroCopy", "true");
    htProps.put("spark.network.memory.offHeap.enabled", "true");

    return new TransportConf("high-throughput", new MapConfigProvider(htProps));
  }
}

// Usage
TransportConf devConf = ConfigProfiles.createDevelopmentConfig();
TransportConf prodConf = ConfigProfiles.createProductionConfig();
TransportConf htConf = ConfigProfiles.createHighThroughputConfig();

System.out.println("Development config - encryption: " + devConf.encryptionEnabled());
System.out.println("Production config - encryption: " + prodConf.encryptionEnabled());
System.out.println("High throughput config - I/O mode: " + htConf.ioMode());
```
## Best Practices

### Configuration Organization

1. **Module Naming**: Use descriptive module names to distinguish different transport contexts
2. **Property Grouping**: Group related properties using consistent naming conventions
3. **Environment Separation**: Use different configurations for development, testing, and production
4. **Default Values**: Provide sensible defaults for all configuration properties

### Performance Considerations

1. **I/O Mode Selection**: Use EPOLL on Linux for better performance
2. **Thread Pool Sizing**: Set thread counts based on workload characteristics
3. **Buffer Sizing**: Tune buffer sizes based on typical message sizes
4. **Connection Pooling**: Configure appropriate connection counts per peer

### Security Best Practices

1. **Enable Authentication**: Always enable authentication in production environments
2. **Use Strong Encryption**: Use AES-256 with GCM mode for authenticated encryption
3. **Timeout Configuration**: Set appropriate timeouts to prevent resource exhaustion
4. **Key Management**: Use secure key distribution mechanisms