# Connection Pooling

Thread-safe connection pooling implementations for high-concurrency applications. pylibmc provides two pooling strategies: queue-based pooling for shared access and thread-mapped pooling for per-thread client isolation.

## Capabilities

### ClientPool

Queue-based client pool that maintains a pool of client connections for thread-safe sharing. Uses a queue to manage client instances and provides context manager support for automatic resource management.

```python { .api }
class ClientPool:
    """
    Queue-based client pool for thread-safe memcached access.
    Inherits from queue.Queue for thread-safe client management.
    """

    def __init__(self, mc=None, n_slots: int = 0):
        """
        Initialize the client pool.

        Parameters:
        - mc: Master client to clone for the pool (optional)
        - n_slots (int): Number of client slots in the pool
        """

    def reserve(self, block: bool = False):
        """
        Context manager for reserving a client from the pool.

        Parameters:
        - block (bool): Whether to block if the pool is empty

        Returns:
        Context manager yielding a client instance

        Raises:
        - queue.Empty: If block=False and no clients are available
        """

    def fill(self, mc, n_slots: int):
        """
        Fill the pool with cloned clients.

        Parameters:
        - mc: Master client to clone
        - n_slots (int): Number of client instances to create
        """

    def put(self, client):
        """
        Return a client to the pool.

        Parameters:
        - client: Client instance to return
        """

    def get(self, block: bool = True):
        """
        Get a client from the pool.

        Parameters:
        - block (bool): Whether to block if the pool is empty

        Returns:
        Client instance from the pool

        Raises:
        - queue.Empty: If block=False and no clients are available
        """
```
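
The reserve/put cycle above can be sketched with the standard library alone. This is an illustrative stand-in, not pylibmc's actual implementation: `DummyClient` and `SketchPool` are hypothetical names, but the mechanics (a `queue.Queue` of clients plus a context manager that always returns the client) mirror the API described above.

```python
import queue
from contextlib import contextmanager

class DummyClient:
    """Stand-in for a memcached client (illustration only)."""
    def __init__(self, name):
        self.name = name

class SketchPool(queue.Queue):
    """Minimal sketch of a queue-based client pool."""
    @contextmanager
    def reserve(self, block=False):
        # Take a client out of the queue; raises queue.Empty
        # when block=False and no client is available.
        client = self.get(block)
        try:
            yield client
        finally:
            # Always return the client, even if the body raised.
            self.put(client)

pool = SketchPool()
for i in range(2):
    pool.put(DummyClient(f"client-{i}"))

with pool.reserve() as client:
    print(client.name)   # client-0: reserved in FIFO order
print(pool.qsize())      # 2: the client was returned by the context manager
```

The `try`/`finally` in `reserve` is the important part: it guarantees the client goes back into the queue even when the caller's code raises, which is why the context-manager form is safer than manual `get()`/`put()`.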

### ThreadMappedPool

Thread-mapped client pool that maintains one client per thread. Automatically creates clients for new threads and provides thread-local client access without explicit queuing.

```python { .api }
class ThreadMappedPool(dict):
    """
    Thread-mapped client pool with per-thread client instances.
    Inherits from dict for thread-to-client mapping.
    """

    def __init__(self, master):
        """
        Initialize the thread-mapped pool.

        Parameters:
        - master: Master client to clone for each thread
        """

    def reserve(self):
        """
        Context manager for reserving the current thread's client.
        Creates a new client for the thread if none exists.

        Returns:
        Context manager yielding the thread-local client instance
        """

    def relinquish(self):
        """
        Release the current thread's client from the pool.
        Should be called before thread exit to prevent memory leaks.

        Returns:
        The client instance that was released, or None
        """

    @property
    def current_key(self):
        """
        Get the current thread's identifier key.

        Returns:
        Thread identifier used as the pool key
        """
```
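
The per-thread mapping can be illustrated with a plain dict keyed by thread identifier. This is a stdlib-only sketch of the idea, not pylibmc's code; `SketchThreadPool` is a hypothetical name, and the "client" here is just a copied dict standing in for a cloned client:

```python
import threading
from contextlib import contextmanager

class SketchThreadPool(dict):
    """Minimal sketch of a thread-mapped pool: thread ident -> client."""
    def __init__(self, master):
        super().__init__()
        self.master = master

    @property
    def current_key(self):
        # The current thread's identifier serves as the dict key.
        return threading.get_ident()

    @contextmanager
    def reserve(self):
        key = self.current_key
        if key not in self:
            # First use on this thread: create (here, just copy) a client.
            self[key] = dict(self.master)
        yield self[key]

    def relinquish(self):
        # Drop this thread's entry so exited threads don't leak clients.
        return self.pop(self.current_key, None)

pool = SketchThreadPool({"behaviors": "example"})
results = {}

def worker(i):
    with pool.reserve() as client:
        results[i] = client   # each thread sees its own copy
    pool.relinquish()         # clean up before the thread exits

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(pool))  # 0: every thread relinquished its client
```

Because each thread only ever touches its own key, no locking or queuing is needed; the trade-off is that the pool grows with the number of threads unless `relinquish()` is called on exit.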

### Pool Management Functions

Utility methods for pool administration and monitoring. `clone()` is a method of `pylibmc.Client`; both pool types call it on the master client to create the pooled instances.

```python { .api }
def clone(self):
    """
    Create a copy of the client with the same configuration.
    Used internally by the pools to create client instances.

    Returns:
    New client instance with identical configuration
    """
```
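
The contract that matters for pooling is that each clone gets an independent copy of the master's configuration, so mutating one pooled client cannot affect its siblings. A stand-in class (hypothetical, not pylibmc's `Client`) illustrates this:

```python
import copy

class FakeClient:
    """Illustrative stand-in for a configurable memcached client."""
    def __init__(self, servers, behaviors=None):
        self.servers = servers
        self.behaviors = behaviors or {}

    def clone(self):
        # A clone copies the configuration rather than sharing it.
        return FakeClient(list(self.servers), copy.deepcopy(self.behaviors))

master = FakeClient(["localhost:11211"], {"tcp_nodelay": True})
pooled = [master.clone() for _ in range(3)]

# Clones start with identical configuration...
assert all(c.behaviors == master.behaviors for c in pooled)

# ...but mutating one clone does not touch the master or its siblings.
pooled[0].behaviors["ketama"] = True
assert "ketama" not in master.behaviors
assert "ketama" not in pooled[1].behaviors
```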

## Usage Examples

### ClientPool Usage

```python
import pylibmc
from queue import Empty

# Create master client
master_client = pylibmc.Client(["localhost:11211"], binary=True)
master_client.behaviors = {"tcp_nodelay": True, "ketama": True}

# Create and fill pool
pool = pylibmc.ClientPool()
pool.fill(master_client, 10)  # 10 client instances in pool

# Use pool in application
def handle_request():
    try:
        with pool.reserve(block=False) as client:
            client.set("request:123", "data")
            return client.get("request:123")
    except Empty:
        print("No clients available, consider increasing pool size")
        return None

# Alternative: manual client management
client = pool.get(block=True)  # Get client, blocking if needed
try:
    client.set("manual", "value")
    result = client.get("manual")
finally:
    pool.put(client)  # Always return the client to the pool
```

### ThreadMappedPool Usage

```python
import pylibmc
import threading

# Create master client
master_client = pylibmc.Client(["localhost:11211"], binary=True)
master_client.behaviors = {"tcp_nodelay": True}

# Create thread-mapped pool
pool = pylibmc.ThreadMappedPool(master_client)

def worker_thread(thread_id):
    """Worker function that uses a thread-local client."""
    try:
        # Each thread gets its own client automatically
        with pool.reserve() as client:
            client.set(f"thread:{thread_id}", f"data from {thread_id}")
            result = client.get(f"thread:{thread_id}")
            print(f"Thread {thread_id}: {result}")
    finally:
        # Clean up when the thread exits
        pool.relinquish()

# Start multiple threads
threads = []
for i in range(5):
    t = threading.Thread(target=worker_thread, args=(i,))
    threads.append(t)
    t.start()

# Wait for threads to complete
for t in threads:
    t.join()
```

### Pool Comparison

```python
import pylibmc
import time

master_client = pylibmc.Client(["localhost:11211"])

# ClientPool: good for limiting total connections
client_pool = pylibmc.ClientPool()
client_pool.fill(master_client, 5)  # At most 5 concurrent connections

# ThreadMappedPool: good for per-thread isolation
thread_pool = pylibmc.ThreadMappedPool(master_client)

def benchmark_client_pool():
    """Shared pool with potential blocking."""
    with client_pool.reserve(block=True) as client:
        client.set("test", "value")
        time.sleep(0.1)  # Simulate work
        return client.get("test")

def benchmark_thread_pool():
    """Per-thread client, no blocking."""
    with thread_pool.reserve() as client:
        client.set("test", "value")
        time.sleep(0.1)  # Simulate work
        return client.get("test")

# ClientPool may block if all 5 clients are busy
# ThreadMappedPool creates a new client per thread (no limit)
```

### Advanced Pool Configuration

```python
import pylibmc
from queue import Empty

# Custom behaviors tuned for pooled use
master_client = pylibmc.Client(["localhost:11211"], binary=True)
master_client.behaviors = {
    "tcp_nodelay": True,
    "ketama": True,
    "no_block": True,         # Non-blocking I/O
    "connect_timeout": 5000,  # 5-second connect timeout (milliseconds)
    "retry_timeout": 30       # 30-second retry timeout (seconds)
}

# Size the pool based on expected concurrent requests
expected_concurrency = 20
pool = pylibmc.ClientPool(master_client, expected_concurrency)

# Monitor pool usage
def get_pool_stats():
    """Get current pool utilization."""
    return {
        "pool_size": pool.qsize(),
        "clients_available": not pool.empty(),
        "pool_full": pool.full()
    }

# Use the pool with error handling
def safe_cache_operation(key, value=None):
    """Perform a cache operation with proper error handling."""
    try:
        with pool.reserve(block=False) as client:
            if value is not None:
                return client.set(key, value)
            else:
                return client.get(key)
    except Empty:
        print("Pool exhausted - consider increasing pool size")
        return None
    except pylibmc.Error as e:
        print(f"Cache operation failed: {e}")
        return None
```

## Pool Selection Guidelines

### Use ClientPool When:
- You want to limit the total number of connections to memcached
- Memory usage is a concern (fixed number of clients)
- You have predictable, moderate concurrency levels
- You can tolerate occasional blocking when the pool is exhausted

### Use ThreadMappedPool When:
- You have high, unpredictable concurrency
- Each thread does significant work with the client
- You want to avoid blocking entirely
- Memory usage for additional clients is acceptable
- Thread lifetime is predictable enough for proper cleanup

### Performance Considerations:
- **ClientPool**: lower memory usage, potential blocking, queue overhead
- **ThreadMappedPool**: higher memory usage, no blocking, thread-local access
- Both pools use client cloning, which preserves all behaviors and settings
- Pool overhead is minimal compared to memcached network operations