# Caching

Multiple cache implementations with different eviction strategies, function/method decorators, and cache key generation utilities. Supports LRU (Least Recently Used) and LRI (Least Recently Inserted) strategies with hit/miss statistics, property caching, and advanced function caching with scoping and type-aware keys.

## Capabilities

### Cache Data Structures

High-performance cache implementations with different eviction strategies.

```python { .api }
class LRI(dict):
    """
    Least Recently Inserted cache strategy.
    Dict subtype with hit/miss statistics and size limits.
    """
    def __init__(self, max_size=DEFAULT_MAX_SIZE): ...
    @property
    def hit_count(self): ...
    @property
    def miss_count(self): ...
    @property
    def soft_miss_count(self): ...
    def get_stats(self): ...

class LRU(LRI):
    """
    Least Recently Used cache strategy.
    Extends LRI with access-based ordering: reads refresh an entry's recency.
    """
    def __getitem__(self, key): ...
    def get(self, key, default=None): ...

class CachedFunction:
    """Wrapper for functions with caching logic."""
    def __init__(self, func, cache, scoped=True, typed=False, key=None): ...
    def __call__(self, *args, **kwargs): ...
    def cache_clear(self): ...
    def cache_info(self): ...

class CachedMethod:
    """Wrapper for methods with caching logic."""
    def __init__(self, func, cache, scoped=True, typed=False, key=None): ...
    def __get__(self, obj, objtype): ...

class ThresholdCounter:
    """
    Bounded dict-like mapping from keys to counts with automatic compaction;
    keys whose count falls below the threshold ratio may be dropped.
    """
    def __init__(self, threshold=0.001): ...
    def add(self, key): ...
    def __getitem__(self, key): ...
    def most_common(self, n=None): ...
    def get_common_count(self): ...

class MinIDMap:
    """
    Assigns weakref-able objects the smallest possible unique integer IDs.
    """
    def __init__(self): ...
    def get(self, obj): ...
    def drop(self, obj): ...
```
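The practical difference between the two strategies is what gets evicted once `max_size` is reached: LRI evicts in insertion order regardless of reads, while LRU refreshes an entry's position on every access. A small sketch of that behavior:

```python
from boltons.cacheutils import LRI, LRU

# LRI: eviction follows insertion order; reads do not refresh entries
lri = LRI(max_size=2)
lri['a'] = 1
lri['b'] = 2
_ = lri['a']        # reading 'a' does not protect it
lri['c'] = 3        # evicts 'a', the oldest insert
print('a' in lri)   # False

# LRU: a read moves the entry to most-recently-used
lru = LRU(max_size=2)
lru['a'] = 1
lru['b'] = 2
_ = lru['a']        # 'a' becomes most recently used
lru['c'] = 3        # evicts 'b' instead
print('a' in lru)   # True
```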
### Function and Method Caching

Decorators for caching function and method results.

```python { .api }
def cached(cache, scoped=True, typed=False, key=None):
    """
    Decorator to cache any function.

    Parameters:
    - cache: Cache object to store results
    - scoped (bool): Include function identity in cache key
    - typed (bool): Include argument types in cache key
    - key (callable, optional): Custom key generation function

    Returns:
    callable: Cached version of the function
    """

def cachedmethod(cache, scoped=True, typed=False, key=None):
    """
    Decorator to cache methods.

    Parameters:
    - cache: Cache object or attribute name on instance
    - scoped (bool): Include method identity in cache key
    - typed (bool): Include argument types in cache key
    - key (callable, optional): Custom key generation function

    Returns:
    callable: Cached version of the method
    """
```
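The `scoped` flag matters mainly when several callables share one cache object: with the default `scoped=True` each function's entries are keyed to that function, while `scoped=False` lets callables that receive the same arguments share results. A brief sketch (the function names here are illustrative):

```python
from boltons.cacheutils import LRI, cached

shared = LRI(max_size=128)

@cached(shared, scoped=False)
def square(x):
    print('computing square of', x)
    return x * x

@cached(shared, scoped=False)
def also_square(x):
    print('computing again for', x)   # not reached for arguments already cached by square()
    return x * x

square(3)       # computes and stores the result in the shared cache
also_square(3)  # cache hit: scoped=False keeps the function identity out of the key
```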
### Property Caching

Property descriptor that caches computed values.

```python { .api }
class cachedproperty:
    """
    Property decorator that caches the result after first access.

    Like @property, but caches the result after first computation,
    making subsequent accesses fast for expensive computations.
    """
    def __init__(self, func): ...
    def __get__(self, obj, objtype=None): ...
    def __set__(self, obj, value): ...
    def __delete__(self, obj): ...
```
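Because `cachedproperty` stores the computed value on the instance after the first access, the cache can be invalidated simply by deleting the attribute; the next access recomputes it. A short sketch (the `Report` class is illustrative):

```python
from boltons.cacheutils import cachedproperty

class Report:
    def __init__(self, rows):
        self.rows = rows

    @cachedproperty
    def total(self):
        print('summing...')
        return sum(self.rows)

r = Report([1, 2, 3])
print(r.total)   # prints 'summing...', computes and caches 6 on the instance
print(r.total)   # cached: no recomputation
del r.total      # drop the cached value
print(r.total)   # recomputes
```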
### Cache Key Generation

Utilities for generating cache keys from function arguments.

```python { .api }
def make_cache_key(args, kwargs, typed=False, **kwargs_extra):
    """
    Generate cache keys from function arguments.

    Parameters:
    - args (tuple): Positional arguments
    - kwargs (dict): Keyword arguments
    - typed (bool): Include argument types in key

    Returns:
    Hashable cache key
    """
```
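A quick sketch of how the generated keys behave: keys built from equal arguments compare and hash equal (the concrete key type is an implementation detail), and `typed=True` separates values that are equal but of different types:

```python
from boltons.cacheutils import make_cache_key

key_a = make_cache_key((1, 2), {'scale': 10})
key_b = make_cache_key((1, 2), {'scale': 10})
assert key_a == key_b and hash(key_a) == hash(key_b)  # stable and hashable

# typed=True makes 1 (int) and 1.0 (float) produce different keys
assert make_cache_key((1,), {}, typed=True) != make_cache_key((1.0,), {}, typed=True)
```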
## Usage Examples

```python
from boltons.cacheutils import LRU, cached, cachedmethod, cachedproperty

# LRU cache for direct use
cache = LRU(max_size=128)
cache['expensive_key'] = expensive_computation()
result = cache.get('expensive_key', 'default')

# Check cache statistics via the hit/miss counters
print(f"Hits: {cache.hit_count}, Misses: {cache.miss_count}")

# Function caching decorator; keep a handle on the cache so it can be cleared later
fib_cache = LRU(max_size=256)

@cached(fib_cache)
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n-1) + fibonacci(n-2)

# Method caching with an instance-level cache, named by attribute
class DataProcessor:
    def __init__(self):
        self.cache = LRU(max_size=100)

    @cachedmethod('cache')
    def process_data(self, data_id):
        # Expensive data processing
        return expensive_data_processing(data_id)

# Property caching for expensive computations
class ExpensiveCalculation:
    @cachedproperty
    def expensive_result(self):
        # This computation only happens once per instance
        return sum(range(1000000))

# Type-aware cache keys
@cached(LRU(), typed=True)  # 1 and 1.0 get separate cache entries
def typed_function(x, y):
    return x + y

# Clear caches when needed (LRU/LRI are dict subtypes, so clear() works)
fib_cache.clear()
processor = DataProcessor()
processor.cache.clear()
```
### Advanced Caching Patterns

```python
from boltons.cacheutils import ThresholdCounter, MinIDMap

# ThresholdCounter for bounded counting with automatic cleanup.
# threshold is a ratio between 0 and 1, not a size limit: keys seen
# less often than that fraction of the total may be dropped on compaction.
counter = ThresholdCounter(threshold=0.001)
for i in range(1500):
    counter.add(f'key_{i % 100}')
# Compaction happens automatically as items are added
print(counter.most_common(5))

# MinIDMap for object ID tracking
id_map = MinIDMap()
obj1 = SomeObject()
obj2 = SomeObject()
id1 = id_map.get(obj1)  # Returns smallest available ID
id2 = id_map.get(obj2)  # Returns next smallest ID

# IDs are reused when objects are dropped
id_map.drop(obj1)
del obj1
obj3 = SomeObject()
id3 = id_map.get(obj3)  # Reuses id1
```
### Custom Cache Implementation

```python
from boltons.cacheutils import LRU, cached, make_cache_key

# Custom cache with logging
class LoggingCache(dict):
    def __init__(self, max_size=128):
        super().__init__()
        self.max_size = max_size
        self.access_count = 0

    def __getitem__(self, key):
        self.access_count += 1
        print(f"Cache access #{self.access_count}: {key}")
        return super().__getitem__(key)

    def __setitem__(self, key, value):
        if len(self) >= self.max_size:
            # Simple eviction: remove the first-inserted item
            del self[next(iter(self))]
        super().__setitem__(key, value)

# Use the custom cache with the cached decorator
@cached(LoggingCache(max_size=50))
def tracked_function(x):
    return x ** 2

# Custom key function for complex scenarios; it receives the call's
# args/kwargs (plus an optional typed flag) and returns a hashable key
def custom_key_func(args, kwargs, typed=False):
    # Only cache based on the first argument
    return args[0] if args else None

@cached(LRU(), scoped=False, key=custom_key_func)  # scoped=False: key on call arguments only
def selective_cache(important_arg, ignored_arg):
    return expensive_operation(important_arg, ignored_arg)
```
## Types

```python { .api }
# Constants
DEFAULT_MAX_SIZE = 128

# Cache statistics namedtuple (returned by get_stats())
CacheStats = namedtuple('CacheStats', ['hit_count', 'miss_count', 'soft_miss_count', 'size'])
```
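`DEFAULT_MAX_SIZE` is the size cap used by `LRI` and `LRU` when no explicit `max_size` is given, as in this small sketch:

```python
from boltons.cacheutils import LRI, DEFAULT_MAX_SIZE

cache = LRI()  # no explicit max_size
print(cache.max_size == DEFAULT_MAX_SIZE)  # True: defaults to 128 entries
```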