# Interfaces Cache System

Performance optimization system for the pyFFTW interfaces that caches FFTW objects to avoid repeated planning overhead. When enabled, the cache stores the temporarily created FFTW objects from interface function calls, providing significant performance improvements for repeated transforms with similar parameters.

## Core Imports

```python
from pyfftw.interfaces import cache
```

## Basic Usage

```python
from pyfftw.interfaces import cache, numpy_fft
import numpy as np

# Enable caching for better performance with repeated transforms
cache.enable()

# Create sample data
x = np.random.randn(1024) + 1j * np.random.randn(1024)

# First call - creates and caches the FFTW object
y1 = numpy_fft.fft(x)  # Slower - includes initial planning

# Second call with an equivalent array - reuses the cached object
y2 = numpy_fft.fft(x)  # Much faster - from cache

# Configure cache behavior
cache.set_keepalive_time(1.0)  # Keep objects alive for at least 1 second

# Check cache status
if cache.is_enabled():
    print("Cache is active")

# Disable when done
cache.disable()
```

## Performance Benefits

The cache system addresses the overhead of creating FFTW objects in interface functions:

- **Without cache**: Each interface call creates a new FFTW object, incurring planning time
- **With cache**: Equivalent transforms reuse cached objects, eliminating planning overhead
- **Best suited for**: Repeated transforms with similar array properties and parameters

**Note**: For very small transforms, cache lookup overhead may exceed transform time. In such cases, consider using the FFTW class directly.

## Capabilities

### Cache Control

Functions to enable, disable, and configure the caching system.

```python { .api }
def enable():
    """
    Enable the interfaces cache.

    Enables caching of FFTW objects created during interface function calls
    and spawns a background thread to manage the cached objects' lifetimes.

    Raises:
        ImportError: If threading is not available on the system
    """

def disable():
    """
    Disable the interfaces cache.

    Disables caching and removes all cached FFTW objects, freeing the
    associated memory. The background cache-management thread is terminated.
    """

def is_enabled():
    """
    Check whether the cache is currently enabled.

    Returns:
        bool: True if the cache is enabled, False otherwise
    """

def set_keepalive_time(keepalive_time):
    """
    Set the minimum time cached objects are kept alive.

    Parameters:
    - keepalive_time: float, minimum time in seconds to keep unused cached
      objects alive

    Notes:
    - The default keepalive time is 0.1 seconds
    - Objects are removed after being unused for this duration
    - Actual removal may take longer due to thread scheduling
    - Using a cached object resets its timer
    """
```

## Usage Examples

### Basic Cache Usage

```python
from pyfftw.interfaces import cache, numpy_fft
import numpy as np

# Enable caching
cache.enable()

# Create test data
data = np.random.randn(1024, 512) + 1j * np.random.randn(1024, 512)

# First transform - creates and caches the FFTW object
result1 = numpy_fft.fft2(data)

# Equivalent transforms reuse the cached object
result2 = numpy_fft.fft2(data)      # Fast - from cache
result3 = numpy_fft.fft2(data * 2)  # Still uses the cache (same array properties)

# Different array properties require a new FFTW object
different_data = np.random.randn(512, 512) + 1j * np.random.randn(512, 512)
result4 = numpy_fft.fft2(different_data)  # Creates a new cached object

cache.disable()
```

### Cache Configuration

```python
from pyfftw.interfaces import cache, scipy_fft
import numpy as np
import time

# Enable the cache, then configure the keepalive time
cache.enable()
cache.set_keepalive_time(2.0)  # Keep objects alive for at least 2 seconds

data = np.random.randn(256)

# Use interface functions
fft_result = scipy_fft.fft(data)

# Wait and check whether the cache is still active
time.sleep(1.5)
if cache.is_enabled():
    # The object should still be in the cache
    fft_result2 = scipy_fft.fft(data)  # Fast - also resets the object's timer

# Wait past the keepalive time since the last use - the object should be removed
time.sleep(2.5)

# This call creates a new object
fft_result3 = scipy_fft.fft(data)  # Slower - new planning

cache.disable()
```

### Performance Comparison

```python
from pyfftw.interfaces import cache, numpy_fft
import numpy as np
import time

data = np.random.randn(2048) + 1j * np.random.randn(2048)

# Without cache
start_time = time.time()
for i in range(10):
    result = numpy_fft.fft(data)
no_cache_time = time.time() - start_time

# With cache
cache.enable()
start_time = time.time()
for i in range(10):
    result = numpy_fft.fft(data)
cache_time = time.time() - start_time
cache.disable()

print(f"Without cache: {no_cache_time:.3f}s")
print(f"With cache: {cache_time:.3f}s")
print(f"Speedup: {no_cache_time / cache_time:.1f}x")
```

### Context Manager Pattern

```python
from pyfftw.interfaces import cache, numpy_fft
import numpy as np
from contextlib import contextmanager

@contextmanager
def fftw_cache(keepalive_time=0.1):
    """Context manager for FFTW cache usage."""
    try:
        cache.enable()
        cache.set_keepalive_time(keepalive_time)
        yield
    finally:
        cache.disable()

# Use with the context manager
with fftw_cache(keepalive_time=1.0):
    data = np.random.randn(1024)

    # All transforms in this block use caching
    fft1 = numpy_fft.fft(data)
    fft2 = numpy_fft.fft(data)  # Fast - from cache

    # Different sizes create separate cache entries
    data2 = np.random.randn(2048)
    fft3 = numpy_fft.fft(data2)  # New cache entry
    fft4 = numpy_fft.fft(data2)  # Fast - from cache

# The cache is automatically disabled when leaving the context
```

## Cache Behavior

### Object Equivalency

The cache uses conservative equivalency checking. Objects are considered equivalent when:

- Array shape, strides, and dtype are identical
- Transform parameters (`axis`, `n`, `norm`, etc.) are identical
- The interface function and all of its arguments match exactly

### Memory Management

- Cached objects are kept alive for at least the keepalive time after their last use
- A background thread manages object lifetimes and removes expired objects
- Calling `disable()` immediately frees all cached objects

### Threading Considerations

- The cache uses a background thread for object management
- Thread-safe for concurrent access from multiple threads
- Requires threading support (`enable()` raises `ImportError` if unavailable)
- Enabling or disabling the cache is global and affects all threads

### Performance Considerations

- Most beneficial for medium to large transforms (≥ 256 elements)
- Small transforms may be slower due to cache-lookup overhead
- Best when array properties remain consistent across calls
- Memory usage scales with the number of unique cached transforms