# Transform Composition

Tools for combining and organizing transforms into pipelines, including sequential composition, random selection from transform groups, and custom lambda transforms. These utilities enable flexible and powerful data processing pipelines.

## Capabilities

### Sequential Composition

Sequential application of multiple transforms in a specified order; this is the most common way to build preprocessing and augmentation pipelines.

```python { .api }
class Compose(Transform):
    """
    Sequential composition of multiple transforms.

    Applies transforms in the specified order, passing the output of each
    transform as input to the next. Essential for building preprocessing
    and augmentation pipelines.

    Parameters:
    - transforms: Sequence of Transform instances to apply
    """
    def __init__(self, transforms: Sequence[Transform]): ...

    def __len__(self) -> int:
        """Return the number of transforms in the composition."""

    def __getitem__(self, index: int) -> Transform:
        """Get the transform at the specified index."""

    def __iter__(self):
        """Iterate over the transforms."""
```

Usage example:

```python
import torchio as tio

# Create preprocessing pipeline
preprocessing = tio.Compose([
    tio.ToCanonical(),              # 1. Standardize orientation
    tio.Resample(1),                # 2. Resample to 1 mm isotropic
    tio.CropOrPad((128, 128, 64)),  # 3. Standardize shape
    tio.ZNormalization(),           # 4. Normalize intensities
])

# Create augmentation pipeline
augmentation = tio.Compose([
    tio.RandomFlip(axes=(0,)),      # 1. Random horizontal flip
    tio.RandomAffine(               # 2. Random affine transform
        scales=(0.9, 1.1),
        degrees=(-5, 5),
    ),
    tio.RandomNoise(std=(0, 0.1)),  # 3. Add random noise
])

# Combine preprocessing and augmentation
full_pipeline = tio.Compose([
    preprocessing,
    augmentation,
])

# Apply to a subject
subject = tio.Subject(
    t1=tio.ScalarImage('t1.nii.gz'),
    seg=tio.LabelMap('segmentation.nii.gz'),
)

transformed = full_pipeline(subject)
```

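As the `__len__`, `__getitem__`, and `__iter__` methods above suggest, a composition also behaves like a sequence of its transforms. A minimal plain-Python sketch of that container behavior (`SimpleCompose` is a hypothetical stand-in for illustration, not the TorchIO class):

```python
class SimpleCompose:
    """Minimal sequential composition that also acts as a sequence."""

    def __init__(self, transforms):
        self.transforms = list(transforms)

    def __call__(self, x):
        # Feed the output of each transform into the next
        for transform in self.transforms:
            x = transform(x)
        return x

    def __len__(self):
        return len(self.transforms)

    def __getitem__(self, index):
        return self.transforms[index]

    def __iter__(self):
        return iter(self.transforms)


pipeline = SimpleCompose([lambda x: x + 1, lambda x: x * 2])
print(pipeline(3))    # (3 + 1) * 2 = 8
print(len(pipeline))  # 2
```

Being able to index and iterate a composition is handy when debugging: you can apply each step individually and inspect intermediate results.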
### Random Transform Selection

Randomly selects and applies one transform from a group, which is useful for introducing controlled randomness into augmentation pipelines.

```python { .api }
class OneOf(Transform):
    """
    Randomly selects one transform from a group to apply.

    Enables probabilistic application of different augmentation strategies,
    allowing for varied augmentation while maintaining control over frequency.

    Parameters:
    - transforms: Dictionary mapping Transform instances to their probabilities,
      or a sequence of transforms (equal probabilities)
    """
    def __init__(
        self,
        transforms: Union[dict[Transform, float], Sequence[Transform]],
    ): ...
```

Usage example:

```python
# Random selection between different augmentation strategies
intensity_augmentation = tio.OneOf({
    tio.RandomNoise(std=(0, 0.1)): 0.3,           # 30% chance
    tio.RandomBlur(std=(0, 1)): 0.3,              # 30% chance
    tio.RandomGamma(log_gamma=(-0.3, 0.3)): 0.4,  # 40% chance
})

# Random medical imaging artifacts
artifact_simulation = tio.OneOf([
    tio.RandomMotion(degrees=2),             # Equal probability
    tio.RandomGhosting(intensity=(0.5, 1)),  # Equal probability
    tio.RandomSpike(num_spikes=(1, 3)),      # Equal probability
])

# Combine in a pipeline
pipeline = tio.Compose([
    tio.ToCanonical(),
    tio.RandomFlip(),
    intensity_augmentation,  # One of noise/blur/gamma
    artifact_simulation,     # One of motion/ghosting/spike
])

subject = tio.Subject(t1=tio.ScalarImage('t1.nii.gz'))
augmented = pipeline(subject)
```

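Under the hood, this kind of selection is just weighted random sampling: one transform is drawn according to its probability, then applied. A rough sketch of the mechanism in plain Python (`simple_one_of` is illustrative, not TorchIO code):

```python
import random

def simple_one_of(transforms_to_probs, x, rng=random):
    """Pick one transform according to its probability, then apply it."""
    transforms = list(transforms_to_probs)
    weights = [transforms_to_probs[t] for t in transforms]
    chosen = rng.choices(transforms, weights=weights, k=1)[0]
    return chosen(x)

rng = random.Random(0)
strategies = {
    (lambda x: x + 100): 0.3,  # stands in for one augmentation
    (lambda x: x * 2): 0.7,    # stands in for another
}
results = [simple_one_of(strategies, 1, rng=rng) for _ in range(1000)]
# With these weights, roughly 30% of results are 101 and 70% are 2
print(results.count(101), results.count(2))
```

Note that the probabilities are per application: every time the pipeline runs, a fresh draw decides which strategy the sample receives.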
### Custom Lambda Transforms

Apply custom functions as transforms, enabling integration of user-defined processing operations into TorchIO pipelines.

```python { .api }
class Lambda(Transform):
    """
    Apply a user-defined function as a transform.

    Enables integration of custom operations into TorchIO pipelines
    while maintaining transform history and compatibility.

    Parameters:
    - function: Callable that transforms tensors or subjects
    - types_to_apply: Types to apply the transform to (None for all)
    """
    def __init__(
        self,
        function: Callable,
        types_to_apply: Optional[tuple[type, ...]] = None,
    ): ...
```

Usage example:

```python
import torch
import torchio as tio

# Custom function for tensor processing
def custom_intensity_scaling(tensor):
    """Scale intensities linearly."""
    return tensor * 1.5 + 0.1

# Custom function for subject processing
def add_computed_field(subject):
    """Add a computed image to the subject."""
    if 't1' in subject and 't2' in subject:
        # Compute the T1/T2 ratio
        t1_data = subject['t1'].data
        t2_data = subject['t2'].data
        ratio = t1_data / (t2_data + 1e-6)  # Avoid division by zero

        # Create a new image with the ratio
        ratio_image = tio.ScalarImage(tensor=ratio, affine=subject['t1'].affine)
        subject['t1_t2_ratio'] = ratio_image

    return subject

# Create lambda transforms
intensity_lambda = tio.Lambda(
    function=custom_intensity_scaling,
    types_to_apply=(tio.ScalarImage,),  # Only apply to scalar images
)

ratio_lambda = tio.Lambda(function=add_computed_field)

# Use in a pipeline
pipeline = tio.Compose([
    tio.ZNormalization(),
    intensity_lambda,  # Apply custom scaling
    ratio_lambda,      # Compute the T1/T2 ratio
    tio.RandomFlip(),
])

subject = tio.Subject(
    t1=tio.ScalarImage('t1.nii.gz'),
    t2=tio.ScalarImage('t2.nii.gz'),
)

processed = pipeline(subject)
# `processed` now contains a 't1_t2_ratio' image
```

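The type filtering above can be sketched without TorchIO: wrap a function and apply it only to entries whose type matches. `SimpleLambda` below is a hypothetical illustration of the pattern (using plain floats in place of images), not the TorchIO implementation:

```python
class SimpleLambda:
    """Apply a function to each entry of a dict-like subject, filtered by type."""

    def __init__(self, function, types_to_apply=None):
        self.function = function
        self.types_to_apply = types_to_apply

    def __call__(self, subject):
        for key, value in subject.items():
            # None means "apply to everything"; otherwise filter by type
            if self.types_to_apply is None or isinstance(value, self.types_to_apply):
                subject[key] = self.function(value)
        return subject


scale = SimpleLambda(lambda t: t * 1.5, types_to_apply=(float,))
subject = {'t1': 2.0, 'label': 7}  # a float stands in for an intensity image
print(scale(subject))  # {'t1': 3.0, 'label': 7} -- the int entry is untouched
```

The same filtering idea is why, in the TorchIO example above, the label map is left alone while the scalar images are rescaled.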
### Advanced Composition Patterns

Examples of advanced composition patterns for complex processing pipelines.

#### Conditional Augmentation

```python
def conditional_augmentation_pipeline():
    """Create a pipeline whose augmentation depends on image properties."""

    def age_based_augmentation(subject):
        """Apply different augmentation based on subject age."""
        age = subject.get('age', 50)  # Default age if not specified

        if age < 30:
            # Stronger augmentation for younger subjects
            augment = tio.Compose([
                tio.RandomAffine(degrees=(-10, 10), scales=(0.9, 1.1)),
                tio.RandomElasticDeformation(max_displacement=7.5),
                tio.RandomNoise(std=(0, 0.1)),
            ])
        else:
            # Milder augmentation for older subjects
            augment = tio.Compose([
                tio.RandomAffine(degrees=(-5, 5), scales=(0.95, 1.05)),
                tio.RandomNoise(std=(0, 0.05)),
            ])

        return augment(subject)

    return tio.Compose([
        tio.ToCanonical(),
        tio.ZNormalization(),
        tio.Lambda(age_based_augmentation),
    ])
```

#### Multi-Stage Processing

```python
def multi_stage_pipeline():
    """Create a multi-stage processing pipeline."""

    # Stage 1: Basic preprocessing
    stage1 = tio.Compose([
        tio.ToCanonical(),
        tio.Resample(1),
        tio.CropOrPad((128, 128, 64)),
    ])

    # Stage 2: Intensity normalization
    stage2 = tio.Compose([
        tio.RescaleIntensity(out_min_max=(0, 1)),
        tio.ZNormalization(),
    ])

    # Stage 3: Augmentation (one strategy chosen at random)
    stage3 = tio.OneOf({
        tio.Compose([  # Spatial augmentation
            tio.RandomFlip(),
            tio.RandomAffine(degrees=(-5, 5)),
        ]): 0.5,
        tio.Compose([  # Intensity augmentation
            tio.RandomNoise(std=(0, 0.1)),
            tio.RandomGamma(log_gamma=(-0.3, 0.3)),
        ]): 0.5,
    })

    return tio.Compose([stage1, stage2, stage3])
```

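Staging works because sequential composition is associative: applying stage 1 and then stage 2 gives the same result as composing both into a single pipeline. A small plain-Python check of that property (the `compose` helper and the integer "transforms" are illustrative):

```python
def compose(transforms):
    """Return a function that applies the transforms in order."""
    def pipeline(x):
        for transform in transforms:
            x = transform(x)
        return x
    return pipeline

stage1 = compose([lambda x: x + 1, lambda x: x * 3])  # preprocessing stand-in
stage2 = compose([lambda x: x - 2])                   # normalization stand-in
nested = compose([stage1, stage2])                    # like tio.Compose([stage1, stage2])

x = 5
print(stage2(stage1(x)), nested(x))  # both print 16
```

This is what makes it safe to build, test, and cache each stage separately before nesting them into one pipeline.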
#### Pipeline with Quality Control

```python
def pipeline_with_qc():
    """Pipeline that includes quality control checks."""

    def quality_check(subject):
        """Perform quality checks on the processed subject."""
        for key, image in subject.get_images_dict(intensity_only=False).items():
            data = image.data

            # Check for extreme values
            if torch.any(torch.isnan(data)) or torch.any(torch.isinf(data)):
                raise ValueError(f"Invalid values detected in {key}")

            # Check shape consistency
            if data.shape[-3:] != (128, 128, 64):
                raise ValueError(f"Unexpected shape in {key}: {data.shape}")

        return subject

    return tio.Compose([
        tio.ToCanonical(),
        tio.Resample(1),
        tio.CropOrPad((128, 128, 64)),
        tio.ZNormalization(),
        tio.Lambda(quality_check),  # QC after preprocessing
        tio.RandomFlip(),
        tio.RandomNoise(std=(0, 0.05)),
        tio.Lambda(quality_check),  # QC after augmentation
    ])
```

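The same fail-fast QC idea can be sketched with stdlib floats in place of tensors; `check_values` below is a hypothetical illustration of the pattern (real pipelines would check image tensors as above):

```python
import math

def check_values(name, values, expected_length):
    """Raise if the data contains NaN/inf or has the wrong length."""
    if any(math.isnan(v) or math.isinf(v) for v in values):
        raise ValueError(f"Invalid values detected in {name}")
    if len(values) != expected_length:
        raise ValueError(f"Unexpected length in {name}: {len(values)}")
    return values

check_values('t1', [0.1, 0.5, 0.9], expected_length=3)  # passes silently

try:
    check_values('t1', [0.1, float('nan')], expected_length=2)
except ValueError as error:
    print(error)  # Invalid values detected in t1
```

Raising inside the pipeline, rather than logging, ensures a corrupted sample stops the run at the offending step instead of silently training on bad data.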
Usage of advanced patterns:

```python
# Create and use the advanced pipelines
conditional_pipeline = conditional_augmentation_pipeline()
multi_stage = multi_stage_pipeline()
qc_pipeline = pipeline_with_qc()

subject = tio.Subject(
    t1=tio.ScalarImage('t1.nii.gz'),
    age=25,  # Young subject, so conditional augmentation is stronger
)

# Apply the different processing strategies
processed_conditional = conditional_pipeline(subject)
processed_multi_stage = multi_stage(subject)
processed_with_qc = qc_pipeline(subject)
```

### Transform History and Debugging

All composed transforms maintain history for debugging and reproducibility:

```python
# Apply a transform pipeline
subject = tio.Subject(t1=tio.ScalarImage('t1.nii.gz'))
pipeline = tio.Compose([
    tio.ToCanonical(),
    tio.RandomFlip(),
    tio.RandomNoise(std=0.1),
])

transformed = pipeline(subject)

# Access the transform history: a record of each applied transform
print('Applied transforms:')
for transform in transformed.history:
    print(f'  {transform}')

# History enables reproducibility and debugging
```
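The history mechanism amounts to each transform appending a record of itself as it runs. A minimal plain-Python sketch of that bookkeeping (`RecordingCompose` is illustrative, not the TorchIO implementation):

```python
class RecordingCompose:
    """Apply transforms in order and record what was applied."""

    def __init__(self, named_transforms):
        self.named_transforms = named_transforms  # list of (name, function) pairs

    def __call__(self, x):
        history = []
        for name, transform in self.named_transforms:
            x = transform(x)
            history.append(name)  # record each applied transform
        return x, history


pipeline = RecordingCompose([
    ('AddOne', lambda x: x + 1),
    ('Double', lambda x: x * 2),
])
result, history = pipeline(3)
print(result, history)  # 8 ['AddOne', 'Double']
```

For random transforms, a real history would also store the sampled parameters (e.g. the flip axes or noise standard deviation actually drawn), which is what makes a run reproducible.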