Install:

```sh
tessl install tessl/pypi-dcm2niix@1.0.5
```

Command-line application that converts medical imaging data from DICOM format to NIfTI format with BIDS support.
dcm2niix supports batch processing of multiple DICOM directories using YAML configuration files via the dcm2niibatch executable. This allows automated conversion of multiple series with different settings in a single execution.
Note: dcm2niibatch is a separate executable from dcm2niix. It may not be included in all distribution methods (e.g., pip install). Check your installation to verify availability, or build from source with the BATCH_VERSION CMake option enabled.
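One way to check availability programmatically is with the standard library's `shutil.which`, which searches PATH the same way a shell does:

```python
import shutil

# shutil.which returns the full path to dcm2niibatch if it is on PATH, else None
batch_path = shutil.which("dcm2niibatch")
if batch_path is None:
    print("dcm2niibatch not found; use the dcm2niix Python API fallback instead")
else:
    print(f"dcm2niibatch available at {batch_path}")
```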
Process multiple DICOM folders using a YAML configuration file.
```sh
dcm2niibatch <config.yml>
```

The batch processor reads conversion settings and file paths from a YAML configuration file and executes dcm2niix for each specified input directory.
Requirements:
- dcm2niibatch executable must be available in PATH

Define batch conversion jobs with a YAML configuration file.
```yaml
Options:
  isGz: <boolean>             # Enable gzip compression
  isFlipY: <boolean>          # Flip Y axis (default: true)
  isVerbose: <boolean>        # Verbose output
  isCreateBIDS: <boolean>     # Create BIDS JSON sidecars
  isOnlySingleFile: <boolean> # Single file mode
Files:
  -
    in_dir: <path>      # Input DICOM directory
    out_dir: <path>     # Output directory
    filename: <string>  # Output filename pattern
  -
    in_dir: <path>
    out_dir: <path>
    filename: <string>
```

Global settings applied to all conversions in the batch.
```yaml
Options:
  isGz: <boolean>
```

isGz: Enable gzip compression for output NIfTI files.

- true = compress output to .nii.gz
- false = uncompressed .nii files
- Equivalent CLI: -z y (true) or -z n (false)
Trade-offs:

- true: smaller files on disk, at the cost of extra CPU time to compress and decompress
- false: fastest read/write, but a larger disk footprint
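The trade-off can be sketched with the standard library's gzip module. Note the buffer below is a synthetic stand-in for voxel data; all-zero bytes are highly compressible, so real scans will show a much smaller reduction:

```python
import gzip

# Synthetic 1 MB buffer standing in for NIfTI voxel data
raw = bytes(1_000_000)
compressed = gzip.compress(raw)
print(f"raw: {len(raw)} bytes, gzipped: {len(compressed)} bytes")
```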
```yaml
Options:
  isFlipY: <boolean>
```

isFlipY: Flip Y-axis to convert DICOM LPS to NIfTI RAS+ coordinates.

- true = flip Y-axis (default, neurological convention)
- false = don't flip Y-axis
- Equivalent CLI: -y n (true, default) or -y y (false)

Recommendation: Always use true for standard NIfTI orientation.
```yaml
Options:
  isVerbose: <boolean>
```

isVerbose: Control output verbosity.

- true = verbose output with detailed information
- false = minimal output (default)
- Equivalent CLI: -v 1 (true) or -v 0 (false)

Use Cases:

- true: debugging, monitoring progress
- false: production batch jobs, log files

```yaml
Options:
  isCreateBIDS: <boolean>
```

isCreateBIDS: Generate BIDS-compliant JSON sidecars.

- true = create JSON metadata files
- false = no JSON sidecars
- Equivalent CLI: -b y (true) or -b n (false)
BIDS Contents: Scanner and acquisition metadata extracted from the DICOM headers, such as manufacturer, sequence parameters (repetition time, echo time, flip angle), and slice timing information.
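As an illustration, a sidecar for an MRI series typically contains fields like the following (field names follow the BIDS specification; the exact set and values depend on the scanner and sequence, and the values below are made up):

```json
{
  "Modality": "MR",
  "Manufacturer": "Siemens",
  "SeriesDescription": "T1_MPRAGE",
  "RepetitionTime": 2.3,
  "EchoTime": 0.00226,
  "FlipAngle": 8
}
```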
```yaml
Options:
  isOnlySingleFile: <boolean>
```

isOnlySingleFile: Single file conversion mode.

- true = convert only the specified DICOM file
- false = search folder for complete series (default)
- Equivalent CLI: -s y (true) or -s n (false)
Warning: May produce incomplete output if true and file is part of multi-slice series.
List of input/output directory pairs for batch processing.
```yaml
Files:
  -
    in_dir: <path>
    out_dir: <path>
    filename: <string>
```

Each file entry specifies an input DICOM directory, an output directory, and an output filename pattern.

Path Requirements:

- Paths can be absolute (/data/subject01/dicom) or relative (./subject01/dicom)

Filename Patterns: All dcm2niix placeholders supported:

- %p = Protocol name
- %s = Series number
- %i = Patient ID
- %t = Study time

Multiple file entries can be specified, each prefixed with a dash (-).
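To make the patterns concrete, here is a simplified, hypothetical re-implementation of placeholder expansion for illustration only; the real substitution happens inside dcm2niix, and the protocol/series values below are made up:

```python
# Illustrative only: expand a few dcm2niix-style filename placeholders
def expand_pattern(pattern: str, protocol: str, series: int,
                   patient_id: str, study_time: str) -> str:
    return (pattern
            .replace("%p", protocol)
            .replace("%s", str(series))
            .replace("%i", patient_id)
            .replace("%t", study_time))

print(expand_pattern("sub-01_%p_%s", "T1_MPRAGE", 2, "01", "142530"))
# sub-01_T1_MPRAGE_2
```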
```yaml
Options:
  isGz: true
  isFlipY: true
  isVerbose: false
  isCreateBIDS: true
  isOnlySingleFile: false
Files:
  -
    in_dir: /data/subject01/dicom
    out_dir: /data/subject01/nifti
    filename: sub-01_%p_%s
  -
    in_dir: /data/subject02/dicom
    out_dir: /data/subject02/nifti
    filename: sub-02_%p_%s
```

Execution:

```sh
dcm2niibatch batch_config.yml
```

Result: Compressed NIfTI files and BIDS JSON sidecars are written to each subject's output directory, named according to the filename pattern.
```yaml
Options:
  isGz: true
  isFlipY: true
  isVerbose: false
  isCreateBIDS: true
  isOnlySingleFile: false
Files:
  -
    in_dir: /raw/sub-001/session-1
    out_dir: /processed/sub-001/session-1
    filename: sub-001_ses-1_%p_%s
  -
    in_dir: /raw/sub-001/session-2
    out_dir: /processed/sub-001/session-2
    filename: sub-001_ses-2_%p_%s
  -
    in_dir: /raw/sub-002/session-1
    out_dir: /processed/sub-002/session-1
    filename: sub-002_ses-1_%p_%s
```

Use Case: Longitudinal study with multiple sessions per subject.
```yaml
Options:
  isGz: false
  isFlipY: true
  isVerbose: true
  isCreateBIDS: false
  isOnlySingleFile: false
Files:
  -
    in_dir: /input/T1_scans
    out_dir: /output/T1_nifti
    filename: T1_%s
  -
    in_dir: /input/T2_scans
    out_dir: /output/T2_nifti
    filename: T2_%s
```

Use Case: Fast I/O for an immediate processing pipeline, no metadata needed.
```yaml
Options:
  isGz: true
  isFlipY: true
  isVerbose: false
  isCreateBIDS: true
  isOnlySingleFile: false
Files:
  -
    in_dir: /study/sub-101/anat
    out_dir: /bids/sub-101/anat
    filename: sub-101_T1w
  -
    in_dir: /study/sub-101/func
    out_dir: /bids/sub-101/func
    filename: sub-101_task-rest_bold
  -
    in_dir: /study/sub-102/anat
    out_dir: /bids/sub-102/anat
    filename: sub-102_T1w
```

Use Case: BIDS-compliant neuroimaging dataset with proper naming conventions.
```sh
dcm2niibatch batch_config.yml
```

Runs dcm2niix for each file entry using the global options specified.

Process:

1. Parse the YAML configuration file
2. For each entry under Files, run dcm2niix on in_dir, writing output to out_dir with the entry's filename pattern
3. Apply the global Options to every conversion
Create my_batch.yml:
```yaml
Options:
  isGz: true
  isFlipY: true
  isVerbose: false
  isCreateBIDS: true
  isOnlySingleFile: false
Files:
  -
    in_dir: /data/patient001/dicom
    out_dir: /data/patient001/nifti
    filename: patient001_%p
  -
    in_dir: /data/patient002/dicom
    out_dir: /data/patient002/nifti
    filename: patient002_%p
```

Run the batch:

```sh
dcm2niibatch my_batch.yml
```

Output: Compressed NIfTI files and BIDS JSON sidecars in each patient's nifti directory.
```sh
# Check output directories
ls /data/patient001/nifti
ls /data/patient002/nifti

# Verify BIDS JSON creation
find /data/patient001/nifti -name "*.json"
```

While dcm2niibatch is a separate binary, you can generate configuration files and execute batch processing from Python:
```python
import yaml
from pathlib import Path

def create_batch_config(subjects, input_root, output_root, config_file):
    """Generate YAML configuration for batch processing."""
    config = {
        "Options": {
            "isGz": True,
            "isFlipY": True,
            "isVerbose": False,
            "isCreateBIDS": True,
            "isOnlySingleFile": False,
        },
        "Files": [],
    }
    for subject_id in subjects:
        entry = {
            "in_dir": str(Path(input_root) / subject_id),
            "out_dir": str(Path(output_root) / subject_id),
            "filename": f"sub-{subject_id}_%p_%s",
        }
        config["Files"].append(entry)
    with open(config_file, "w") as f:
        yaml.dump(config, f, default_flow_style=False)
    return config_file
```

```python
import subprocess

def run_batch_conversion(config_file):
    """Execute dcm2niibatch with the configuration file."""
    result = subprocess.run(
        ["dcm2niibatch", str(config_file)],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0, result.stdout, result.stderr
```

```python
import subprocess
import yaml
from pathlib import Path

def batch_convert_subjects(subjects, input_root, output_root):
    """Complete batch conversion workflow."""
    # Generate configuration
    config = {
        "Options": {
            "isGz": True,
            "isFlipY": True,
            "isVerbose": False,
            "isCreateBIDS": True,
            "isOnlySingleFile": False,
        },
        "Files": [
            {
                "in_dir": str(Path(input_root) / subj),
                "out_dir": str(Path(output_root) / subj),
                "filename": f"sub-{subj}_%p_%s",
            }
            for subj in subjects
        ],
    }

    # Write configuration file
    config_file = Path("/tmp/batch_config.yml")
    with open(config_file, "w") as f:
        yaml.dump(config, f, default_flow_style=False)

    # Execute batch conversion
    result = subprocess.run(
        ["dcm2niibatch", str(config_file)],
        capture_output=True,
        text=True,
    )

    # Clean up
    config_file.unlink()
    return result.returncode == 0

# Usage
subjects = ["001", "002", "003"]
success = batch_convert_subjects(subjects, "/data/dicom", "/data/nifti")
```

If dcm2niibatch is not available, implement batch processing using the dcm2niix Python API:
```python
from pathlib import Path
from typing import Dict, List

from dcm2niix import main

def batch_process(
    conversions: List[Dict[str, str]],
    compress: bool = True,
    bids: bool = True,
    verbose: bool = False,
) -> Dict[str, bool]:
    """Batch process multiple DICOM directories using the Python API.

    Args:
        conversions: List of dicts with keys: in_dir, out_dir, filename
        compress: Enable gzip compression
        bids: Generate BIDS JSON sidecars
        verbose: Enable verbose output

    Returns:
        Dictionary mapping input directory to success boolean
    """
    results = {}
    for conv in conversions:
        in_dir = Path(conv["in_dir"])
        out_dir = Path(conv["out_dir"])
        filename = conv["filename"]

        # Create output directory
        out_dir.mkdir(parents=True, exist_ok=True)

        # Build arguments
        args = ["-f", filename, "-o", str(out_dir)]
        if compress:
            args.extend(["-z", "y"])
        if bids:
            args.extend(["-b", "y"])
        if verbose:
            args.extend(["-v", "1"])
        args.append(str(in_dir))

        # Execute conversion
        exit_code = main(args)
        results[str(in_dir)] = exit_code in (0, 8)
    return results

# Usage
conversions = [
    {
        "in_dir": "/data/sub-001/dicom",
        "out_dir": "/data/sub-001/nifti",
        "filename": "sub-001_%p_%s",
    },
    {
        "in_dir": "/data/sub-002/dicom",
        "out_dir": "/data/sub-002/nifti",
        "filename": "sub-002_%p_%s",
    },
]
results = batch_process(conversions)
print(f"Successful: {sum(results.values())}/{len(results)}")
```

For Parallel Processing:
```python
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

from dcm2niix import main

def _convert_single(conv):
    """Convert one directory.

    Defined at module level (not nested) because ProcessPoolExecutor
    must pickle the callable it sends to worker processes.
    """
    out_dir = Path(conv["out_dir"])
    out_dir.mkdir(parents=True, exist_ok=True)
    exit_code = main([
        "-z", "y",
        "-b", "y",
        "-f", conv["filename"],
        "-o", str(out_dir),
        conv["in_dir"],
    ])
    return conv["in_dir"], exit_code

def parallel_batch_convert(conversions, max_workers=4):
    """Parallel batch conversion using Python multiprocessing."""
    with ProcessPoolExecutor(max_workers=max_workers) as executor:
        results = dict(executor.map(_convert_single, conversions))
    return results
```

For Per-Directory Options:
```python
from dcm2niix import main

def flexible_batch_convert(conversions):
    """Batch convert with per-directory options."""
    results = {}
    for conv in conversions:
        # Copy so repeated runs do not mutate the caller's args list
        args = list(conv.get("args", []))
        args.extend(["-o", conv["out_dir"], conv["in_dir"]])
        exit_code = main(args)
        results[conv["in_dir"]] = exit_code in (0, 8)
    return results

# Usage with custom options per subject
conversions = [
    {
        "in_dir": "/data/sub-001/dicom",
        "out_dir": "/data/sub-001/nifti",
        "args": ["-z", "y", "-b", "y", "-f", "sub-001_%p"],
    },
    {
        "in_dir": "/data/sub-002/dicom",
        "out_dir": "/data/sub-002/nifti",
        "args": ["-z", "n", "-b", "n", "-f", "sub-002_%p"],  # Different options
    },
]
results = flexible_batch_convert(conversions)
```

```yaml
Options:
  isGz: false           # Skip compression for speed
  isFlipY: true
  isVerbose: false      # Minimize output
  isCreateBIDS: false   # Skip JSON if not needed
  isOnlySingleFile: false
Files:
  # ... file entries
```

Optimization Tips:

- Disable compression (isGz: false) for fastest I/O
- Skip BIDS sidecars (isCreateBIDS: false) if metadata is not needed

For very large datasets, process in smaller batches:
```python
def chunked_batch_process(all_conversions, chunk_size=10):
    """Process conversions in chunks to manage memory."""
    results = {}
    for i in range(0, len(all_conversions), chunk_size):
        chunk = all_conversions[i:i + chunk_size]
        chunk_results = batch_process(chunk)  # defined above
        results.update(chunk_results)
        # Optional: clean up, log progress
        print(f"Completed {min(i + chunk_size, len(all_conversions))}/{len(all_conversions)}")
    return results
```

Check if dcm2niibatch is available:

```sh
which dcm2niibatch
# If not found, use the Python API instead
```

Validate YAML syntax:

```sh
python -c "import yaml; yaml.safe_load(open('config.yml'))"
```

Verify parent directories exist:

```python
from pathlib import Path

for conv in conversions:
    out_dir = Path(conv["out_dir"])
    out_dir.parent.mkdir(parents=True, exist_ok=True)
```

Enable verbose output to see detailed errors:

```yaml
Options:
  isVerbose: true  # Enable to see detailed errors
```
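A structural sanity check of the parsed configuration can also catch missing keys before a batch run. This is a minimal sketch against the schema described above (`validate_batch_config` is a hypothetical helper, not part of dcm2niibatch):

```python
def validate_batch_config(config: dict) -> list:
    """Return a list of problems found in a parsed batch configuration.

    Checks structure only; it does not verify that dcm2niibatch
    accepts every value.
    """
    problems = []
    if not config.get("Files"):
        problems.append("Files section is missing or empty")
    for i, entry in enumerate(config.get("Files", [])):
        for key in ("in_dir", "out_dir", "filename"):
            if key not in entry:
                problems.append(f"Files[{i}] missing '{key}'")
    for key, value in config.get("Options", {}).items():
        if not isinstance(value, bool):
            problems.append(f"Options.{key} should be a boolean, got {value!r}")
    return problems

config = {
    "Options": {"isGz": True, "isCreateBIDS": True},
    "Files": [{"in_dir": "/data/dicom", "out_dir": "/data/nifti", "filename": "%p_%s"}],
}
print(validate_batch_config(config))  # [] means structurally OK
```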