# Hardware Acceleration
Support for multiple hardware accelerators (CPU, NVIDIA CUDA GPUs, Apple Metal Performance Shaders, and Google TPUs via XLA), with automatic device detection and optimization.

## Capabilities

### CPU Accelerator

CPU-based training for development, debugging, and CPU-only environments.

```python { .api }
class CPUAccelerator:
    def setup_device(self, device: torch.device) -> None:
        """Set up CPU device for training."""

    def get_device_stats(self, device: torch.device) -> Dict[str, Any]:
        """Get CPU device statistics."""

    @staticmethod
    def parse_devices(devices: Union[int, str, List[int]]) -> int:
        """Parse CPU device specification."""

    @staticmethod
    def get_parallel_devices(devices: int) -> List[torch.device]:
        """Get list of CPU devices for parallel training."""

    @staticmethod
    def auto_device_count() -> int:
        """Get number of available CPU cores."""

    @staticmethod
    def is_available() -> bool:
        """Check if CPU is available."""
```
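
To illustrate the device-parsing semantics above, here is a minimal stdlib-only sketch. The names `parse_cpu_devices` and `auto_cpu_device_count` are hypothetical stand-ins for the static methods, not the library's actual implementation:

```python
import os

def auto_cpu_device_count() -> int:
    # Stand-in for CPUAccelerator.auto_device_count(): the number of
    # logical CPU cores, falling back to 1 when it cannot be determined.
    return os.cpu_count() or 1

def parse_cpu_devices(devices) -> int:
    # Stand-in for CPUAccelerator.parse_devices(): normalize an int,
    # the string "auto", or a list of indices into a single core count.
    if devices == "auto":
        return auto_cpu_device_count()
    if isinstance(devices, int):
        return devices
    return len(devices)
```

The key point is that CPU device specifications always reduce to a single process count, unlike CUDA, where the specification resolves to a list of device indices.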

### CUDA Accelerator

NVIDIA GPU acceleration with CUDA support for high-performance training.

```python { .api }
class CUDAAccelerator:
    def setup_device(self, device: torch.device) -> None:
        """Set up CUDA device for training."""

    def get_device_stats(self, device: torch.device) -> Dict[str, Any]:
        """Get CUDA device statistics including memory usage."""

    @staticmethod
    def parse_devices(devices: Union[int, str, List[int]]) -> List[int]:
        """Parse CUDA device specification."""

    @staticmethod
    def get_parallel_devices(devices: List[int]) -> List[torch.device]:
        """Get list of CUDA devices for parallel training."""

    @staticmethod
    def auto_device_count() -> int:
        """Get number of available CUDA devices."""

    @staticmethod
    def is_available() -> bool:
        """Check if CUDA is available."""

def find_usable_cuda_devices(num_gpus: int = -1) -> List[int]:
    """
    Find usable CUDA devices.

    Args:
        num_gpus: Number of GPUs to find (-1 for all)

    Returns:
        List of usable CUDA device IDs
    """
```
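
The idea behind `find_usable_cuda_devices` can be sketched generically: probe each visible device and collect the indices that respond. This version takes the probe as a parameter so it runs without a GPU; the name `find_usable_devices` and the error behavior are assumptions, not the library's exact semantics:

```python
from typing import Callable, List

def find_usable_devices(device_count: int,
                        probe: Callable[[int], bool],
                        num_gpus: int = -1) -> List[int]:
    # Probe each device index (e.g. by allocating a small tensor on it in
    # the real CUDA case) and keep the ones that succeed, up to num_gpus.
    usable = [i for i in range(device_count) if probe(i)]
    if num_gpus >= 0:
        if len(usable) < num_gpus:
            raise RuntimeError(
                f"only {len(usable)} of {num_gpus} requested GPUs are usable"
            )
        usable = usable[:num_gpus]
    return usable
```

Probing is useful on shared machines where some GPUs are fully occupied: a device that is visible is not necessarily usable.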

### Apple Metal Performance Shaders (MPS)

Apple Silicon GPU acceleration for M1/M2 Macs.

```python { .api }
class MPSAccelerator:
    def setup_device(self, device: torch.device) -> None:
        """Set up MPS device for training."""

    def get_device_stats(self, device: torch.device) -> Dict[str, Any]:
        """Get MPS device statistics."""

    @staticmethod
    def parse_devices(devices: Union[int, str, List[int]]) -> int:
        """Parse MPS device specification."""

    @staticmethod
    def get_parallel_devices(devices: int) -> List[torch.device]:
        """Get MPS device for training."""

    @staticmethod
    def auto_device_count() -> int:
        """Get number of available MPS devices."""

    @staticmethod
    def is_available() -> bool:
        """Check if MPS is available."""
```
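
A note on `auto_device_count` for MPS: Apple Silicon exposes at most one MPS device, so the count is effectively 0 or 1. A hedged sketch (the helper name `mps_device_count` is an assumption) that degrades gracefully on machines without PyTorch or without Apple Silicon:

```python
def mps_device_count() -> int:
    # MPS exposes at most one device. The torch import is guarded so
    # this check returns 0 on machines without PyTorch, and
    # torch.backends.mps.is_available() returns False off Apple Silicon.
    try:
        import torch
        return 1 if torch.backends.mps.is_available() else 0
    except (ImportError, AttributeError):
        return 0
```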
103
104
### XLA Accelerator
105
106
Google TPU acceleration using XLA compilation.
107
108
```python { .api }
class XLAAccelerator:
    def setup_device(self, device: torch.device) -> None:
        """Set up XLA device for training."""

    def get_device_stats(self, device: torch.device) -> Dict[str, Any]:
        """Get XLA device statistics."""

    @staticmethod
    def parse_devices(devices: Union[int, str, List[int]]) -> List[int]:
        """Parse XLA device specification."""

    @staticmethod
    def get_parallel_devices(devices: List[int]) -> List[torch.device]:
        """Get list of XLA devices for parallel training."""

    @staticmethod
    def auto_device_count() -> int:
        """Get number of available XLA devices."""

    @staticmethod
    def is_available() -> bool:
        """Check if XLA is available."""
```
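
The four accelerators share the same `is_available()` interface, which is what makes automatic device detection possible: check each backend in turn and fall back to CPU. The priority order below is a hypothetical sketch, not the library's documented resolution order:

```python
def select_accelerator(cuda: bool, mps: bool, xla: bool) -> str:
    # Hypothetical auto-detection: each flag stands in for the
    # corresponding accelerator's is_available() result. TPUs are
    # preferred when present, then CUDA, then MPS, then CPU.
    if xla:
        return "tpu"
    if cuda:
        return "cuda"
    if mps:
        return "mps"
    return "cpu"
```

In practice the flags would come from `CUDAAccelerator.is_available()`, `MPSAccelerator.is_available()`, and `XLAAccelerator.is_available()`; `CPUAccelerator.is_available()` is always true, so CPU is the unconditional fallback.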