# Warp

NVIDIA Warp is a Python framework for writing high-performance simulation and graphics code. It JIT compiles regular Python functions to efficient kernel code that can run on both CPU and GPU, making it ideal for spatial computing, physics simulation, perception, robotics, and geometry processing. Warp kernels are differentiable and integrate seamlessly with machine learning frameworks including PyTorch, JAX, and Paddle.

## Package Information

- **Package Name**: warp-lang
- **Language**: Python
- **Installation**: `pip install warp-lang`
- **GPU Requirements**: CUDA-capable NVIDIA GPU (minimum GeForce GTX 9xx) with driver 525+ for CUDA 12 or 470+ for CUDA 11
- **CPU Support**: x86-64 and ARMv8 on Windows, Linux, and macOS

## Core Imports

13

14

```python
import warp as wp
```

Access specific components:

```python
import warp.fem as fem
import warp.sim as sim  # deprecated in v1.10
import warp.render as render
import warp.optim as optim
```

## Basic Usage

```python
import warp as wp
import numpy as np

# Initialize Warp
wp.init()

# Define a kernel (JIT-compiled function)
@wp.kernel
def add_kernel(a: wp.array(dtype=float),
               b: wp.array(dtype=float),
               c: wp.array(dtype=float)):
    i = wp.tid()  # thread index
    c[i] = a[i] + b[i]

# Create arrays and run the kernel
n = 1000000
device = wp.get_device("cuda:0")  # or "cpu"

a = wp.zeros(n, dtype=float, device=device)
b = wp.ones(n, dtype=float, device=device)
c = wp.empty(n, dtype=float, device=device)

# Launch kernel
wp.launch(add_kernel, dim=n, inputs=[a, b, c], device=device)

# Copy result back to the host as a NumPy array
result = c.numpy()
```

## Architecture

Warp's architecture centers on **kernel functions**: Python functions decorated with `@wp.kernel` that are JIT compiled to efficient CUDA or CPU code:

- **Kernels**: GPU/CPU parallel functions with automatic differentiation support
- **Arrays**: Multi-dimensional data containers with device-aware memory management
- **Types**: Rich type system including primitives, vectors, matrices, quaternions, and geometry types
- **Context**: Device management, memory pools, streams, and execution control
- **Interop**: Seamless integration with NumPy, PyTorch, JAX, Paddle, and DLPack

This design enables writing maintainable Python code that executes with near-native performance for compute-intensive spatial and graphics programming tasks.

## Capabilities

### Core Execution and Device Management

Essential functions for initializing Warp, managing devices, launching kernels, and controlling execution. These form the foundation for all Warp programs.

```python { .api }
def init() -> None: ...
def get_device(device_id: str) -> Device: ...
def set_device(device: Device) -> None: ...
def launch(kernel: Kernel, dim: int, inputs: list, device: Device = None) -> None: ...
def synchronize() -> None: ...
```

[Core Execution](./core-execution.md)

### Type System and Arrays

Comprehensive type system including primitive types, vectors, matrices, quaternions, transforms, and multi-dimensional arrays with device-aware memory management.

```python { .api }
class array:
    def __init__(self, data=None, dtype=None, shape=None, device=None): ...
    def numpy(self) -> np.ndarray: ...

# Vector types
vec2 = typing.Type[Vector2]
vec3 = typing.Type[Vector3]
vec4 = typing.Type[Vector4]

# Matrix types
mat22 = typing.Type[Matrix22]
mat33 = typing.Type[Matrix33]
mat44 = typing.Type[Matrix44]
```

[Types and Arrays](./types-arrays.md)

### Kernel Programming and Built-in Functions

Kernel decorators, built-in mathematical functions, and programming constructs for writing high-performance GPU/CPU code within Warp kernels.

```python { .api }
def kernel(func: Callable) -> Kernel: ...
def func(func: Callable) -> Function: ...

# Built-in functions available in kernels
def tid() -> int: ...
def min(a: Scalar, b: Scalar) -> Scalar: ...
def max(a: Scalar, b: Scalar) -> Scalar: ...
def abs(x: Scalar) -> Scalar: ...
def sqrt(x: Float) -> Float: ...
```

[Kernel Programming](./kernel-programming.md)

### Finite Element Method (FEM)

Comprehensive finite element framework with geometry definitions, function spaces, quadrature, field operations, and integration capabilities for solving PDEs.

```python { .api }
# Geometry
class Grid2D: ...
class Grid3D: ...
class Tetmesh: ...
class Hexmesh: ...

# Function spaces
def make_polynomial_space(geometry: Geometry, degree: int) -> FunctionSpace: ...
def integrate(integrand: Callable, domain: Domain) -> Field: ...
```

[Finite Element Method](./fem.md)

### Framework Interoperability

Seamless data exchange and integration with popular machine learning and scientific computing frameworks including PyTorch, JAX, Paddle, and DLPack.

```python { .api }
def from_torch(tensor) -> array: ...
def to_torch(arr: array): ...
def from_jax(array) -> array: ...
def to_jax(arr: array): ...
def from_numpy(array: np.ndarray) -> array: ...
```

[Framework Integration](./framework-integration.md)

### Optimization

Gradient-based optimizers for machine learning workflows, including Adam and SGD optimizers that work with Warp's differentiable kernels.

```python { .api }
class Adam:
    def __init__(self, params: list, lr: float = 0.001): ...
    def step(self) -> None: ...

class SGD:
    def __init__(self, params: list, lr: float = 0.01): ...
    def step(self) -> None: ...
```

[Optimization](./optimization.md)

### Rendering

OpenGL and USD-based rendering capabilities for visualizing simulation results and creating graphics output.

```python { .api }
class OpenGLRenderer:
    def __init__(self, width: int, height: int): ...
    def render(self, mesh: Mesh) -> None: ...

class UsdRenderer:
    def __init__(self, stage_path: str): ...
    def save(self, path: str) -> None: ...
```

[Rendering](./rendering.md)

### Utilities and Profiling

Performance profiling, context management, timing utilities, and helper functions for development and debugging.

```python { .api }
class ScopedTimer:
    def __init__(self, name: str): ...
    def __enter__(self): ...
    def __exit__(self, *args): ...

def timing_begin() -> None: ...
def timing_end() -> float: ...
```

[Utilities](./utilities.md)

## Types

```python { .api }
# Core device and execution types
class Device:
    def __str__(self) -> str: ...

class Kernel:
    def __call__(self, *args, **kwargs): ...

class Function:
    def __call__(self, *args, **kwargs): ...

# Array types
class array:
    shape: tuple
    dtype: type
    device: Device

    def numpy(self) -> np.ndarray: ...
    def __getitem__(self, key): ...
    def __setitem__(self, key, value): ...

# Geometry types for spatial computing
class Mesh:
    def __init__(self, vertices: array, indices: array): ...

class Volume:
    def __init__(self, data: array): ...

class Bvh:
    def __init__(self, mesh: Mesh): ...

# Type annotations for kernel parameters
Int = typing.TypeVar('Int')
Float = typing.TypeVar('Float')
Scalar = typing.TypeVar('Scalar')
```