tessl/pypi-warp-lang

A Python framework for high-performance simulation and graphics programming that JIT compiles Python functions to efficient GPU/CPU kernel code.

Describes pkg:pypi/warp-lang@1.8.x

To install, run

npx @tessl/cli install tessl/pypi-warp-lang@1.8.0


Warp

NVIDIA Warp is a Python framework for writing high-performance simulation and graphics code. It JIT compiles regular Python functions to efficient kernel code that can run on both CPU and GPU, making it ideal for spatial computing, physics simulation, perception, robotics, and geometry processing. Warp kernels are differentiable and integrate seamlessly with machine learning frameworks including PyTorch, JAX, and Paddle.

Package Information

  • Package Name: warp-lang
  • Language: Python
  • Installation: pip install warp-lang
  • GPU Requirements: CUDA-capable NVIDIA GPU (minimum GeForce GTX 9xx) with driver 525+ for CUDA 12 or 470+ for CUDA 11
  • CPU Support: x86-64 and ARMv8 on Windows, Linux, and macOS

Core Imports

import warp as wp

Access specific components:

import warp.fem as fem
import warp.sim as sim  # Deprecated in v1.10
import warp.render as render
import warp.optim as optim

Basic Usage

import warp as wp
import numpy as np

# Initialize Warp
wp.init()

# Define a kernel (JIT compiled function)
@wp.kernel
def add_kernel(a: wp.array(dtype=float), 
               b: wp.array(dtype=float),
               c: wp.array(dtype=float)):
    i = wp.tid()  # thread index
    c[i] = a[i] + b[i]

# Create arrays and run kernel
n = 1000000
device = wp.get_device("cuda:0")  # or "cpu"

a = wp.zeros(n, dtype=float, device=device)
b = wp.ones(n, dtype=float, device=device)
c = wp.empty(n, dtype=float, device=device)

# Launch kernel
wp.launch(add_kernel, dim=n, inputs=[a, b, c], device=device)

# Copy result back
result = c.numpy()

Architecture

Warp's architecture centers on kernel functions: Python functions decorated with @wp.kernel that are JIT compiled to efficient CUDA or CPU code:

  • Kernels: GPU/CPU parallel functions with automatic differentiation support
  • Arrays: Multi-dimensional data containers with device-aware memory management
  • Types: Rich type system including primitives, vectors, matrices, quaternions, and geometry types
  • Context: Device management, memory pools, streams, and execution control
  • Interop: Seamless integration with NumPy, PyTorch, JAX, Paddle, and DLPack

This design enables writing maintainable Python code that executes with near-native performance for compute-intensive spatial and graphics programming tasks.

Capabilities

Core Execution and Device Management

Essential functions for initializing Warp, managing devices, launching kernels, and controlling execution. These form the foundation for all Warp programs.

def init() -> None: ...
def get_device(device_id: str) -> Device: ...
def set_device(device: Device) -> None: ...
def launch(kernel: Kernel, dim: int, inputs: list, device: Device = None) -> None: ...
def synchronize() -> None: ...

Core Execution

Type System and Arrays

Comprehensive type system including primitive types, vectors, matrices, quaternions, transforms, and multi-dimensional arrays with device-aware memory management.

class array:
    def __init__(self, data=None, dtype=None, shape=None, device=None): ...
    def numpy(self) -> np.ndarray: ...

# Vector types
vec2 = typing.Type[Vector2]
vec3 = typing.Type[Vector3]
vec4 = typing.Type[Vector4]

# Matrix types  
mat22 = typing.Type[Matrix22]
mat33 = typing.Type[Matrix33]
mat44 = typing.Type[Matrix44]

Types and Arrays

Kernel Programming and Built-in Functions

Kernel decorators, built-in mathematical functions, and programming constructs for writing high-performance GPU/CPU code within Warp kernels.

def kernel(func: Callable) -> Kernel: ...
def func(func: Callable) -> Function: ...

# Built-in functions available in kernels
def tid() -> int: ...
def min(a: Scalar, b: Scalar) -> Scalar: ...
def max(a: Scalar, b: Scalar) -> Scalar: ...
def abs(x: Scalar) -> Scalar: ...
def sqrt(x: Float) -> Float: ...

Kernel Programming

Finite Element Method (FEM)

Comprehensive finite element framework with geometry definitions, function spaces, quadrature, field operations, and integration capabilities for solving PDEs.

# Geometry
class Grid2D: ...
class Grid3D: ...
class Tetmesh: ...
class Hexmesh: ...

# Function spaces
def make_polynomial_space(geometry: Geometry, degree: int) -> FunctionSpace: ...
def integrate(integrand: Callable, domain: Domain) -> Field: ...

Finite Element Method

Framework Interoperability

Seamless data exchange and integration with popular machine learning and scientific computing frameworks including PyTorch, JAX, Paddle, and DLPack.

def from_torch(tensor) -> array: ...
def to_torch(arr: array): ...
def from_jax(array) -> array: ...
def to_jax(arr: array): ...
def from_numpy(array: np.ndarray) -> array: ...

Framework Integration

Optimization

Gradient-based optimizers for machine learning workflows, including Adam and SGD optimizers that work with Warp's differentiable kernels.

class Adam:
    def __init__(self, params: list, lr: float = 0.001): ...
    def step(self) -> None: ...

class SGD:
    def __init__(self, params: list, lr: float = 0.01): ...
    def step(self) -> None: ...

Optimization

Rendering

OpenGL and USD-based rendering capabilities for visualizing simulation results and creating graphics output.

class OpenGLRenderer:
    def __init__(self, width: int, height: int): ...
    def render(self, mesh: Mesh) -> None: ...

class UsdRenderer:
    def __init__(self, stage_path: str): ...
    def save(self, path: str) -> None: ...

Rendering

Utilities and Profiling

Performance profiling, context management, timing utilities, and helper functions for development and debugging.

class ScopedTimer:
    def __init__(self, name: str): ...
    def __enter__(self): ...
    def __exit__(self, *args): ...

def timing_begin() -> None: ...
def timing_end() -> float: ...

Utilities

Types

# Core device and execution types
class Device:
    def __str__(self) -> str: ...

class Kernel:
    def __call__(self, *args, **kwargs): ...

class Function:
    def __call__(self, *args, **kwargs): ...

# Array types
class array:
    shape: tuple
    dtype: type
    device: Device
    
    def numpy(self) -> np.ndarray: ...
    def __getitem__(self, key): ...
    def __setitem__(self, key, value): ...

# Geometry types for spatial computing
class Mesh:
    def __init__(self, vertices: array, indices: array): ...

class Volume:
    def __init__(self, data: array): ...

class Bvh:
    def __init__(self, mesh: Mesh): ...

# Type annotations for kernel parameters
Int = typing.TypeVar('Int')
Float = typing.TypeVar('Float') 
Scalar = typing.TypeVar('Scalar')