tessl/pypi-torch

Deep learning framework providing tensor computation with GPU acceleration and dynamic neural networks with automatic differentiation.

Workspace: tessl
Visibility: Public
Describes: pkg:pypi/torch@2.8.x (pypi)
Files: tile.json; docs/ (index.md, tensor-operations.md, neural-networks.md, training.md, mathematical-functions.md, devices-distributed.md, advanced-features.md)

To install, run:

npx @tessl/cli install tessl/pypi-torch@2.8.0

PyTorch

PyTorch is a comprehensive deep learning framework that provides tensor computation with strong GPU acceleration and dynamic neural networks built on a tape-based autograd system. It offers a Python-first approach to machine learning, allowing researchers and developers to build and train neural networks using familiar Python syntax while maintaining high performance through optimized C++ and CUDA backends.

Package Information

  • Package Name: torch
  • Language: Python
  • Installation: pip install torch
  • GPU Support: pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

Core Imports

import torch

Common additional imports:

import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.utils.data import DataLoader, Dataset

Basic Usage

import torch
import torch.nn as nn
import torch.optim as optim

# Create tensors
x = torch.tensor([[1.0, 2.0], [3.0, 4.0]], requires_grad=True)
y = torch.tensor([[5.0], [6.0]])

# Define a simple neural network
class SimpleNet(nn.Module):
    def __init__(self):
        super(SimpleNet, self).__init__()
        self.linear = nn.Linear(2, 1)
    
    def forward(self, x):
        return self.linear(x)

# Initialize model, loss function, and optimizer
model = SimpleNet()
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

# Forward pass
output = model(x)
loss = criterion(output, y)

# Backward pass and optimization
optimizer.zero_grad()
loss.backward()
optimizer.step()

print(f"Loss: {loss.item()}")
print(f"Gradients: {x.grad}")

Architecture

PyTorch's design centers around dynamic computation graphs and the autograd system:

  • Tensors: Multi-dimensional arrays with automatic differentiation support
  • Autograd: Automatic differentiation engine that records operations for backpropagation
  • nn.Module: Base class for neural network components with parameter management
  • Optimizers: Algorithms for updating model parameters during training
  • Device Abstraction: Unified interface for CPU, CUDA, MPS, and XPU backends
  • JIT Compilation: TorchScript for optimizing models for deployment

This architecture enables rapid prototyping in research while scaling to production deployments across various hardware platforms.
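As a minimal sketch of the tape-based autograd system in action, the following builds a tiny computation eagerly and differentiates it; every operation on a tensor with requires_grad=True is recorded as it runs, which is what makes the graph dynamic (the values are arbitrary):

import torch

# Operations on x are recorded on the autograd tape as they execute
x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()   # graph is built on the fly during the forward pass
y.backward()         # replay the tape in reverse to compute gradients
print(x.grad)        # tensor([4., 6.]), i.e. dy/dx = 2x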

Capabilities

Core Tensor Operations

Fundamental tensor creation, manipulation, and mathematical operations. Tensors are the primary data structure supporting automatic differentiation and GPU acceleration.

def tensor(data, *, dtype=None, device=None, requires_grad=False, pin_memory=False) -> Tensor: ...
def zeros(*size, dtype=None, device=None, requires_grad=False) -> Tensor: ...
def ones(*size, dtype=None, device=None, requires_grad=False) -> Tensor: ...
def rand(*size, dtype=None, device=None, requires_grad=False) -> Tensor: ...
def randn(*size, dtype=None, device=None, requires_grad=False) -> Tensor: ...
def arange(start=0, end, step=1, *, dtype=None, device=None, requires_grad=False) -> Tensor: ...  # doc-style signature: end is required, start and step are optional
def linspace(start, end, steps, *, dtype=None, device=None, requires_grad=False) -> Tensor: ...
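
A short usage sketch for these factory functions (shapes and values chosen arbitrarily):

import torch

a = torch.zeros(2, 3)                       # 2x3 zeros, default dtype float32
b = torch.ones(2, 3, dtype=torch.float64)   # explicit dtype
c = torch.rand(2, 3)                        # uniform samples in [0, 1)
d = torch.randn(2, 3)                       # standard normal samples
e = torch.arange(0, 10, 2)                  # tensor([0, 2, 4, 6, 8])
f = torch.linspace(0.0, 1.0, steps=5)       # 5 evenly spaced points in [0, 1]
print(a.shape, e.dtype, f)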

Full reference: docs/tensor-operations.md

Neural Networks

Complete neural network building blocks including layers, activation functions, loss functions, and containers for building deep learning models.

class Module:
    def forward(self, *input): ...
    def parameters(self, recurse=True): ...
    def named_parameters(self, prefix='', recurse=True): ...
    def zero_grad(self, set_to_none=True): ...

class Linear(Module):
    def __init__(self, in_features: int, out_features: int, bias: bool = True): ...

class Conv2d(Module): 
    def __init__(self, in_channels: int, out_channels: int, kernel_size, stride=1, padding=0): ...

class ReLU(Module):
    def __init__(self, inplace: bool = False): ...

class CrossEntropyLoss(Module):
    def __init__(self, weight=None, ignore_index=-100, reduction='mean', label_smoothing=0.0): ...
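
To show how these blocks compose, here is a small illustrative classifier assembled with nn.Sequential; the layer sizes and batch are arbitrary placeholders:

import torch
import torch.nn as nn

# Tiny CNN for 1-channel 28x28 inputs (sizes are illustrative)
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # -> (N, 8, 28, 28)
    nn.ReLU(),
    nn.Flatten(),                               # -> (N, 8 * 28 * 28)
    nn.Linear(8 * 28 * 28, 10),                 # -> (N, 10) class scores
)

criterion = nn.CrossEntropyLoss()
inputs = torch.randn(4, 1, 28, 28)              # fake batch of 4 images
targets = torch.randint(0, 10, (4,))            # fake class labels
loss = criterion(model(inputs), targets)
print(loss.item())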

Full reference: docs/neural-networks.md

Training and Optimization

Optimizers, learning rate schedulers, and training utilities for model optimization and parameter updates.

class Optimizer:
    def step(self, closure=None): ...
    def zero_grad(self, set_to_none=True): ...

class SGD(Optimizer):
    def __init__(self, params, lr, momentum=0, dampening=0, weight_decay=0): ...

class Adam(Optimizer):
    def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=0): ...

class StepLR:
    def __init__(self, optimizer, step_size, gamma=0.1): ...
    def step(self, epoch=None): ...
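
A minimal training-loop sketch wiring an optimizer to a scheduler; the model and data are placeholders, and stepping the scheduler once per epoch is one common convention:

import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim.lr_scheduler import StepLR

model = nn.Linear(10, 1)                                # placeholder model
optimizer = optim.Adam(model.parameters(), lr=1e-3)
scheduler = StepLR(optimizer, step_size=10, gamma=0.1)  # decay lr 10x every 10 epochs

for epoch in range(30):
    x, y = torch.randn(16, 10), torch.randn(16, 1)      # placeholder batch
    optimizer.zero_grad()                               # clear stale gradients
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()                                     # compute gradients
    optimizer.step()                                    # update parameters
    scheduler.step()                                    # advance the lr schedule

print(scheduler.get_last_lr())                          # ~[1e-06] after 3 decays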

Full reference: docs/training.md

Mathematical Functions

Comprehensive mathematical operations including linear algebra, FFT, special functions, and statistical operations.

def matmul(input: Tensor, other: Tensor) -> Tensor: ...
def dot(input: Tensor, other: Tensor) -> Tensor: ...
def sum(input: Tensor, dim=None, keepdim=False, *, dtype=None) -> Tensor: ...
def mean(input: Tensor, dim=None, keepdim=False, *, dtype=None) -> Tensor: ...
def std(input: Tensor, dim=None, *, correction=1, keepdim=False) -> Tensor: ...
def max(input: Tensor, dim=None, keepdim=False): ...  # Tensor; (values, indices) namedtuple when dim is given
def min(input: Tensor, dim=None, keepdim=False): ...  # Tensor; (values, indices) namedtuple when dim is given
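
For example (note that max and min return a (values, indices) pair when dim is given):

import torch

a = torch.randn(3, 4)
b = torch.randn(4, 2)
print(torch.matmul(a, b).shape)         # torch.Size([3, 2])
print(torch.sum(a), torch.mean(a))      # full reductions -> 0-dim tensors
print(torch.mean(a, dim=0).shape)       # reduce rows -> torch.Size([4])
values, indices = torch.max(a, dim=1)   # per-row max and its column index
print(values.shape, indices.shape)      # torch.Size([3]) torch.Size([3])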

Full reference: docs/mathematical-functions.md

Device and Distributed Computing

Device management, CUDA operations, distributed training, and multi-GPU support for scaling deep learning workloads.

def cuda.is_available() -> bool: ...
def cuda.device_count() -> int: ...
def cuda.get_device_name(device=None) -> str: ...
def cuda.set_device(device): ...

class DistributedDataParallel(Module):
    def __init__(self, module, device_ids=None, output_device=None): ...

def distributed.init_process_group(backend, init_method=None, timeout=default_pg_timeout): ...
def distributed.all_reduce(tensor, op=ReduceOp.SUM, group=None, async_op=False): ...
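
A common device-selection sketch built from these queries, falling back to CPU when no accelerator is present (the preference order is a policy choice, not an API requirement):

import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    print(torch.cuda.device_count(), torch.cuda.get_device_name(0))
elif torch.backends.mps.is_available():    # Apple Silicon GPU
    device = torch.device("mps")
else:
    device = torch.device("cpu")

x = torch.randn(2, 2).to(device)           # move data to the selected device
print(x.device)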

Full reference: docs/devices-distributed.md

Advanced Features

JIT compilation, model export, graph transformations, quantization, and deployment utilities for optimizing and deploying models.

def jit.script(obj, optimize=None, _frames_up=0, _rcb=None): ...
def jit.trace(func, example_inputs, optimize=None, check_trace=True): ...

def export.export(mod: torch.nn.Module, args, kwargs=None, *, dynamic_shapes=None): ...

def compile(model=None, *, fullgraph=False, dynamic=None, backend="inductor"): ...

def quantization.quantize_dynamic(model, qconfig_spec=None, dtype=torch.qint8): ...
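
As one illustration, torch.compile wraps an existing module without other code changes; actual speedups depend on the model, backend, and hardware, and the first call pays a compilation cost:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
compiled = torch.compile(model)      # uses the default inductor backend
out = compiled(torch.randn(4, 8))    # first call triggers compilation
print(out.shape)                     # torch.Size([4, 1])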

Full reference: docs/advanced-features.md

Core Types

class Tensor:
    """Multi-dimensional array with automatic differentiation support."""
    def __init__(self, data, *, dtype=None, device=None, requires_grad=False): ...
    def backward(self, gradient=None, retain_graph=None, create_graph=False): ...
    def detach(self) -> Tensor: ...
    def numpy(self) -> numpy.ndarray: ...
    def cuda(self, device=None, non_blocking=False) -> Tensor: ...
    def cpu(self) -> Tensor: ...
    def to(self, *args, **kwargs) -> Tensor: ...
    def size(self, dim=None): ...
    shape: torch.Size  # attribute, not a method; equivalent to size()
    def dim(self) -> int: ...
    def numel(self) -> int: ...
    def item(self) -> number: ...
    def clone(self) -> Tensor: ...
    def requires_grad_(self, requires_grad=True) -> Tensor: ...

class dtype:
    """Data type specification for tensors; instances are exposed at module level, e.g. torch.float32."""
    float32: dtype
    float64: dtype  
    int32: dtype
    int64: dtype
    bool: dtype
    uint8: dtype

class device:
    """Device specification for tensor placement."""
    def __init__(self, device): ...
    
class Size(tuple):
    """Tensor shape representation."""
    def numel(self) -> int: ...

class Generator:
    """Random number generator state."""
    def manual_seed(self, seed: int) -> Generator: ...
    def get_state(self) -> Tensor: ...
    def set_state(self, new_state: Tensor) -> Generator: ...
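
A short sketch exercising these types for reproducible randomness and explicit dtype/device placement:

import torch

g = torch.Generator().manual_seed(42)   # dedicated, reproducible RNG state
a = torch.rand(2, 2, generator=g)
g.manual_seed(42)                       # reset the state
b = torch.rand(2, 2, generator=g)
print(torch.equal(a, b))                # True: identical draws

x = torch.zeros(3, dtype=torch.int64, device=torch.device("cpu"))
print(x.dtype, x.device, x.shape.numel())   # torch.int64 cpu 3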