tessl/pypi-ray

Ray is a unified framework for scaling AI and Python applications.

Workspace: tessl
Visibility: Public
Describes: pypi pkg:pypi/ray@2.49.x

To install, run

npx @tessl/cli install tessl/pypi-ray@2.49.0


Ray

Ray is a unified framework for scaling AI and Python applications. It pairs a core distributed runtime with a set of AI libraries that simplify ML compute, enabling parallel and distributed execution of Python code with minimal changes. On top of the core runtime, Ray provides libraries for data processing, training, hyperparameter tuning, reinforcement learning, and model serving.

Package Information

  • Package Name: ray
  • Language: Python
  • Installation: pip install ray
  • Documentation: https://docs.ray.io/

Core Imports

import ray

For specific libraries:

import ray.data
import ray.train
import ray.tune
import ray.serve

Basic Usage

import ray

# Initialize Ray
ray.init()

# Define a remote function
@ray.remote
def compute_something(x):
    return x * x

# Execute remotely and get object reference
future = compute_something.remote(4)

# Get the result
result = ray.get(future)
print(result)  # 16

# Define a remote class (Actor)
@ray.remote
class Counter:
    def __init__(self):
        self.count = 0
    
    def increment(self):
        self.count += 1
        return self.count

# Create and use an actor
counter = Counter.remote()
result = ray.get(counter.increment.remote())
print(result)  # 1

# Shutdown Ray
ray.shutdown()

Architecture

Ray's architecture consists of:

  • Core Runtime: Distributed task execution engine with actors, tasks, and object store
  • Ray Data: Distributed data processing for ML workloads
  • Ray Train: Distributed training with multi-framework support (PyTorch, TensorFlow, XGBoost)
  • Ray Tune: Hyperparameter tuning and experiment management
  • Ray Serve: Scalable model serving and application deployment
  • Ray RLlib: Reinforcement learning library
  • Ray AIR: legacy unified ML workflow APIs spanning data, train, tune, and serve (deprecated in recent Ray 2.x releases)

Capabilities

Core Distributed Computing

Core Ray functionality for distributed task execution, actor management, and object storage. Includes initialization, remote execution, data management, and cluster utilities.

def init(address=None, **kwargs): ...
def get(object_refs, timeout=None): ...
def put(value, **kwargs): ...
def remote(num_cpus=None, num_gpus=None, **kwargs): ...
def wait(object_refs, num_returns=1, timeout=None): ...
def shutdown(): ...
def show_in_dashboard(message: str, key: str = "", dtype: str = "text"): ...
def cpp_function(function_name: str): ...
def java_function(class_name: str, function_name: str): ...
def java_actor_class(class_name: str): ...
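
A minimal sketch of these primitives working together, assuming a local Ray runtime (the square and total tasks are illustrative):

import ray

ray.init()

@ray.remote
def square(x):
    return x * x

# Launch tasks asynchronously; each call returns an ObjectRef.
refs = [square.remote(i) for i in range(8)]

# Block until at least four of the tasks have finished.
done, pending = ray.wait(refs, num_returns=4)
print(ray.get(done))

# put() stores a value in the object store once so many tasks can share it.
data_ref = ray.put(list(range(1000)))

@ray.remote
def total(data):
    return sum(data)

print(ray.get(total.remote(data_ref)))  # 499500

ray.shutdown()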


Data Processing

Distributed data processing capabilities for ML workloads. Provides datasets, transformations, and integrations with ML frameworks and storage systems.

class Dataset:
    def map(self, fn, **kwargs): ...
    def filter(self, fn, **kwargs): ...
    def groupby(self, key): ...
    def to_torch(self, **kwargs): ...

def read_parquet(paths, **kwargs): ...
def read_csv(paths, **kwargs): ...
def read_json(paths, **kwargs): ...
def read_bigquery(query, **kwargs): ...
def read_delta(table_uri, **kwargs): ...
def read_mongo(uri, database, collection, **kwargs): ...
def read_tfrecords(paths, **kwargs): ...
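
A short sketch of the Dataset API; the transformations are real ray.data calls, while the inline data stands in for a read_* call against a hypothetical path:

import ray
import ray.data

ray.init()

# Build a small in-memory Dataset; read_csv and friends are used the same
# way against real files, e.g. ray.data.read_csv("s3://bucket/x.csv").
ds = ray.data.from_items([{"x": i} for i in range(100)])

# Rows are dicts; map and filter run in parallel across the cluster.
ds = ds.map(lambda row: {"x": row["x"], "y": row["x"] ** 2})
ds = ds.filter(lambda row: row["y"] % 2 == 0)

print(ds.take(3))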


Distributed Training

Distributed training for machine learning with support for PyTorch, TensorFlow, XGBoost, and other frameworks. Includes fault-tolerant training and automatic scaling.

class Trainer:
    def fit(self, dataset=None): ...
    def predict(self, dataset): ...

class TorchTrainer(Trainer): ...
class TensorflowTrainer(Trainer): ...
class XGBoostTrainer(Trainer): ...
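
A hedged sketch of a TorchTrainer run; TorchTrainer and ScalingConfig are the real ray.train entry points, but the training-loop body here is a placeholder:

import ray.train
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer

def train_loop_per_worker():
    # A real loop would build a model, wrap it with
    # ray.train.torch.prepare_model, and iterate over data;
    # here we only report a dummy metric.
    ray.train.report({"loss": 0.0})

trainer = TorchTrainer(
    train_loop_per_worker,
    scaling_config=ScalingConfig(num_workers=2),  # two training workers
)
result = trainer.fit()
print(result.metrics)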


Hyperparameter Tuning

Comprehensive hyperparameter optimization with multiple search algorithms, schedulers, and experiment management. Supports all major ML frameworks.

class Tuner:
    def fit(self): ...
    def get_results(self): ...

class TuneConfig:
    def __init__(self, metric=None, mode=None, **kwargs): ...

def grid_search(values): ...
class BasicVariantGenerator: ...
class HyperOptSearch: ...
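
A minimal Tuner sketch; the objective function and search space are illustrative, while Tuner, TuneConfig, and uniform are real ray.tune APIs:

from ray import tune

def objective(config):
    # Function trainables may return a final metrics dict.
    return {"score": (config["x"] - 0.5) ** 2}

tuner = tune.Tuner(
    objective,
    param_space={"x": tune.uniform(0.0, 1.0)},
    tune_config=tune.TuneConfig(metric="score", mode="min", num_samples=10),
)
results = tuner.fit()
print(results.get_best_result().config)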


Model Serving

Scalable model serving and application deployment with automatic scaling, batching, and multi-model support.

@serve.deployment
class ModelDeployment: ...

def start(detached=False, http_options=None): ...
def run(target, **kwargs): ...
def shutdown(): ...
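
A small serving sketch; serve.deployment, bind, and serve.run are real ray.serve APIs, and the Echo deployment is illustrative:

import requests
from ray import serve

@serve.deployment
class Echo:
    def __call__(self, request):
        # A real deployment would run model inference here.
        return {"echo": request.query_params.get("text", "")}

# bind() builds the application graph; serve.run deploys it locally.
serve.run(Echo.bind(), route_prefix="/")

print(requests.get("http://localhost:8000/?text=hi").json())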


Reinforcement Learning

Reinforcement learning algorithms and environments with support for distributed training and various RL frameworks.

class Policy:
    def compute_actions(self, obs_batch): ...
    def learn_on_batch(self, samples): ...

class Algorithm:
    def train(self): ...
    def evaluate(self): ...
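
A brief RLlib sketch using the PPO algorithm; PPOConfig and Algorithm.train are real APIs, and the environment choice is illustrative:

from ray.rllib.algorithms.ppo import PPOConfig

# Configure PPO on a Gymnasium environment and build an Algorithm.
config = PPOConfig().environment("CartPole-v1")
algo = config.build()

# Each train() call runs one training iteration and returns a metrics dict.
result = algo.train()
print(result["training_iteration"])

algo.stop()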


Utilities and Advanced Features

Utility functions, placement groups, debugging tools, actor pools, and advanced distributed computing features.

class PlacementGroup:
    def ready(self): ...

def placement_group(bundles, strategy="PACK"): ...
def get_placement_group(name): ...
class ActorPool: ...
def init_collective_group(world_size, rank, backend="nccl"): ...
def allreduce(tensor, group_name="default", op="SUM"): ...
def broadcast(tensor, src_rank, group_name="default"): ...
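
A sketch combining placement groups with an ActorPool; these are real ray.util APIs, and the Worker actor is illustrative:

import ray
from ray.util import ActorPool
from ray.util.placement_group import placement_group

ray.init()

# Reserve one bundle of 1 CPU, packed onto as few nodes as possible.
pg = placement_group([{"CPU": 1}], strategy="PACK")
ray.get(pg.ready())  # block until the reservation is granted

@ray.remote
class Worker:
    def work(self, x):
        return x + 1

# ActorPool load-balances submitted tasks over a fixed set of actors.
pool = ActorPool([Worker.remote() for _ in range(2)])
print(list(pool.map(lambda actor, v: actor.work.remote(v), range(4))))  # [1, 2, 3, 4]

ray.shutdown()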


Types

# Core ID Types
class ObjectRef: ...
class ObjectRefGenerator: ...
class DynamicObjectRefGenerator: ...
class ActorID: ...
class TaskID: ...
class JobID: ...
class NodeID: ...
class PlacementGroupID: ...
class ClusterID: ...

# Runtime Types
class LoggingConfig:
    def __init__(self, encoding="TEXT", log_level="INFO"): ...

# Language Support
class Language:
    PYTHON = "PYTHON"
    JAVA = "JAVA"
    CPP = "CPP"
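
These types mostly appear as return values rather than being constructed directly; a quick sketch with ObjectRef:

import ray

ray.init()

@ray.remote
def f():
    return 1

ref = f.remote()
# Remote invocations hand back ObjectRef handles into the object store.
print(isinstance(ref, ray.ObjectRef))  # True
print(ray.get(ref))                    # 1

ray.shutdown()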