tessl/pypi-ultranest

Fit and compare complex models reliably and rapidly with advanced nested sampling techniques for Bayesian inference.

Workspace: tessl
Visibility: Public
Describes: pkg:pypi/ultranest@4.4.x (pypi)

To install, run:

npx @tessl/cli install tessl/pypi-ultranest@4.4.0


UltraNest

UltraNest is a Python library that implements advanced nested sampling techniques for Bayesian inference. It enables scientists and researchers to fit complex models to data, constrain model parameters through posterior probability distributions, and compare different models using Bayesian evidence calculations. The library specializes in Monte Carlo nested sampling methods that are particularly effective for high-dimensional parameter spaces and complex likelihood functions.

Package Information

  • Package Name: ultranest
  • Language: Python
  • Installation: pip install ultranest

Core Imports

import ultranest
from ultranest import ReactiveNestedSampler, NestedSampler

Common imports for sampling:

from ultranest import ReactiveNestedSampler, read_file, vectorize, warmstart_from_similar_file

Basic Usage

import numpy as np
from ultranest import ReactiveNestedSampler

# Define parameter names
param_names = ['x', 'y', 'z']

# Define log-likelihood function
def loglike(theta):
    """Log-likelihood function for your model"""
    x, y, z = theta
    # Your likelihood calculation here
    return -0.5 * (x**2 + y**2 + z**2)

# Define parameter transformation (unit cube to physical parameters)
def prior_transform(cube):
    """Transform from unit cube [0,1] to parameter space"""
    params = cube.copy()
    # Transform each parameter, e.g., uniform [-10, 10]
    params[0] = cube[0] * 20 - 10  # x
    params[1] = cube[1] * 20 - 10  # y  
    params[2] = cube[2] * 20 - 10  # z
    return params

# Create and run sampler
sampler = ReactiveNestedSampler(
    param_names, 
    loglike, 
    transform=prior_transform
)

# Run the sampling
result = sampler.run()

# Print results
sampler.print_results()

# Access results
print("Evidence (log Z):", result['logz'])
print("Parameter estimates:", result['posterior']['mean'])
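Because the prior transform is a pure function on the unit cube, it can be checked in isolation before any expensive sampling; a quick numpy sanity check of the transform used above:

```python
import numpy as np

def prior_transform(cube):
    """Map the unit cube [0, 1]^3 to a uniform box [-10, 10]^3."""
    params = cube.copy()
    params[:] = cube * 20 - 10  # same scaling for x, y, z
    return params

# Corners and center of the unit cube map where expected
assert np.allclose(prior_transform(np.zeros(3)), [-10, -10, -10])
assert np.allclose(prior_transform(np.ones(3)), [10, 10, 10])
assert np.allclose(prior_transform(np.full(3, 0.5)), [0, 0, 0])
```

Verifying the transform's bounds like this catches a common source of silent errors (a prior box that is narrower or wider than intended) before the sampler ever runs.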

Architecture

UltraNest's modular architecture enables flexible Bayesian inference workflows:

  • Core Samplers: ReactiveNestedSampler (adaptive) and NestedSampler (traditional) provide the main nested sampling algorithms
  • Step Samplers: Modular sampling techniques (Metropolis-Hastings, Slice Sampling, HMC) for efficient parameter space exploration
  • ML Friends Regions: Machine learning-based region definitions and transformations for constraining parameter spaces
  • Storage Backends: Flexible data storage options (HDF5, CSV, TSV) for results persistence
  • Visualization: Built-in plotting capabilities for posterior analysis and diagnostics

This design allows researchers to customize every aspect of the sampling process while maintaining ease of use for standard applications.
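The core idea these components serve can be seen in a deliberately naive nested sampling loop. This toy is illustrative only: it replaces the worst live point by brute-force rejection from the prior, which is exactly the inefficiency the MLFriends regions and step samplers exist to avoid, and it is not UltraNest's implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

def loglike(theta):
    """Toy 2D Gaussian log-likelihood (unnormalized)."""
    return -0.5 * np.sum(np.asarray(theta) ** 2, axis=-1)

def prior_transform(cube):
    """Uniform prior on the box [-10, 10]^2."""
    return cube * 20 - 10

n_live, n_iter, ndim = 100, 400, 2
live_u = rng.random((n_live, ndim))
live_logl = np.array([loglike(prior_transform(u)) for u in live_u])

logz, log_x = -np.inf, 0.0  # log evidence, log prior volume remaining
for _ in range(n_iter):
    worst = int(np.argmin(live_logl))
    # Expected shrinkage per iteration: X -> X * N/(N+1),
    # so the shell just removed has volume X/(N+1).
    log_shell = log_x - np.log(n_live + 1)
    logz = np.logaddexp(logz, live_logl[worst] + log_shell)
    log_x += np.log(n_live / (n_live + 1.0))
    # Replace the worst point by rejection sampling from the prior
    # (fine for a toy; hopeless in high dimensions).
    while True:
        u = rng.random(ndim)
        ll = loglike(prior_transform(u))
        if ll > live_logl[worst]:
            break
    live_u[worst], live_logl[worst] = u, ll

# Add the contribution of the remaining live points
logz = np.logaddexp(logz, np.log(np.mean(np.exp(live_logl))) + log_x)

# Analytic answer: Z = (1/400) * 2*pi, i.e. log Z is about -4.15;
# the toy estimate should land within a few tenths of that.
print(f"estimated log Z = {logz:.2f}")
```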

Capabilities

Core Nested Samplers

Main nested sampling implementations for Bayesian parameter estimation and model comparison. ReactiveNestedSampler provides adaptive sampling ideal for most applications, while NestedSampler offers traditional nested sampling with fixed live points.

class ReactiveNestedSampler:
    def __init__(
        self,
        param_names: list,
        loglike: callable,
        transform: callable = None,
        derived_param_names: list = [],
        wrapped_params: list = None,
        resume: str = 'subfolder',
        run_num: int = None,
        log_dir: str = None,
        num_test_samples: int = 2,
        draw_multiple: bool = True,
        num_bootstraps: int = 30,
        vectorized: bool = False,
        ndraw_min: int = 128,
        ndraw_max: int = 65536,
        storage_backend: str = 'hdf5',
        warmstart_max_tau: float = -1
    ): ...
    
    def run(self, **kwargs): ...
    def run_iter(self, **kwargs): ...
    def print_results(self, use_unicode: bool = True): ...
    def plot(self): ...
    def plot_corner(self): ...
    def plot_trace(self): ...
    def plot_run(self): ...
    def store_tree(self): ...
    
    @property
    def results(self): ...
    @property
    def stepsampler(self): ...

class NestedSampler:
    def __init__(
        self,
        param_names: list,
        loglike: callable,
        transform: callable = None,
        derived_param_names: list = [],
        resume: str = 'subfolder',
        run_num: int = None,
        log_dir: str = 'logs/test',
        num_live_points: int = 1000,
        vectorized: bool = False,
        wrapped_params: list = []
    ): ...
    
    def run(self, **kwargs): ...
    def print_results(self): ...
    def plot(self): ...

Core Samplers
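Model comparison with these samplers comes down to differencing the log-evidence of separate runs. A sketch with made-up numbers (the `logz` values below are hypothetical, standing in for `result['logz']` and `result['logzerr']` from two independent runs):

```python
import numpy as np

# Hypothetical log-evidence values from two separate sampler runs
logz_m1, logzerr_m1 = -112.3, 0.4   # model 1
logz_m2, logzerr_m2 = -117.8, 0.5   # model 2

# Bayes factor K = Z1 / Z2, computed in log space for stability
log_k = logz_m1 - logz_m2
k_err = np.hypot(logzerr_m1, logzerr_m2)  # error on ln K, in nats

print(f"ln K = {log_k:.1f} +/- {k_err:.1f}")
# ln K well above ~5 is usually read as strong evidence for model 1
# on Jeffreys-type scales
```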

Step Sampling Techniques

Modular step sampling algorithms for efficient exploration of parameter spaces within nested sampling. Includes Metropolis-Hastings, slice sampling, and specialized direction generators for different problem geometries.

class StepSampler: ...
class MHSampler(StepSampler): ...
class SliceSampler(StepSampler): ...

def CubeMHSampler(*args, **kwargs): ...
def RegionMHSampler(*args, **kwargs): ...
def CubeSliceSampler(*args, **kwargs): ...
def RegionSliceSampler(*args, **kwargs): ...
def BallSliceSampler(*args, **kwargs): ...
def RegionBallSliceSampler(*args, **kwargs): ...

def generate_random_direction(ui, region, scale=1): ...
def generate_cube_oriented_direction(ui, region, scale=1): ...
def generate_region_oriented_direction(ui, region, scale=1): ...

Step Samplers
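The direction generators differ mainly in how they pick a line through the current point for the next slice or Metropolis step. A minimal numpy sketch of the two basic strategies (simplified signatures; the real generators also take the current point and the region):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_direction(ndim, scale=1.0):
    """Isotropic direction: a normalized Gaussian draw. Sketch of the idea
    behind generate_random_direction."""
    v = rng.normal(size=ndim)
    return scale * v / np.linalg.norm(v)

def cube_oriented_direction(ndim, scale=1.0):
    """Axis-aligned direction: one randomly chosen unit-cube axis. Sketch of
    the idea behind generate_cube_oriented_direction."""
    v = np.zeros(ndim)
    v[rng.integers(ndim)] = scale
    return v

d1 = random_direction(5)
d2 = cube_oriented_direction(5)
assert np.isclose(np.linalg.norm(d1), 1.0)  # unit length
assert np.count_nonzero(d2) == 1            # single axis
```

Region-oriented variants apply the same idea in the whitened coordinates of the current region, so steps are long along the degenerate directions of the likelihood.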

ML Friends Regions

Machine learning-based region definitions and parameter transformations for constraining nested sampling. Provides efficient methods for defining likelihood regions and applying transformations to improve sampling efficiency.

class MLFriends: ...
class RobustEllipsoidRegion(MLFriends): ...
class SimpleRegion(RobustEllipsoidRegion): ...
class WrappingEllipsoid: ...

class ScalingLayer: ...
class AffineLayer(ScalingLayer): ...
class LocalAffineLayer(AffineLayer): ...

ML Friends Regions
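The scaling/affine layers exist to whiten the live points so that region construction happens in roughly isotropic coordinates, where a single distance scale is meaningful. A self-contained sketch of that idea (illustrative only, not UltraNest's implementation):

```python
import numpy as np

class ToyAffineLayer:
    """Minimal whitening transform in the spirit of AffineLayer: center the
    points and rescale by the Cholesky factor of their covariance."""

    def fit(self, points):
        self.mean = points.mean(axis=0)
        self.chol = np.linalg.cholesky(np.cov(points, rowvar=False))
        return self

    def transform(self, points):
        # Solve L y = (x - mean) for each point, i.e. y = L^{-1} (x - mean)
        return np.linalg.solve(self.chol, (points - self.mean).T).T

rng = np.random.default_rng(1)
# Strongly correlated 2D cloud
raw = rng.normal(size=(2000, 2)) @ np.array([[3.0, 0.0], [2.5, 0.3]])
white = ToyAffineLayer().fit(raw).transform(raw)
# Whitening with the cloud's own empirical covariance makes the
# transformed covariance the identity (up to floating point)
assert np.allclose(np.cov(white, rowvar=False), np.eye(2), atol=1e-6)
```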

Plotting and Visualization

Built-in plotting capabilities for posterior analysis, convergence diagnostics, and model comparison. Generate corner plots, trace plots, and run diagnostics to interpret nested sampling results.

def cornerplot(results, **kwargs): ...
def runplot(results, **kwargs): ...
def traceplot(results, **kwargs): ...

class PredictionBand: ...

Plotting

Utilities and File I/O

Essential utilities for data processing, file operations, and integration with other tools. Includes vectorization helpers, logging setup, and compatibility layers.

def read_file(log_dir: str, x_dim: int, **kwargs): ...
def warmstart_from_similar_file(
    usample_filename: str,
    param_names: list,
    loglike: callable,
    transform: callable,
    vectorized: bool = False,
    min_num_samples: int = 50
): ...
def vectorize(function: callable): ...
def create_logger(module_name: str, log_dir: str = None, level=logging.INFO): ...
def resample_equal(samples, weights, rstate=None): ...
def quantile(x, q, weights=None): ...

Utilities
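The weighted-quantile idea behind these helpers is simple enough to sketch in a few lines of numpy (an illustration of the concept; the real `quantile` helper's interpolation details may differ):

```python
import numpy as np

def weighted_quantile(x, q, weights=None):
    """Quantile of samples x under importance weights: sort, build the
    normalized cumulative weight, and interpolate."""
    x = np.asarray(x, dtype=float)
    if weights is None:
        return np.quantile(x, q)
    order = np.argsort(x)
    x, w = x[order], np.asarray(weights, dtype=float)[order]
    cdf = np.cumsum(w) / np.sum(w)
    return np.interp(q, cdf, x)

x = np.array([1.0, 2.0, 3.0, 4.0])
assert np.isclose(weighted_quantile(x, 0.5, np.ones(4)), 2.0)
# Upweighting the largest sample pulls the median toward it
assert np.isclose(weighted_quantile(x, 0.5, [1, 1, 1, 5]), 3.2)
```

This is the operation needed to summarize nested sampling posteriors, whose samples carry unequal weights until they are resampled to equal weight (the role of `resample_equal`).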

Configuration Options

Storage Backends

  • 'hdf5': HDF5 file storage (recommended for performance)
  • 'tsv': Tab-separated values (human-readable)
  • 'csv': Comma-separated values (compatible)

Resume Strategies

  • 'resume': Continue from existing run
  • 'resume-similar': Resume from similar run configuration
  • 'overwrite': Start fresh, overwriting existing files
  • 'subfolder': Create new subfolder for each run

Error Handling

UltraNest functions may raise various exceptions:

  • ValueError: Invalid parameter values or configurations
  • FileNotFoundError: Missing required files during resume operations
  • MemoryError: Insufficient memory for large parameter spaces
  • RuntimeError: Sampling convergence issues or numerical errors

Always wrap sampler creation and execution in appropriate exception handling for robust analysis pipelines.
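A minimal guard following this advice, using a hypothetical stand-in for the sampler so the pattern is visible without any UltraNest-specific setup (`make_sampler` would normally construct a ReactiveNestedSampler):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def run_analysis(make_sampler):
    """Guard sampler construction and execution; return results or None."""
    try:
        sampler = make_sampler()
        return sampler.run()
    except ValueError as exc:
        log.error("invalid parameter values or configuration: %s", exc)
    except FileNotFoundError as exc:
        log.error("files missing during resume: %s", exc)
    except (MemoryError, RuntimeError) as exc:
        log.error("sampling failed (%s); consider fewer live points "
                  "or a step sampler", exc)
    return None

# Demonstrate with a stub that fails the way a misconfigured sampler might
class BadSampler:
    def run(self):
        raise ValueError("transform returned NaN")

assert run_analysis(BadSampler) is None  # error logged, pipeline continues
```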