tessl/pypi-ultranest

Fit and compare complex models reliably and rapidly with advanced nested sampling techniques for Bayesian inference.

Utilities and File I/O

Essential utilities for data processing, file operations, and integration with other tools. UltraNest provides comprehensive support functions for nested sampling workflows, data manipulation, and compatibility with external analysis packages.

Capabilities

File Input/Output

Functions for reading and writing nested sampling results and intermediate data.

def read_file(
    log_dir: str,
    x_dim: int,
    num_bootstraps: int = 20,
    random: bool = True,
    verbose: bool = False,
    check_insertion_order: bool = True
):
    """
    Read UltraNest output files from a completed run.
    
    Parameters:
    - log_dir (str): Directory containing output files
    - x_dim (int): Dimensionality of parameter space
    - num_bootstraps (int): Number of bootstraps for estimating logZ
    - random (bool): Use randomization for volume estimation
    - verbose (bool): Show progress during reading
    - check_insertion_order (bool): Perform MWW insertion order test for convergence assessment
    
    Returns:
    tuple: (sequence, final)
        - sequence (dict): Per-iteration run information
        - final (dict): Final results, including:
            - 'samples': Equally weighted posterior samples
            - 'logz': Evidence estimate
            - 'logzerr': Evidence uncertainty
            - 'posterior': Posterior summary statistics
    """

File Reading Usage

from ultranest import read_file

# Read results from a previous run
sequence, results = read_file(
    log_dir='logs/my_analysis/',
    x_dim=3,  # Three parameters
    num_bootstraps=20,
    verbose=True
)

print(f"Evidence: {results['logz']:.2f} ± {results['logzerr']:.2f}")

Warm Starting

Accelerate a new run by initialising it from the posterior of a similar previous run, for example when the data or model have changed slightly.

def warmstart_from_similar_file(
    usample_filename: str,
    param_names: list,
    loglike: callable,
    transform: callable,
    vectorized: bool = False,
    min_num_samples: int = 50
):
    """
    Initialize sampling using a similar previous run as a starting point.
    
    Parameters:
    - usample_filename (str): Path to file containing weighted posterior samples
    - param_names (list): Names of parameters being sampled
    - loglike (callable): New log-likelihood function
    - transform (callable): New transform function
    - vectorized (bool): Whether functions accept multiple points
    - min_num_samples (int): Minimum number of samples required
    
    Returns:
    tuple: (aux_param_names, aux_loglike, aux_transform, vectorized)
        Components for auxiliary sampler initialization
    """

Warm Start Usage

from ultranest import warmstart_from_similar_file, ReactiveNestedSampler

# Warm start from previous run
aux_paramnames, aux_loglike, aux_transform, vectorized = warmstart_from_similar_file(
    'previous_run/chains/weighted_post_untransformed.txt',
    param_names,
    new_loglike,
    new_transform
)

# Create auxiliary sampler
aux_sampler = ReactiveNestedSampler(
    aux_paramnames, 
    aux_loglike, 
    transform=aux_transform,
    vectorized=vectorized
)

# Run auxiliary sampling for warm start
aux_results = aux_sampler.run()

Function Vectorization

Utilities for optimizing likelihood function performance through vectorization.

def vectorize(function: callable):
    """
    Wrap a single-point function so it accepts batches of points.
    
    Parameters:
    - function (callable): Function accepting a single parameter vector
    
    Returns:
    callable: Wrapped function that accepts an array of parameter vectors
    
    Usage:
    Original function: f(theta) -> scalar
    Vectorized function: f_vec(theta_array) -> array
    """

Vectorization Usage

from ultranest.utils import vectorize
import numpy as np

# Original likelihood function
def loglike(theta):
    x, y, z = theta
    return -0.5 * (x**2 + y**2 + z**2)

# Vectorize for performance
vectorized_loglike = vectorize(loglike)

# Can now process multiple points efficiently
theta_array = np.random.randn(100, 3)  # 100 parameter sets
loglike_values = vectorized_loglike(theta_array)  # 100 likelihood values

Logging and Directory Management

Tools for organizing analysis runs and managing output files.

def create_logger(
    module_name: str,
    log_dir: str = None,
    level=logging.INFO
):
    """
    Create logger for UltraNest analysis with appropriate formatting.
    
    Parameters:
    - module_name (str): Name of module for logger identification
    - log_dir (str, optional): Directory for log files
    - level: Logging level (DEBUG, INFO, WARNING, ERROR)
    
    Returns:
    logging.Logger: Configured logger instance
    """

def make_run_dir(
    log_dir: str,
    run_num: int = None,
    **kwargs
):
    """
    Create directory structure for analysis run with appropriate naming.
    
    Parameters:
    - log_dir (str): Base directory for runs
    - run_num (int, optional): Specific run number
    - **kwargs: Additional directory creation options
    
    Returns:
    str: Path to created run directory
    """

Logging Usage

import logging
from ultranest.utils import create_logger, make_run_dir

# Set up logging for analysis
logger = create_logger('my_analysis', log_dir='logs/', level=logging.INFO)

# Create organized run directory
run_dir = make_run_dir('logs/', run_num=1)
logger.info(f"Created run directory: {run_dir}")

# Use logger throughout analysis
logger.info("Starting nested sampling analysis")
logger.warning("Convergence slow, consider increasing live points")

Statistical Utilities

Functions for statistical analysis and data manipulation of nested sampling results.

def resample_equal(samples, weights, rstate=None):
    """
    Resample weighted samples to create equal-weight posterior samples.
    
    Parameters:
    - samples (array): Weighted samples, shape (n_samples, n_params)
    - weights (array): Sample weights, shape (n_samples,)
    - rstate (RandomState, optional): Random number generator state
    
    Returns:
    array: Equal-weight samples
    """

def quantile(x, q, weights=None):
    """
    Compute weighted quantiles from sample array.
    
    Parameters:
    - x (array): Sample values
    - q (float or array): Quantile(s) to compute (0-1)
    - weights (array, optional): Sample weights
    
    Returns:
    float or array: Quantile value(s)
    """

Statistical Analysis Usage

import numpy as np
from ultranest.utils import resample_equal, quantile

# Extract weighted samples from results
samples = results['samples']
weights = results['weights']

# Create equal-weight samples for analysis
equal_samples = resample_equal(samples, weights)

# Compute statistics
for i, param_name in enumerate(['x', 'y', 'z']):
    param_samples = samples[:, i]
    
    # Weighted quantiles
    median = quantile(param_samples, 0.5, weights=weights)
    q16 = quantile(param_samples, 0.16, weights=weights)
    q84 = quantile(param_samples, 0.84, weights=weights)
    
    print(f"{param_name}: {median:.3f} +{q84-median:.3f} -{median-q16:.3f}")

Mathematical Utilities

Mathematical functions for nested sampling analysis and geometric calculations.

def vol_prefactor(n: int):
    """
    Volume prefactor for n-dimensional unit sphere.
    
    Parameters:
    - n (int): Dimensionality
    
    Returns:
    float: Volume prefactor V_n = π^(n/2) / Γ(n/2 + 1)
    """

def is_affine_transform(a, b):
    """
    Check if transformation from a to b is affine.
    
    Parameters:
    - a (array): Input points
    - b (array): Transformed points
    
    Returns:
    bool: True if transformation is affine
    """

def normalised_kendall_tau_distance(values1, values2):
    """
    Compute the normalised Kendall tau distance between two sequences,
    a measure of rank-order disagreement.
    
    Parameters:
    - values1 (array): First sequence of values
    - values2 (array): Second sequence of values
    
    Returns:
    float: Normalised Kendall tau distance (0 for identical ordering, 1 for reversed)
    """
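
vol_prefactor implements the standard unit-ball volume formula V_n = π^(n/2) / Γ(n/2 + 1). As a self-contained illustration of that formula (plain standard library, not the UltraNest implementation itself):

```python
import math

def unit_ball_volume(n):
    """Volume of the n-dimensional unit ball: pi^(n/2) / gamma(n/2 + 1)."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

print(unit_ball_volume(2))  # area of the unit disc: pi
print(unit_ball_volume(3))  # volume of the unit ball: 4*pi/3
```

The volume shrinks rapidly with dimension, one reason high-dimensional nested sampling regions need careful construction.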

Validation and Testing

Tools for validating implementations and testing numerical accuracy.

def verify_gradient(**kwargs):
    """
    Verify gradient implementations using finite differences.
    
    Useful for validating custom likelihood gradients for HMC sampling.
    
    Returns:
    dict: Validation results and error estimates
    """
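
For intuition, gradient verification by central finite differences can be sketched without UltraNest at all (hypothetical `check_gradient` helper, not the library's `verify_gradient`):

```python
import numpy as np

def check_gradient(loglike, gradient, theta, eps=1e-6, rtol=1e-4):
    """Compare an analytic gradient against central finite differences."""
    theta = np.asarray(theta, dtype=float)
    fd = np.empty_like(theta)
    for i in range(theta.size):
        step = np.zeros_like(theta)
        step[i] = eps
        # central difference along coordinate i
        fd[i] = (loglike(theta + step) - loglike(theta - step)) / (2 * eps)
    return bool(np.allclose(np.asarray(gradient(theta), dtype=float), fd,
                            rtol=rtol, atol=1e-6))

# Standard-normal log-likelihood and its exact gradient
loglike = lambda t: -0.5 * np.sum(t ** 2)
gradient = lambda t: -t

print(check_gradient(loglike, gradient, [0.3, -1.2, 0.7]))  # True
```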

Parallel Processing

Utilities for distributed computing and parallel processing workflows.

def distributed_work_chunk_size(**kwargs):
    """
    Determine optimal work chunk size for distributed MPI processing.
    
    Returns:
    int: Recommended chunk size for current MPI configuration
    """
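
The underlying idea — splitting N tasks as evenly as possible across MPI ranks — can be sketched in plain Python (hypothetical `work_chunk` helper, not the library function):

```python
def work_chunk(num_tasks, rank, size):
    """Split num_tasks as evenly as possible across `size` ranks;
    return the (start, stop) slice handled by `rank`."""
    base, extra = divmod(num_tasks, size)
    # the first `extra` ranks take one additional task each
    start = rank * base + min(rank, extra)
    stop = start + base + (1 if rank < extra else 0)
    return start, stop

# 10 tasks over 3 ranks: ranks 0-2 get slices (0, 4), (4, 7), (7, 10)
print([work_chunk(10, r, 3) for r in range(3)])
```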

def listify(*args):
    """
    Concatenate list arguments into a single flat list.
    
    Parameters:
    - *args: Lists to concatenate
    
    Returns:
    list: Single list containing the elements of all arguments
    """

Storage Backends

Classes for data persistence and file management during nested sampling runs.

class NullPointStore:
    """
    No-op storage implementation for testing and benchmarking.
    
    Provides storage interface without actual file operations,
    useful for performance testing and dry runs.
    """

class TextPointStore:
    """
    Text-based storage using CSV/TSV formats.
    
    Human-readable storage format suitable for small analyses
    and debugging. Uses tab-separated or comma-separated values.
    """

class HDF5PointStore:
    """
    HDF5-based storage for high-performance applications.
    
    Recommended storage backend for production use. Provides
    efficient binary storage with compression and fast access.
    """

Compatibility Layer

Functions for integration with other nested sampling and Bayesian analysis tools.

def pymultinest_solve_compat(**kwargs):
    """
    Drop-in replacement for PyMultiNest's solve function.
    
    Provides compatibility interface for existing PyMultiNest workflows,
    allowing easy migration to UltraNest with minimal code changes.
    
    Parameters:
    Similar to PyMultiNest.solve() interface
    
    Returns:
    Results in PyMultiNest-compatible format
    """

Compatibility Usage

from ultranest.solvecompat import pymultinest_solve_compat

# Drop-in replacement for existing PyMultiNest code
# pymultinest.solve(...) becomes:
result = pymultinest_solve_compat(
    LogLikelihood=loglike,
    Prior=prior_transform,
    n_dims=3,
    n_live_points=1000,
    outputfiles_basename='chains/analysis_'
)

Advanced Utility Usage

Custom Analysis Pipeline

import logging
import numpy as np
from ultranest import ReactiveNestedSampler
from ultranest.utils import (
    create_logger, make_run_dir, vectorize,
    resample_equal, quantile
)

# Set up analysis infrastructure
logger = create_logger('bayesian_analysis', level=logging.INFO)
run_dir = make_run_dir('analyses/', run_num=None)  # Auto-increment

# Simple example prior: uniform on [-5, 5] for each parameter
def prior_transform(cube):
    return 10 * cube - 5

# Wrap the likelihood for batched evaluation
@vectorize
def optimized_loglike(theta):
    """Single-point likelihood, wrapped so the sampler can pass batches"""
    # Your likelihood calculation
    return -0.5 * np.sum(theta**2, axis=-1)

# Run analysis
logger.info("Starting nested sampling")
sampler = ReactiveNestedSampler(
    param_names=['x', 'y', 'z'],
    loglike=optimized_loglike,
    transform=prior_transform,
    log_dir=run_dir,
    vectorized=True
)

results = sampler.run()
logger.info(f"Analysis complete. Evidence: {results['logz']:.2f}")

# Post-processing
samples = results['samples']
weights = results['weights']

# Statistical analysis
equal_samples = resample_equal(samples, weights)
medians = [quantile(equal_samples[:, i], 0.5) for i in range(3)]
logger.info(f"Parameter medians: {medians}")

# Save processed results
import pickle
with open(f"{run_dir}/processed_results.pkl", 'wb') as f:
    pickle.dump({
        'results': results,
        'equal_samples': equal_samples,
        'medians': medians
    }, f)

Integration with External Tools

# Convert to getdist format (for plotting with getdist)
try:
    from getdist import MCSamples, plots
    
    # Create getdist samples object
    gd_samples = MCSamples(
        samples=equal_samples,
        names=['x', 'y', 'z'],
        labels=['X', 'Y', 'Z']
    )
    
    # Use getdist plotting
    g = plots.get_subplot_plotter()
    g.triangle_plot(gd_samples, filled=True)
    
except ImportError:
    logger.warning("getdist not available, using UltraNest plotting")
    from ultranest.plot import cornerplot
    cornerplot(results)

Memory and Performance Optimization

import numpy as np
from ultranest.utils import vol_prefactor

# Optimize for high-dimensional problems
n_dims = 50
logger.info(f"N-sphere volume prefactor: {vol_prefactor(n_dims)}")

# Use appropriate data types for memory efficiency
samples = results['samples']
samples_float32 = samples.astype(np.float32)
logger.info(f"Memory saved: {samples.nbytes - samples_float32.nbytes} bytes")

# Chunk processing for large datasets
chunk_size = 10000
n_samples = len(equal_samples)

for i in range(0, n_samples, chunk_size):
    chunk = equal_samples[i:i+chunk_size]
    # Process chunk
    logger.info(f"Processed chunk {i//chunk_size + 1}/{(n_samples-1)//chunk_size + 1}")

The utilities module provides essential infrastructure for robust nested sampling workflows, from basic file operations to advanced statistical analysis and integration with external tools.

Install with Tessl CLI

npx tessl i tessl/pypi-ultranest
