The Python ensemble sampling toolkit for affine-invariant MCMC
```shell
npx @tessl/cli install tessl/pypi-emcee@3.1.0
```

emcee is a stable, well-tested Python implementation of the affine-invariant ensemble sampler for Markov chain Monte Carlo (MCMC) proposed by Goodman & Weare (2010). It provides a robust framework for Bayesian parameter estimation and model comparison through efficient ensemble sampling methods, specifically designed for scientific computing and statistical analysis.
```shell
pip install emcee
```

```python
import emcee
```

For specific components:

```python
from emcee import EnsembleSampler, State
from emcee import moves, backends, autocorr
```

```python
import emcee
import numpy as np

# Define log probability function
def log_prob(theta):
    # Example: 2D Gaussian
    return -0.5 * np.sum(theta**2)

# Set up sampler
nwalkers = 32
ndim = 2
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)

# Initialize walker positions
initial_state = np.random.randn(nwalkers, ndim)

# Run MCMC
sampler.run_mcmc(initial_state, nsteps=1000)

# Get results
chain = sampler.get_chain()
log_prob_samples = sampler.get_log_prob()
```

emcee's architecture centers on ensemble-based MCMC sampling:
The ensemble approach enables efficient sampling by having walkers interact and learn from each other, making it particularly effective for complex, multimodal distributions.
Core ensemble MCMC sampling functionality with the EnsembleSampler class, supporting various initialization methods, sampling control, and result retrieval.
```python
class EnsembleSampler:
    def __init__(self, nwalkers: int, ndim: int, log_prob_fn: callable,
                 pool=None, moves=None, args=None, kwargs=None,
                 backend=None, vectorize: bool = False, blobs_dtype=None,
                 parameter_names=None): ...
    def run_mcmc(self, initial_state, nsteps: int, **kwargs): ...
    def sample(self, initial_state, iterations: int = 1, **kwargs): ...
    def get_chain(self, **kwargs): ...
    def get_log_prob(self, **kwargs): ...
    def get_autocorr_time(self, **kwargs): ...
```

Comprehensive collection of proposal move algorithms for generating new walker positions, including stretch moves, differential evolution, kernel density estimation, and Metropolis-Hastings variants.
```python
class StretchMove:
    def __init__(self, a: float = 2.0): ...

class DEMove:
    def __init__(self, sigma: float = 1e-5, gamma0: float = None): ...

class KDEMove:
    def __init__(self, bw_method=None): ...
```

Flexible storage systems for persisting MCMC chains, supporting both in-memory and file-based backends with features like compression, chunking, and resumable sampling.
```python
class Backend:
    def __init__(self, dtype=None): ...
    def get_chain(self, **kwargs): ...
    def get_log_prob(self, **kwargs): ...

class HDFBackend(Backend):
    def __init__(self, filename: str, name: str = "mcmc", read_only: bool = False): ...
```

Statistical tools for assessing MCMC chain convergence through autocorrelation analysis, including integrated autocorrelation time estimation and diagnostic functions.
```python
def integrated_time(x, c: int = 5, tol: int = 50, quiet: bool = False,
                    has_walkers: bool = True): ...

def function_1d(x): ...

class AutocorrError(Exception): ...
```

Walker state representation and manipulation, providing a unified interface for handling walker positions, log probabilities, metadata blobs, and random number generator states.
```python
class State:
    def __init__(self, coords, log_prob=None, blobs=None, random_state=None,
                 copy: bool = False): ...
    coords: np.ndarray
    log_prob: np.ndarray
    blobs: any
    random_state: any
```

```python
# Core state representation
class State:
    coords: np.ndarray      # Walker positions [nwalkers, ndim]
    log_prob: np.ndarray    # Log probabilities [nwalkers]
    blobs: any              # Metadata blobs
    random_state: any       # Random number generator state
```
```python
# Exception for autocorrelation analysis
class AutocorrError(Exception):
    pass
```
```python
# Model representation (internal)
from collections import namedtuple
Model = namedtuple("Model", ["log_prob_fn", "compute_log_prob_fn", "map_fn", "random"])
```