
tessl/pypi-pymc3

Probabilistic Programming in Python: Bayesian Modeling and Probabilistic Machine Learning with Theano


Mathematical Functions

PyMC3 provides a comprehensive set of mathematical functions built on Theano for tensor operations, link functions, and specialized mathematical computations commonly used in Bayesian modeling. These functions support automatic differentiation and efficient computation.

Capabilities

Link Functions

Functions for transforming between constrained and unconstrained parameter spaces.

def logit(p):
    """
    Logit (log-odds) transformation for probabilities.
    
    Transforms probability values to the real line using logit(p) = log(p/(1-p)).
    
    Parameters:
    - p: tensor, probability values in (0, 1)
    
    Returns:
    - tensor: logit-transformed values on real line
    """

def invlogit(x, eps=None):
    """
    Inverse logit (sigmoid) transformation.
    
    Transforms real values to probabilities using invlogit(x) = 1/(1+exp(-x)).
    
    Parameters:
    - x: tensor, real-valued input
    - eps: float, epsilon for numerical stability
    
    Returns:
    - tensor: probability values in (0, 1)
    """

def probit(p):
    """
    Probit transformation for probabilities.
    
    Transforms probability values using inverse normal CDF.
    
    Parameters:
    - p: tensor, probability values in (0, 1)
    
    Returns:
    - tensor: probit-transformed values
    """

def invprobit(x):
    """
    Inverse probit transformation.
    
    Transforms real values to probabilities using normal CDF.
    
    Parameters:
    - x: tensor, real-valued input
    
    Returns:
    - tensor: probability values in (0, 1)
    """
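As a plain-NumPy reference for what these four transformations compute (a sketch of the math only, not the Theano implementation; `statistics.NormalDist` stands in for the normal CDF and its inverse):

```python
import numpy as np
from statistics import NormalDist

_nd = NormalDist()

def logit(p):
    # log-odds: maps (0, 1) to the real line
    return np.log(p / (1 - p))

def invlogit(x):
    # sigmoid: maps the real line back to (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def probit(p):
    # inverse normal CDF
    return _nd.inv_cdf(p)

def invprobit(x):
    # normal CDF
    return _nd.cdf(x)

# round trips recover the original probability
p = 0.25
print(invlogit(logit(p)), invprobit(probit(p)))  # both ≈ 0.25
```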

Log-Space Operations

Numerically stable operations in log space for probability computations.

def logsumexp(x, axis=None, keepdims=True):
    """
    Numerically stable log-sum-exp operation.
    
    Computes log(sum(exp(x))) in a numerically stable way by factoring
    out the maximum value before exponentiation.
    
    Parameters:
    - x: tensor, input values
    - axis: int or tuple, axis along which to sum
    - keepdims: bool, keep reduced dimensions
    
    Returns:
    - tensor: log-sum-exp result
    """

def logaddexp(a, b):
    """
    Numerically stable log(exp(a) + exp(b)).
    
    Parameters:
    - a: tensor, first input
    - b: tensor, second input
    
    Returns:
    - tensor: log(exp(a) + exp(b))
    """

def logdiffexp(a, b):
    """
    Numerically stable log(exp(a) - exp(b)) for a > b.
    
    Parameters:
    - a: tensor, larger input (a > b)
    - b: tensor, smaller input
    
    Returns:
    - tensor: log(exp(a) - exp(b))
    """

def log1pexp(x):
    """
    Numerically stable log(1 + exp(x)).
    
    Uses different computational strategies depending on input range
    to maintain numerical stability.
    
    Parameters:
    - x: tensor, input values
    
    Returns:
    - tensor: log(1 + exp(x))
    """

def log1mexp(x):
    """
    Numerically stable log(1 - exp(x)) for x < 0.
    
    Parameters:
    - x: tensor, input values (should be < 0)
    
    Returns:
    - tensor: log(1 - exp(x))
    """
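The stability trick all of these helpers rely on is worth seeing once in plain NumPy: a direct `log(sum(exp(x)))` overflows for large inputs, while factoring out the maximum keeps every exponential in range (a sketch, not the Theano code):

```python
import numpy as np

def logsumexp(x, axis=None, keepdims=True):
    # factor out the max so the exponentials stay in range
    m = np.max(x, axis=axis, keepdims=True)
    out = m + np.log(np.sum(np.exp(x - m), axis=axis, keepdims=True))
    return out if keepdims else np.squeeze(out, axis=axis)

x = np.array([1000.0, 1000.0])
# naive: np.log(np.sum(np.exp(x))) -> inf, because exp(1000) overflows
print(float(logsumexp(x)))  # ~ 1000.6931 (= 1000 + log 2)
```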

Special Functions

Specialized mathematical functions for Bayesian modeling.

def logbern(log_p):
    """
    Log-Bernoulli sampling from log probabilities.
    
    Samples binary values from Bernoulli distribution given log probabilities,
    useful for categorical and discrete choice models.
    
    Parameters:
    - log_p: tensor, log probabilities
    
    Returns:
    - tensor: binary samples
    """
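The sampling rule can be sketched in NumPy: drawing `u ~ Uniform(0, 1)` and testing `log(u) < log_p` succeeds with probability `exp(log_p)`, without ever leaving log space:

```python
import numpy as np

def logbern(log_p):
    # success iff log(u) < log_p, i.e. u < exp(log_p)
    return np.log(np.random.uniform()) < log_p

print(logbern(np.log(0.5)))  # True about half the time
print(logbern(0.0))          # log_p = 0 -> probability 1 -> always True
```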

def expand_packed_triangular(n, packed, lower=True, diagonal_only=False):
    """
    Expand packed triangular matrix to full matrix.
    
    Converts a packed lower or upper triangular representation to a full
    triangular matrix, commonly used for Cholesky factors of covariance matrices.
    
    Parameters:
    - n: int, matrix dimension
    - packed: tensor, packed triangular values
    - lower: bool, expand as lower triangular
    - diagonal_only: bool, expand only diagonal elements
    
    Returns:
    - tensor: full triangular matrix (or just the diagonal if diagonal_only=True)
    """
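The packing convention (the triangle stored row by row) can be illustrated with a small NumPy sketch:

```python
import numpy as np

def expand_packed_triangular(n, packed, lower=True):
    # packed stores the triangle row by row:
    # lower: [m00, m10, m11, m20, m21, m22, ...]
    out = np.zeros((n, n))
    idx = np.tril_indices(n) if lower else np.triu_indices(n)
    out[idx] = packed
    return out

L = expand_packed_triangular(3, np.arange(1.0, 7.0))
print(L)
# [[1. 0. 0.]
#  [2. 3. 0.]
#  [4. 5. 6.]]
```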

def flatten_list(tensors):
    """
    Flatten list of tensors into single vector.
    
    Concatenates multiple tensors after flattening each to 1D,
    useful for parameter vectorization.
    
    Parameters:
    - tensors: list, tensors to flatten and concatenate
    
    Returns:
    - tensor: flattened concatenated vector
    """

Matrix Operations

Specialized matrix operations for multivariate distributions and linear algebra.

def kronecker(*Ks):
    """
    Kronecker product of multiple matrices.
    
    Computes kronecker product K1 ⊗ K2 ⊗ ... ⊗ Kn for efficient
    computation of multivariate and matrix-normal distributions.
    
    Parameters:
    - Ks: matrices, input matrices for kronecker product
    
    Returns:
    - tensor: kronecker product matrix
    """

def kron_diag(*diags):
    """
    Kronecker product of diagonal matrices.
    
    Efficient computation of kronecker product when all inputs
    are diagonal matrices.
    
    Parameters:
    - diags: tensors, diagonal elements of matrices
    
    Returns:
    - tensor: diagonal of resulting kronecker product
    """
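Why this is cheaper: for diagonal inputs, the Kronecker product's diagonal is just the flattened outer product of the diagonals, so the full matrices never need to be built. A NumPy check of that identity:

```python
import numpy as np

d1 = np.array([1.0, 2.0])
d2 = np.array([3.0, 4.0, 5.0])

# efficient: kron of diagonals without forming the full matrices
kd = np.outer(d1, d2).ravel()

# reference: diagonal of the full kronecker product
full = np.diag(np.kron(np.diag(d1), np.diag(d2)))
print(kd)                     # [ 3.  4.  5.  6.  8. 10.]
print(np.allclose(kd, full))  # True
```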

def batched_diag(C):
    """
    Convert between batched diagonals and batched diagonal matrices.
    
    Given a batch of square matrices, extracts their diagonals; given a
    batch of vectors, builds the corresponding diagonal matrices.
    
    Parameters:
    - C: tensor, batch of matrices (..., n, n) or batch of diagonals (..., n)
    
    Returns:
    - tensor: batch of diagonals (..., n) or batch of diagonal matrices (..., n, n)
    """

def block_diagonal(matrices, sparse=False, format='csr'):
    """
    Create block diagonal matrix from list of matrices.
    
    Constructs block diagonal matrix with input matrices on diagonal
    and zeros elsewhere.
    
    Parameters:
    - matrices: list, matrices to place on diagonal
    - sparse: bool, return sparse matrix
    - format: str, sparse matrix format if sparse=True
    
    Returns:
    - tensor: block diagonal matrix
    """
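A dense NumPy sketch of the construction (the PyMC3 version operates on Theano tensors and can also return a sparse result):

```python
import numpy as np

def block_diagonal(matrices):
    # place each matrix on the diagonal, zeros elsewhere
    rows = sum(m.shape[0] for m in matrices)
    cols = sum(m.shape[1] for m in matrices)
    out = np.zeros((rows, cols))
    r = c = 0
    for m in matrices:
        out[r:r + m.shape[0], c:c + m.shape[1]] = m
        r += m.shape[0]
        c += m.shape[1]
    return out

B = block_diagonal([np.ones((2, 2)), 2 * np.ones((1, 1))])
print(B)
# [[1. 1. 0.]
#  [1. 1. 0.]
#  [0. 0. 2.]]
```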

def cartesian(*arrays):
    """
    Cartesian product of input arrays.
    
    Generates cartesian product of input arrays, useful for
    creating parameter grids and design matrices.
    
    Parameters:
    - arrays: arrays to compute cartesian product of
    
    Returns:
    - tensor: cartesian product array
    """

def flat_outer(a, b):
    """
    Flattened outer product of two vectors.
    
    Computes outer product and flattens result to 1D vector.
    
    Parameters:
    - a: tensor, first input vector
    - b: tensor, second input vector
    
    Returns:
    - tensor: flattened outer product
    """
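Both helpers have simple NumPy analogues, sketched here for reference: `cartesian` enumerates all combinations (one per row, first array varying slowest), and `flat_outer` is a raveled outer product:

```python
import numpy as np
from itertools import product

def cartesian(*arrays):
    # all combinations, one per row
    return np.array(list(product(*arrays)))

def flat_outer(a, b):
    # outer product flattened to 1-D
    return np.outer(a, b).ravel()

print(cartesian([1, 2], [10, 20]))
# [[ 1 10]
#  [ 1 20]
#  [ 2 10]
#  [ 2 20]]
print(flat_outer([1, 2], [3, 4]))  # [3 4 6 8]
```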

Utility Functions

General utility functions for tensor operations and numerical computations.

def tround(*args, **kwargs):
    """
    Theano rounding with consistent tie-breaking.
    
    Wraps Theano's round and forces mode='half_to_even' (bankers' rounding)
    so results match NumPy's rounding behavior.
    
    Parameters:
    - args: positional arguments passed to Theano's round
    - kwargs: keyword arguments passed to Theano's round
    
    Returns:
    - tensor: rounded values
    """

Usage Examples

Link Function Transformations

import pymc3 as pm
import numpy as np

# observed binary outcomes (illustrative)
data = np.random.binomial(1, 0.5, size=100)

with pm.Model() as model:
    # Unconstrained parameter
    alpha_raw = pm.Normal('alpha_raw', 0, 1)
    
    # Transform to probability using inverse logit
    alpha = pm.math.invlogit(alpha_raw)
    
    # Use in Bernoulli likelihood
    y = pm.Bernoulli('y', p=alpha, observed=data)
    
    trace = pm.sample(1000)

# Manual transformation
prob_values = np.array([0.1, 0.5, 0.9])
logit_values = pm.math.logit(prob_values)
back_to_prob = pm.math.invlogit(logit_values)

Numerically Stable Log Operations

import pymc3 as pm
import numpy as np

# observed values (illustrative)
data = np.random.randn(100)

# Log-sum-exp for mixture models
with pm.Model() as model:
    # Mixture weights, moved to log space
    w = pm.Dirichlet('w', a=np.ones(3))
    log_weights = pm.math.log(w)
    
    # Component means
    mu = pm.Normal('mu', 0, 1, shape=3)
    
    # Log likelihoods for each component
    log_likes = pm.Normal.dist(mu, 1).logp(data[:, None])
    
    # Mixture log likelihood using logsumexp
    mixture_logp = pm.math.logsumexp(log_weights + log_likes, axis=1)
    
    # Custom likelihood
    pm.Potential('mixture', mixture_logp.sum())
    
    trace = pm.sample(1000)

Matrix Operations for Multivariate Models

import pymc3 as pm
import numpy as np

# observed 3x2 matrix-valued data (illustrative)
data = np.random.randn(3, 2)

# Kronecker-structured covariance
with pm.Model() as model:
    # Cholesky factors of the row and column covariances
    chol_row, _, _ = pm.LKJCholeskyCov('chol_row', n=3, eta=2.0,
                                       sd_dist=pm.HalfNormal.dist(1.0),
                                       compute_corr=True)
    chol_col, _, _ = pm.LKJCholeskyCov('chol_col', n=2, eta=2.0,
                                       sd_dist=pm.HalfNormal.dist(1.0),
                                       compute_corr=True)
    Sigma_row = pm.math.dot(chol_row, chol_row.T)
    Sigma_col = pm.math.dot(chol_col, chol_col.T)
    
    # Kronecker product covariance
    Sigma = pm.math.kronecker(Sigma_row, Sigma_col)
    
    # Matrix normal as a flattened multivariate normal
    mu = pm.Normal('mu', 0, 1, shape=(3, 2))
    X = pm.MvNormal('X', 
                    mu=mu.flatten(), 
                    cov=Sigma, 
                    observed=data.flatten())
    
    trace = pm.sample(1000)

# Block diagonal covariance
data = np.random.randn(10, 5)  # illustrative 5-dimensional observations

with pm.Model() as model:
    # Cholesky factors of the individual covariance blocks
    chol1, _, _ = pm.LKJCholeskyCov('chol1', n=2, eta=2.0,
                                    sd_dist=pm.HalfNormal.dist(1.0),
                                    compute_corr=True)
    chol2, _, _ = pm.LKJCholeskyCov('chol2', n=3, eta=2.0,
                                    sd_dist=pm.HalfNormal.dist(1.0),
                                    compute_corr=True)
    Sigma1 = pm.math.dot(chol1, chol1.T)
    Sigma2 = pm.math.dot(chol2, chol2.T)
    
    # Block diagonal structure
    Sigma = pm.math.block_diagonal([Sigma1, Sigma2])
    
    # Multivariate normal with block structure
    mu = pm.Normal('mu', 0, 1, shape=5)
    y = pm.MvNormal('y', mu=mu, cov=Sigma, observed=data)
    
    trace = pm.sample(1000)

Packed Triangular Matrices

import pymc3 as pm
import numpy as np

# observed draws from a 3-dimensional normal (illustrative)
data = np.random.randn(100, 3)

# Cholesky parameterization
with pm.Model() as model:
    # Packed lower triangular elements
    n_dim = 3
    n_packed = n_dim * (n_dim + 1) // 2
    
    # Note: a plain Normal prior does not keep the diagonal positive;
    # pm.LKJCholeskyCov is the usual choice in practice
    packed_L = pm.Normal('packed_L', 0, 1, shape=n_packed)
    
    # Expand to a full lower-triangular Cholesky factor
    L = pm.math.expand_packed_triangular(n_dim, packed_L, lower=True)
    
    # Implied covariance matrix (for reference; MvNormal takes chol directly)
    Sigma = pm.math.dot(L, L.T)
    
    # Multivariate normal parameterized by its Cholesky factor
    mu = pm.Normal('mu', 0, 1, shape=n_dim)
    y = pm.MvNormal('y', mu=mu, chol=L, observed=data)
    
    trace = pm.sample(1000)

Install with Tessl CLI

npx tessl i tessl/pypi-pymc3
