Algebraic Multigrid (AMG) solvers for large sparse linear systems with Python interface
—
Smoothing and relaxation methods for error correction in multigrid cycles. These methods reduce high-frequency error components and are essential for multigrid convergence.
Classical point-wise relaxation schemes for scalar problems.
def gauss_seidel(A, x, b, iterations=1, sweep='forward'):
"""
Gauss-Seidel relaxation.
Point-wise relaxation method that updates unknowns sequentially
using most recent values. Excellent smoother for many AMG problems.
Parameters:
- A: sparse matrix, coefficient matrix
- x: array, solution vector (modified in-place)
- b: array, right-hand side vector
- iterations: int, number of relaxation sweeps (default 1)
- sweep: str, sweep direction:
* 'forward': natural ordering (i = 0, 1, ..., n-1)
* 'backward': reverse ordering (i = n-1, ..., 1, 0)
* 'symmetric': forward then backward sweep
Returns:
None (x is modified in-place)
Notes:
- Updates x[i] = (b[i] - sum(A[i,j]*x[j] for j != i)) / A[i,i]
- Converges for strictly diagonally dominant or symmetric positive definite matrices
- More effective than Jacobi for many problems
"""
def jacobi(A, x, b, iterations=1, omega=1.0):
"""
Jacobi relaxation.
Point-wise relaxation using diagonal scaling. Simultaneous
updates make it naturally parallel but may converge slower
than Gauss-Seidel.
Parameters:
- A: sparse matrix, coefficient matrix
- x: array, solution vector (modified in-place)
- b: array, right-hand side vector
- iterations: int, number of relaxation sweeps
- omega: float, relaxation parameter (0 < omega <= 1):
* 1.0: standard Jacobi (default)
* 2/3: common damped choice
* optimal: 2/(lambda_max + lambda_min)
Returns:
None (x is modified in-place)
Notes:
- Updates x = x + omega * D^{-1} * (b - A*x)
- Parallel-friendly but may need damping
- Good for problems with strong diagonal dominance
"""Usage Examples:
import pyamg
import numpy as np
# Setup test problem
A = pyamg.gallery.poisson((50, 50))
b = np.random.rand(A.shape[0])
x = np.zeros_like(b)
# Gauss-Seidel relaxation
x_gs = x.copy()
pyamg.relaxation.gauss_seidel(A, x_gs, b, iterations=5)
residual_gs = np.linalg.norm(b - A @ x_gs)
print(f"Gauss-Seidel residual: {residual_gs:.2e}")
# Jacobi relaxation with damping
x_jacobi = x.copy()
pyamg.relaxation.jacobi(A, x_jacobi, b, iterations=5, omega=2.0/3.0)
residual_jacobi = np.linalg.norm(b - A @ x_jacobi)
print(f"Jacobi residual: {residual_jacobi:.2e}")
# Symmetric Gauss-Seidel
x_sym = x.copy()
pyamg.relaxation.gauss_seidel(A, x_sym, b, iterations=3, sweep='symmetric')
Block-wise relaxation methods for systems of equations and improved robustness.
def block_gauss_seidel(A, x, b, blocksize=None, iterations=1, sweep='forward'):
"""
Block Gauss-Seidel relaxation.
Relaxation method that solves small blocks of equations exactly
rather than single equations. More robust for systems problems
and strongly coupled equations.
Parameters:
- A: sparse matrix, coefficient matrix
- x: array, solution vector (modified in-place)
- b: array, right-hand side vector
- blocksize: int or None, block size:
* None: automatic detection from matrix structure
* int: fixed block size (e.g., 2 for 2D elasticity)
- iterations: int, number of block relaxation sweeps
- sweep: str, sweep direction ('forward', 'backward', 'symmetric')
Returns:
None (x is modified in-place)
Notes:
- Solves A_ii * x_i = b_i - sum(A_ij * x_j) for each block i
- More expensive per iteration but more effective
- Essential for systems like elasticity equations
"""
def block_jacobi(A, x, b, blocksize=None, iterations=1, omega=1.0):
"""
Block Jacobi relaxation.
Block version of Jacobi relaxation that solves diagonal blocks
exactly while treating off-diagonal blocks explicitly.
Parameters:
- A: sparse matrix, coefficient matrix (preferably block structured)
- x: array, solution vector (modified in-place)
- b: array, right-hand side vector
- blocksize: int or None, block size for decomposition
- iterations: int, number of block relaxation sweeps
- omega: float, block relaxation parameter
Returns:
None (x is modified in-place)
Notes:
- More robust than point Jacobi for systems
- Naturally parallel within each block solve
- Good for problems with physical coupling
"""Usage Examples:
# Block relaxation for elasticity
A_elastic, B_elastic = pyamg.gallery.linear_elasticity((30, 30))  # returns (matrix, near-nullspace modes)
b_elastic = np.random.rand(A_elastic.shape[0])
x_elastic = np.zeros_like(b_elastic)
# Block Gauss-Seidel with 2x2 blocks (displacement components)
pyamg.relaxation.block_gauss_seidel(A_elastic, x_elastic, b_elastic,
blocksize=2, iterations=3)
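The block solve from the notes, A_ii * x_i = b_i - sum(A_ij * x_j), can be sketched densely with a fixed block size. The helper below and its test matrix are assumptions for illustration; the library routine operates on sparse matrices in place:

```python
import numpy as np

def block_gs_sweep(A, x, b, bs):
    """One forward block Gauss-Seidel sweep with fixed block size bs (dense, illustrative)."""
    n = len(b)
    for s in range(0, n, bs):
        i = slice(s, s + bs)
        # Right-hand side for this block using current values of all other blocks
        rhs = b[i] - A[i, :] @ x + A[i, i] @ x[i]
        # Solve the bs-by-bs diagonal block exactly
        x[i] = np.linalg.solve(A[i, i], rhs)
    return x

rng = np.random.default_rng(2)
n = 24
A_d = rng.random((n, n)) + n * np.eye(n)   # strictly diagonally dominant
b_d = rng.random(n)
x_d = np.zeros(n)
r0 = np.linalg.norm(b_d - A_d @ x_d)
for _ in range(3):
    block_gs_sweep(A_d, x_d, b_d, 2)
r1 = np.linalg.norm(b_d - A_d @ x_d)
print(r1 < r0)  # residual shrinks
```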
# Block Jacobi
x_block_jacobi = np.zeros_like(b_elastic)
pyamg.relaxation.block_jacobi(A_elastic, x_block_jacobi, b_elastic,
blocksize=2, omega=0.8)
Advanced relaxation methods for specific problem types and performance optimization.
def sor_gauss_seidel(A, x, b, omega=1.0, iterations=1, sweep='forward'):
"""
SOR (Successive Over-Relaxation) Gauss-Seidel method.
Accelerated version of Gauss-Seidel with over-relaxation parameter.
Can improve convergence rate when properly tuned.
Parameters:
- A: sparse matrix, coefficient matrix
- x: array, solution vector (modified in-place)
- b: array, right-hand side vector
- omega: float, over-relaxation parameter:
* 1.0: standard Gauss-Seidel
* 1 < omega < 2: over-relaxation (can accelerate)
* 0 < omega < 1: under-relaxation (more stable)
- iterations: int, number of SOR sweeps
- sweep: str, sweep direction
Returns:
None (x is modified in-place)
Notes:
- Optimal omega depends on spectral properties of A
- Can dramatically improve or worsen convergence
- Requires careful parameter tuning
"""
def chebyshev(A, x, b, omega=1.0, degree=1):
"""
Chebyshev polynomial relaxation.
Uses Chebyshev polynomials to accelerate basic relaxation methods.
Provides optimal acceleration when eigenvalue bounds are known.
Parameters:
- A: sparse matrix, coefficient matrix
- x: array, solution vector (modified in-place)
- b: array, right-hand side vector
- omega: float, relaxation parameter
- degree: int, Chebyshev polynomial degree
Returns:
None (x is modified in-place)
Notes:
- Requires eigenvalue estimates for optimal performance
- Higher degree can improve convergence
- More complex than basic methods but potentially more effective
"""Usage Examples:
# SOR with over-relaxation
A = pyamg.gallery.poisson((40, 40))
b = np.random.rand(A.shape[0])
x_sor = np.zeros_like(b)
pyamg.relaxation.sor_gauss_seidel(A, x_sor, b, omega=1.2, iterations=5)
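The SOR update blends the old value with the Gauss-Seidel value: x[i] = (1 - omega) * x[i] + omega * x_gs[i]. A dense sketch on an assumed symmetric positive definite test system (where SOR converges for 0 < omega < 2), purely for illustration:

```python
import numpy as np

def sor_sweep(A, x, b, omega):
    """One forward SOR sweep on a dense matrix (illustration only)."""
    n = len(b)
    for i in range(n):
        sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
        gs_value = (b[i] - sigma) / A[i, i]
        # Over-relax: step past the Gauss-Seidel value when omega > 1
        x[i] = (1.0 - omega) * x[i] + omega * gs_value
    return x

rng = np.random.default_rng(3)
n = 20
M = rng.random((n, n))
A_d = (M + M.T) / 2 + n * np.eye(n)        # symmetric, diagonally dominant -> SPD
b_d = rng.random(n)
x_d = np.zeros(n)
r0 = np.linalg.norm(b_d - A_d @ x_d)
for _ in range(5):
    sor_sweep(A_d, x_d, b_d, 1.2)
r1 = np.linalg.norm(b_d - A_d @ x_d)
print(r1 < r0)  # residual shrinks
```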
# Chebyshev acceleration
x_cheby = np.zeros_like(b)
pyamg.relaxation.chebyshev(A, x_cheby, b, degree=3)
Krylov subspace methods used as smoothers within multigrid cycles.
def krylov_relaxation(A, x, b, method='cg', iterations=1, **kwargs):
"""
Krylov method as relaxation smoother.
Uses truncated Krylov methods as smoothers, providing
more sophisticated error reduction than classical methods.
Parameters:
- A: sparse matrix, coefficient matrix
- x: array, solution vector (modified in-place)
- b: array, right-hand side vector
- method: str, Krylov method:
* 'cg': Conjugate Gradient (for SPD)
* 'gmres': GMRES (general matrices)
* 'bicgstab': BiCGSTAB (nonsymmetric)
- iterations: int, number of Krylov iterations
- **kwargs: Krylov method parameters
Returns:
None (x is modified in-place)
Notes:
- More expensive per iteration than classical methods
- Can be very effective for difficult problems
- Requires careful iteration count selection
"""Usage Examples:
# CG as smoother for SPD problems
A_spd = pyamg.gallery.poisson((30, 30))
b_spd = np.random.rand(A_spd.shape[0])
x_cg_smooth = np.zeros_like(b_spd)
pyamg.relaxation.krylov_relaxation(A_spd, x_cg_smooth, b_spd,
method='cg', iterations=3)
# Default relaxation parameters
DEFAULT_SWEEP = 'forward'
DEFAULT_NITER = 1
SYMMETRIC_RELAXATION = ['jacobi', 'block_jacobi', 'chebyshev']
KRYLOV_RELAXATION = ['cg', 'gmres', 'bicgstab']
Methods can be specified as tuples with custom parameters:
# Custom relaxation configurations
presmoother = ('gauss_seidel', {'sweep': 'forward', 'iterations': 2})
postsmoother = ('jacobi', {'omega': 0.67, 'iterations': 1})
block_smoother = ('block_gauss_seidel', {'blocksize': 2, 'iterations': 1})
# Use in solver construction
ml = pyamg.ruge_stuben_solver(A,
presmoother=presmoother,
postsmoother=postsmoother)
# Comparison of relaxation effectiveness
def compare_relaxation_methods(A, b, methods):
x0 = np.zeros_like(b)
results = {}
for name, (method, params) in methods.items():
x = x0.copy()
method(A, x, b, **params)
residual = np.linalg.norm(b - A @ x)
results[name] = residual
return results
# Example comparison
A = pyamg.gallery.poisson((40, 40))
b = np.random.rand(A.shape[0])
methods = {
'Gauss-Seidel': (pyamg.relaxation.gauss_seidel, {'iterations': 3}),
'Jacobi': (pyamg.relaxation.jacobi, {'iterations': 3, 'omega': 2/3}),
'Symmetric GS': (pyamg.relaxation.gauss_seidel,
{'iterations': 2, 'sweep': 'symmetric'})
}
residuals = compare_relaxation_methods(A, b, methods)
for method, residual in residuals.items():
print(f"{method}: {residual:.2e}")# Conservative (stable) parameters
conservative_jacobi = ('jacobi', {'omega': 0.5, 'iterations': 2})
conservative_gs = ('gauss_seidel', {'sweep': 'symmetric', 'iterations': 1})
# Aggressive (faster but less stable) parameters
aggressive_sor = ('sor_gauss_seidel', {'omega': 1.5, 'iterations': 1})
aggressive_jacobi = ('jacobi', {'omega': 1.0, 'iterations': 1})
Install with Tessl CLI
npx tessl i tessl/pypi-pyamg