
tessl/pypi-cupy-cuda110

NumPy & SciPy for GPU - CUDA 11.0 compatible package providing GPU-accelerated computing with Python through a NumPy/SciPy-compatible array library


Linear Algebra

GPU-accelerated linear algebra operations leveraging optimized CUDA libraries (cuBLAS, cuSOLVER) for matrix operations, decompositions, and solving linear systems. Provides comprehensive functionality for numerical linear algebra computations.

Capabilities

Matrix Products

High-performance matrix multiplication and related operations.

def dot(a, b, out=None):
    """
    Dot product of two arrays.
    
    Parameters:
    - a: array_like, first input array
    - b: array_like, second input array  
    - out: ndarray, optional output array
    
    Returns:
    cupy.ndarray: Dot product of a and b
    """

def matmul(x1, x2, out=None):
    """Matrix product of two arrays."""

def inner(a, b):
    """Inner product of two arrays."""

def outer(a, b, out=None):
    """Compute outer product of two vectors."""

def tensordot(a, b, axes=2):
    """Compute tensor dot product along specified axes."""

def kron(a, b):
    """Kronecker product of two arrays."""

def cross(a, b, axisa=-1, axisb=-1, axisc=-1, axis=None):
    """Return the cross product of two (arrays of) vectors."""

def vdot(a, b):
    """Return the dot product of two vectors, conjugating the first argument if it is complex."""
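The distinction between dot and vdot matters for complex inputs: vdot conjugates its first argument, yielding the Hermitian inner product. Since the API mirrors NumPy's, the sketch below uses NumPy; on a GPU, substitute import cupy as np.

```python
import numpy as np  # CuPy-compatible API; use `import cupy as np` on a GPU

a = np.array([1 + 2j, 3 + 4j])
b = np.array([5 + 6j, 7 + 8j])

# dot: plain sum of products, no conjugation
plain = np.dot(a, b)       # (1+2j)(5+6j) + (3+4j)(7+8j) = -18+68j

# vdot: conjugates the first argument (Hermitian inner product)
hermitian = np.vdot(a, b)  # (1-2j)(5+6j) + (3-4j)(7+8j) = 70-8j
```

For real-valued inputs the two agree; for complex data, vdot is the one that gives a real, non-negative result when a and b are the same vector.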

Matrix Decompositions

Advanced matrix factorization methods for numerical analysis.

def svd(a, full_matrices=True, compute_uv=True):
    """
    Singular Value Decomposition.
    
    Parameters:
    - a: array_like, input matrix to decompose  
    - full_matrices: bool, compute full U and Vh matrices
    - compute_uv: bool, compute U and Vh in addition to s
    
    Returns:
    U, s, Vh: ndarrays, SVD factorization matrices
    """

def qr(a, mode='reduced'):
    """Compute QR factorization of matrix."""

def cholesky(a):
    """Compute Cholesky decomposition of positive-definite matrix."""

def eig(a):
    """Compute eigenvalues and eigenvectors of square array."""

def eigh(a, UPLO='L'):
    """Compute eigenvalues and eigenvectors of Hermitian matrix."""

def eigvals(a):
    """Compute eigenvalues of general matrix."""

def eigvalsh(a, UPLO='L'):
    """Compute eigenvalues of Hermitian matrix."""

Linear Systems

Solve linear equations and matrix inversions.

def solve(a, b):
    """
    Solve linear matrix equation ax = b.
    
    Parameters:
    - a: array_like, coefficient matrix  
    - b: array_like, ordinate values
    
    Returns:
    cupy.ndarray: Solution to system ax = b
    """

def lstsq(a, b, rcond=None):
    """Return least-squares solution to linear matrix equation."""

def inv(a):
    """Compute multiplicative inverse of matrix."""

def pinv(a, rcond=1e-15):
    """Compute Moore-Penrose pseudoinverse of matrix."""

def tensorinv(a, ind=2):
    """Compute tensor multiplicative inverse."""

def tensorsolve(a, b, axes=None):
    """Solve tensor equation a x = b for x."""
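tensorsolve generalizes solve to higher-rank operands: it finds x such that tensordot(a, x, axes=x.ndim) equals b, where a's trailing dimensions match x's shape. A minimal sketch (NumPy, since the API is identical; swap in cupy on a GPU):

```python
import numpy as np  # CuPy-compatible API; use `import cupy as np` on a GPU

rng = np.random.default_rng(1)
# a is (2, 3, 2, 3) and b is (2, 3), so the solution x is also (2, 3):
# a's leading dims match b, and its trailing dims define x's shape
a = rng.standard_normal((2, 3, 2, 3))
b = rng.standard_normal((2, 3))

x = np.linalg.tensorsolve(a, b)

# tensorsolve finds x with tensordot(a, x, axes=x.ndim) == b
residual = np.abs(np.tensordot(a, x, axes=2) - b).max()
```

Internally this amounts to reshaping a into a square (6, 6) matrix and solving an ordinary linear system, so that reshaped matrix must be invertible.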

Matrix Properties

Compute various matrix properties and norms.

def norm(x, ord=None, axis=None, keepdims=False):
    """
    Matrix or vector norm.
    
    Parameters:
    - x: array_like, input array
    - ord: {non-zero int, inf, -inf, 'fro', 'nuc'}, order of norm
    - axis: int/tuple, axis along which to compute norm
    - keepdims: bool, keep dimensions of input
    
    Returns:
    cupy.ndarray: Norm of matrix or vector
    """

def det(a):
    """Compute determinant of array."""

def slogdet(a):
    """Compute sign and logarithm of determinant."""

def trace(a, offset=0, axis1=0, axis2=1, dtype=None, out=None):
    """Return sum along diagonals of array."""

def matrix_rank(M, tol=None, hermitian=False):
    """Return matrix rank using SVD method."""

def cond(x, p=None):
    """Compute condition number of matrix."""
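slogdet exists because det can overflow or underflow for large matrices: it returns the sign and the logarithm of the absolute determinant, staying in log space. A short NumPy sketch of the relationship (replace np with cp on a GPU):

```python
import numpy as np  # CuPy-compatible API; use `import cupy as np` on a GPU

rng = np.random.default_rng(2)
A = rng.standard_normal((500, 500))

# det() can overflow/underflow at this size; slogdet stays in log space
sign, logabsdet = np.linalg.slogdet(A)

# On a small matrix, where det() is representable, the two agree:
B = rng.standard_normal((5, 5))
s, l = np.linalg.slogdet(B)
agree = np.isclose(np.linalg.det(B), s * np.exp(l))
```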

Einstein Summation

Efficient tensor operations using Einstein summation notation.

def einsum(subscripts, *operands, out=None, dtype=None, order='K', casting='safe', optimize=False):
    """
    Evaluates Einstein summation convention on operands.
    
    Parameters:
    - subscripts: str, specifies subscripts for summation
    - operands: list of array_like, arrays for operation
    - out: ndarray, optional output array
    - optimize: {False, True, 'greedy', 'optimal'}, optimization strategy
    
    Returns:
    cupy.ndarray: Calculation based on Einstein summation convention
    """

def einsum_path(subscripts, *operands, optimize='greedy'):
    """Evaluate the lowest-cost contraction order for an einsum expression."""
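einsum_path is useful when the same contraction is evaluated many times: the path is computed once and reused, avoiding repeated optimization overhead. A NumPy sketch (the API is NumPy-compatible; use cupy on a GPU):

```python
import numpy as np  # CuPy-compatible API; use `import cupy as np` on a GPU

rng = np.random.default_rng(3)
A = rng.standard_normal((10, 40))
B = rng.standard_normal((40, 200))
C = rng.standard_normal((200, 5))

# einsum_path returns the contraction order plus a human-readable report
path, report = np.einsum_path('ij,jk,kl->il', A, B, C, optimize='greedy')

# The precomputed path can be passed back to einsum on every subsequent call
result = np.einsum('ij,jk,kl->il', A, B, C, optimize=path)
```

Contraction order matters here: multiplying (A B) first costs far fewer flops than (B C) first for these shapes, and the greedy optimizer picks that up automatically.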

Usage Examples

Basic Matrix Operations

import cupy as cp

# Create matrices
A = cp.random.random((1000, 1000))
B = cp.random.random((1000, 500))
x = cp.random.random(1000)

# Matrix multiplication
C = cp.dot(A, A.T)  # A @ A.T
D = cp.matmul(A, B)  # Matrix multiplication

# Vector operations
inner_prod = cp.inner(x, x)
outer_prod = cp.outer(x, x)

Solving Linear Systems

# Solve linear system Ax = b
A = cp.random.random((100, 100))
b = cp.random.random(100)

# Direct solution
x = cp.linalg.solve(A, b)

# Verify solution
residual = cp.linalg.norm(cp.dot(A, x) - b)

# Least squares for overdetermined system
A_over = cp.random.random((200, 100))  
b_over = cp.random.random(200)
x_lstsq, residuals, rank, s = cp.linalg.lstsq(A_over, b_over, rcond=None)

Matrix Decompositions

# Singular Value Decomposition
A = cp.random.random((100, 50))
U, s, Vh = cp.linalg.svd(A, full_matrices=False)

# Reconstruct matrix
A_reconstructed = cp.dot(U * s, Vh)

# Eigendecomposition
symmetric_matrix = cp.random.random((100, 100))
symmetric_matrix = (symmetric_matrix + symmetric_matrix.T) / 2

eigenvals, eigenvecs = cp.linalg.eigh(symmetric_matrix)

# QR decomposition  
Q, R = cp.linalg.qr(A)

Advanced Linear Algebra

# Matrix norms and condition numbers
A = cp.random.random((100, 100))

frobenius_norm = cp.linalg.norm(A, 'fro')
spectral_norm = cp.linalg.norm(A, 2)
condition_number = cp.linalg.cond(A)

# Determinant and trace
det_A = cp.linalg.det(A)
trace_A = cp.trace(A)

# Matrix inverse and pseudoinverse
A_inv = cp.linalg.inv(A)
A_pinv = cp.linalg.pinv(A)

# Verify inverse
identity_check = cp.allclose(cp.dot(A, A_inv), cp.eye(100))

Einstein Summation Examples

# Matrix multiplication using einsum
A = cp.random.random((10, 15))
B = cp.random.random((15, 20))
C = cp.einsum('ij,jk->ik', A, B)  # Equivalent to cp.dot(A, B)

# Batch matrix multiplication
batch_A = cp.random.random((5, 10, 15))
batch_B = cp.random.random((5, 15, 20))
batch_C = cp.einsum('bij,bjk->bik', batch_A, batch_B)

# Trace of product
A = cp.random.random((100, 100))
B = cp.random.random((100, 100))
trace_AB = cp.einsum('ij,ji->', A, B)  # Trace of A @ B

# Complex tensor operations
tensor = cp.random.random((5, 10, 15, 20))
result = cp.einsum('ijkl,jl->ik', tensor, cp.random.random((10, 20)))

Install with Tessl CLI

npx tessl i tessl/pypi-cupy-cuda110
