# Linear Algebra Operations

Matrix operations including dot products, matrix multiplication, tensor operations, and other linear algebra functions optimized for sparse matrices. These operations leverage sparsity structure for computational efficiency.

## Capabilities

### Matrix Products

Core matrix multiplication and dot product operations optimized for sparse matrices.

```python { .api }
def dot(a, b):
    """
    Dot product of two sparse arrays.

    For 1-D arrays, computes the inner product. For 2-D arrays, computes
    matrix multiplication. For higher dimensions, sums products over the
    last axis of a and the second-to-last axis of b.

    Parameters:
    - a: sparse array, first input
    - b: sparse array, second input

    Returns:
    Sparse array result of the dot product
    """

def matmul(x1, x2):
    """
    Matrix multiplication of sparse arrays.

    Implements the @ operator for sparse matrices. Follows NumPy
    broadcasting rules for batch matrix multiplication.

    Parameters:
    - x1: sparse array, first matrix operand
    - x2: sparse array, second matrix operand

    Returns:
    Sparse array result of the matrix multiplication
    """

def vecdot(x1, x2, axis=-1):
    """
    Vector dot product along the specified axis.

    Computes the sum of element-wise products along the specified axis,
    treating the input arrays as collections of vectors.

    Parameters:
    - x1: sparse array, first vector array
    - x2: sparse array, second vector array
    - axis: int, axis along which to compute the dot product

    Returns:
    Sparse array with dot products along the specified axis
    """
```
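Since `dot` follows the NumPy contraction rule, its semantics can be sanity-checked with plain dense NumPy arrays (NumPy only; this assumes nothing about the sparse backend beyond the documented behavior):

```python
import numpy as np

# 1-D x 1-D: inner product
a = np.array([1, 0, 3])
b = np.array([2, 1, 0])
assert np.dot(a, b) == 2  # 1*2 + 0*1 + 3*0

# N-D: sum over the last axis of a and the second-to-last axis of b
A = np.arange(24).reshape(2, 3, 4)
B = np.arange(20).reshape(4, 5)
C = np.dot(A, B)
assert C.shape == (2, 3, 5)
# each entry is a contraction over the shared length-4 axis
assert C[0, 0, 0] == sum(A[0, 0, k] * B[k, 0] for k in range(4))
```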

### Tensor Operations

Advanced tensor operations for multi-dimensional sparse arrays.

```python { .api }
def outer(a, b):
    """
    Outer product of two sparse arrays.

    Computes the outer product of the flattened input arrays, producing a
    matrix where result[i, j] = a.flat[i] * b.flat[j].

    Parameters:
    - a: sparse array, first input vector
    - b: sparse array, second input vector

    Returns:
    Sparse 2-D array containing the outer product
    """

def kron(a, b):
    """
    Kronecker product of two sparse arrays.

    Computes the Kronecker product, also known as the tensor product. The
    result has shape (a.shape[0]*b.shape[0], a.shape[1]*b.shape[1], ...).

    Parameters:
    - a: sparse array, first input
    - b: sparse array, second input

    Returns:
    Sparse array containing the Kronecker product
    """

def tensordot(a, b, axes=2):
    """
    Tensor dot product along specified axes.

    Computes a tensor contraction by summing products over the specified
    axes. More general than matrix multiplication.

    Parameters:
    - a: sparse array, first tensor
    - b: sparse array, second tensor
    - axes: int or sequence, axes to contract over
      - int: contract over the last N axes of a and the first N axes of b
      - sequence: explicit axis pairs to contract

    Returns:
    Sparse array result of the tensor contraction
    """

def einsum(subscripts, *operands):
    """
    Einstein summation over sparse arrays.

    Computes tensor contractions using Einstein notation, providing a
    flexible way to specify multi-dimensional array operations.

    Parameters:
    - subscripts: str, Einstein summation subscripts (e.g., 'ij,jk->ik')
    - operands: sparse arrays, input tensors for the operation

    Returns:
    Sparse array result of the Einstein summation
    """
```
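Both forms of the `axes` argument, and the equivalent `einsum` spelling, follow NumPy's contraction semantics, so they can be checked against dense NumPy:

```python
import numpy as np

A = np.arange(60).reshape(3, 4, 5)
B = np.arange(210).reshape(5, 6, 7)

# axes=1: contract the last axis of A with the first axis of B
r1 = np.tensordot(A, B, axes=1)
# explicit pairs: the same contraction spelled out
r2 = np.tensordot(A, B, axes=([2], [0]))
assert r1.shape == (3, 4, 6, 7)
assert np.array_equal(r1, r2)

# einsum expresses the same contraction in index notation
r3 = np.einsum('ijk,klm->ijlm', A, B)
assert np.array_equal(r1, r3)
```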

### Matrix Utilities

Utility functions for matrix operations and transformations.

```python { .api }
def matrix_transpose(x):
    """
    Transpose the last two dimensions of a sparse array.

    For matrices (2-D), equivalent to the standard transpose. For
    higher-dimensional arrays, transposes only the last two axes.

    Parameters:
    - x: sparse array, input with at least 2 dimensions

    Returns:
    Sparse array with the last two dimensions transposed
    """
```
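Transposing only the last two axes is the same operation as `swapaxes(-2, -1)`; a quick dense sketch of that semantics:

```python
import numpy as np

x = np.arange(24).reshape(2, 3, 4)

# swapping the last two axes leaves leading (batch) axes untouched
mt = np.swapaxes(x, -2, -1)
assert mt.shape == (2, 4, 3)
assert mt[1, 0, 2] == x[1, 2, 0]
```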

## Usage Examples

### Basic Matrix Operations

```python
import sparse
import numpy as np

# Create sparse matrices
A = sparse.COO.from_numpy(np.array([[1, 0, 2], [0, 3, 0], [4, 0, 5]]))
B = sparse.COO.from_numpy(np.array([[2, 1], [0, 1], [1, 0]]))

# Matrix multiplication
C = sparse.matmul(A, B)   # Matrix product A @ B
C_alt = sparse.dot(A, B)  # Equivalent using dot

print(f"A shape: {A.shape}, B shape: {B.shape}")
print(f"Result shape: {C.shape}")
print(f"Result nnz: {C.nnz}")
```

### Vector Operations

```python
# Vector dot products
v1 = sparse.COO.from_numpy(np.array([1, 0, 3, 0, 2]))
v2 = sparse.COO.from_numpy(np.array([2, 1, 0, 1, 0]))

# Inner product of vectors
inner_prod = sparse.dot(v1, v2)
print(f"Inner product: {inner_prod.todense()}")  # Scalar result

# Outer product of vectors
outer_prod = sparse.outer(v1, v2)
print(f"Outer product shape: {outer_prod.shape}")
print(f"Outer product nnz: {outer_prod.nnz}")
```

### Batch Matrix Operations

```python
# Batch matrix multiplication with 3-D arrays
batch_A = sparse.random((5, 10, 20), density=0.1)  # 5 matrices of 10x20
batch_B = sparse.random((5, 20, 15), density=0.1)  # 5 matrices of 20x15

# Batch matrix multiplication
batch_C = sparse.matmul(batch_A, batch_B)  # Result: 5 matrices of 10x15
print(f"Batch result shape: {batch_C.shape}")

# Vector dot product along a specific axis
vectors = sparse.random((100, 50), density=0.05)  # 100 vectors of length 50
weights = sparse.ones((50,))  # Weight vector

# Compute the weighted sum for each vector
weighted_sums = sparse.vecdot(vectors, weights, axis=1)
print(f"Weighted sums shape: {weighted_sums.shape}")
```

### Advanced Tensor Operations

```python
# Kronecker product
A_small = sparse.COO.from_numpy(np.array([[1, 2], [3, 0]]))
B_small = sparse.COO.from_numpy(np.array([[0, 1], [1, 1]]))

kron_prod = sparse.kron(A_small, B_small)
print(f"Kronecker product shape: {kron_prod.shape}")  # (4, 4)
print(f"Kronecker product nnz: {kron_prod.nnz}")

# Tensor dot product with different contraction modes
tensor_A = sparse.random((3, 4, 5), density=0.2)
tensor_B = sparse.random((5, 6, 7), density=0.2)

# Contract over one axis pair (note: the default is axes=2)
result_1 = sparse.tensordot(tensor_A, tensor_B, axes=1)  # Shape: (3, 4, 6, 7)

# Contract over explicitly specified axes
result_2 = sparse.tensordot(tensor_A, tensor_B, axes=([2], [0]))  # Same as above

print(f"Tensor contraction shape: {result_1.shape}")
```

### Einstein Summation Examples

```python
# Matrix multiplication using einsum
A = sparse.random((50, 30), density=0.1)
B = sparse.random((30, 40), density=0.1)

# Matrix multiply: 'ij,jk->ik'
C_einsum = sparse.einsum('ij,jk->ik', A, B)
C_matmul = sparse.matmul(A, B)

print(f"Results are equivalent: {np.allclose(C_einsum.todense(), C_matmul.todense())}")

# Batch inner product: 'bi,bi->b'
batch_vectors = sparse.random((10, 100), density=0.05)
inner_products = sparse.einsum('bi,bi->b', batch_vectors, batch_vectors)
print(f"Batch inner products shape: {inner_products.shape}")

# Trace of a matrix: 'ii->'
square_matrix = sparse.random((20, 20), density=0.1)
trace = sparse.einsum('ii->', square_matrix)
print(f"Matrix trace: {trace.todense()}")
```

### Matrix Transformations

```python
# Matrix transpose operations
matrix_3d = sparse.random((5, 10, 15), density=0.1)

# Standard transpose (reverses all axes)
full_transpose = matrix_3d.transpose()
print(f"Full transpose shape: {full_transpose.shape}")  # (15, 10, 5)

# Matrix transpose (last two axes only); avoid shadowing the function name
mat_T = sparse.matrix_transpose(matrix_3d)
print(f"Matrix transpose shape: {mat_T.shape}")  # (5, 15, 10)

# Manual axis specification
custom_transpose = matrix_3d.transpose((0, 2, 1))  # Same as matrix_transpose
print(f"Custom transpose shape: {custom_transpose.shape}")  # (5, 15, 10)
```

### Linear Algebra with Mixed Dense/Sparse

```python
# Operations between sparse and dense arrays
sparse_matrix = sparse.random((100, 50), density=0.05)
dense_vector = np.random.randn(50)

# Matrix-vector product
result = sparse.dot(sparse_matrix, dense_vector)
print(f"Matrix-vector result shape: {result.shape}")
print(f"Result is sparse: {isinstance(result, sparse.SparseArray)}")

# Mixed tensor operations
dense_tensor = np.random.randn(3, 50, 4)
sparse_tensor = sparse.random((4, 10), density=0.1)

mixed_result = sparse.tensordot(dense_tensor, sparse_tensor, axes=([2], [0]))
print(f"Mixed tensor result shape: {mixed_result.shape}")  # (3, 50, 10)
```

## Performance Considerations

### Sparse Matrix Multiplication Efficiency

- **COO format**: Good general-purpose choice for matrix multiplication
- **GCXS format**: Better suited for repeated matrix-vector products
- **Density**: Operations are most efficient when both operands are sparse and low-density
- **Structure**: Block-structured sparsity patterns tend to perform best

### Memory Usage

- **Intermediate results**: May have different sparsity than the inputs
- **Format conversion**: Inputs may be converted automatically to a format better suited to the operation
- **Output density**: Matrix products are often denser than their input matrices (fill-in)
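The fill-in effect behind the last point can be illustrated with dense NumPy arrays standing in for sparse ones (an illustrative sketch, not the sparse API; `random_sparse` and `density` are hypothetical helpers defined here):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sparse(shape, frac, rng):
    """Dense array with roughly `frac` fraction of nonzero entries."""
    mask = rng.random(shape) < frac
    return np.where(mask, rng.random(shape), 0.0)

def density(m):
    return np.count_nonzero(m) / m.size

A = random_sparse((200, 200), 0.05, rng)
B = random_sparse((200, 200), 0.05, rng)
C = A @ B

# the product is noticeably denser than either factor ("fill-in")
print(f"A: {density(A):.3f}, B: {density(B):.3f}, A @ B: {density(C):.3f}")
```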

### Optimization Tips

```python
# Convert to GCXS for repeated matrix operations
sparse_matrix = sparse.random((1000, 1000), density=0.01)  # example operands
vector_batch = [np.random.randn(1000) for _ in range(10)]

sparse_gcxs = sparse_matrix.asformat('gcxs')
for vector in vector_batch:
    result = sparse.dot(sparse_gcxs, vector)  # More efficient than repeated COO products

# Use appropriate einsum subscripts for clarity and optimization
# 'ij,jk->ik' is optimized as matrix multiplication
# 'ij,ik->jk' may be less efficient than transpose + matmul
```