# BLAS Operations

High-performance Basic Linear Algebra Subprograms (BLAS) routines for vectors and matrices. Provides optimized implementations of standard linear algebra operations built on netlib-java.

**Note**: The `BLAS` object is marked `private[spark]` in the source code, but it is essential functionality for the MLlib Local Library and is exercised by the test suite, making it part of the effective public API for standalone usage.
## Capabilities

### Level 1 BLAS (Vector Operations)

Fundamental vector operations that form the building blocks for more complex linear algebra computations.

```scala { .api }
object BLAS {
  /** Scaled vector addition: y += a * x */
  def axpy(a: Double, x: Vector, y: Vector): Unit

  /** Dot product: x · y */
  def dot(x: Vector, y: Vector): Double

  /** Vector copy: y = x */
  def copy(x: Vector, y: Vector): Unit

  /** Vector scaling: x = a * x */
  def scal(a: Double, x: Vector): Unit
}
```
Usage examples:

```scala
import org.apache.spark.ml.linalg._

// Create vectors
val x = Vectors.dense(1.0, 2.0, 3.0)
val y = Vectors.dense(4.0, 5.0, 6.0)

// Vector operations
val dotProduct = BLAS.dot(x, y)  // 32.0 (1*4 + 2*5 + 3*6)

// In-place operations (modify y)
BLAS.axpy(2.0, x, y)  // y = y + 2*x = [6.0, 9.0, 12.0]
BLAS.scal(0.5, x)     // x = 0.5*x = [0.5, 1.0, 1.5]

// Copy vector
val z = Vectors.zeros(3)
BLAS.copy(x, z)  // z = x
```
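For intuition, the semantics of these Level 1 routines can be sketched in plain Scala over raw arrays — a simplified dense-only reference model (the object and names below are illustrative, not Spark's internals):

```scala
// Dense-only reference semantics for the Level 1 routines (illustrative only).
object Level1Reference {
  /** y += a * x */
  def axpy(a: Double, x: Array[Double], y: Array[Double]): Unit =
    for (i <- x.indices) y(i) += a * x(i)

  /** x . y */
  def dot(x: Array[Double], y: Array[Double]): Double =
    x.indices.foldLeft(0.0)((s, i) => s + x(i) * y(i))

  /** x = a * x */
  def scal(a: Double, x: Array[Double]): Unit =
    for (i <- x.indices) x(i) *= a
}
```

Matching the example above, `dot(Array(1.0, 2.0, 3.0), Array(4.0, 5.0, 6.0))` yields 32.0.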
### Level 2 BLAS (Matrix-Vector Operations)

Matrix-vector operations optimized for different matrix and vector type combinations.

```scala { .api }
object BLAS {
  /** General matrix-vector multiply: y := alpha * A * x + beta * y */
  def gemv(alpha: Double, A: Matrix, x: Vector, beta: Double, y: DenseVector): Unit

  /** Symmetric packed matrix-vector multiply: y := alpha * A * x + beta * y */
  def dspmv(n: Int, alpha: Double, A: DenseVector, x: DenseVector, beta: Double, y: DenseVector): Unit

  /** Symmetric rank-1 update with packed matrix: U += alpha * v * v^T */
  def spr(alpha: Double, v: Vector, U: DenseVector): Unit
  def spr(alpha: Double, v: Vector, U: Array[Double]): Unit

  /** Symmetric rank-1 update: A := alpha * x * x^T + A */
  def syr(alpha: Double, x: Vector, A: DenseMatrix): Unit
}
```
Usage examples:

```scala
import org.apache.spark.ml.linalg._

// Matrix-vector multiplication (values are column-major)
val A = Matrices.dense(2, 2, Array(1.0, 2.0, 3.0, 4.0))  // [[1.0, 3.0], [2.0, 4.0]]
val x = Vectors.dense(1.0, 2.0)
val y = new DenseVector(Array(0.0, 0.0))

// y = 1.0 * A * x + 0.0 * y
BLAS.gemv(1.0, A, x, 0.0, y)  // y = [7.0, 10.0]

// Symmetric rank-1 update
val symmetric = DenseMatrix.zeros(2, 2)
val v = Vectors.dense(1.0, 2.0)
BLAS.syr(1.0, v, symmetric)  // symmetric = symmetric + v * v^T
```
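The `[7.0, 10.0]` result follows from the column-major dense layout, where element (i, j) of an m-by-n matrix sits at `values(i + j * m)`. A minimal column-major gemv sketch (illustrative, not Spark's implementation):

```scala
// y := alpha * A * x + beta * y, with A an m-by-n column-major array.
def gemvColMajor(alpha: Double, m: Int, n: Int, a: Array[Double],
                 x: Array[Double], beta: Double, y: Array[Double]): Unit = {
  for (i <- 0 until m) y(i) *= beta
  for (j <- 0 until n; i <- 0 until m)
    y(i) += alpha * a(i + j * m) * x(j)
}
```

With `a = Array(1.0, 2.0, 3.0, 4.0)` (i.e. A = [[1, 3], [2, 4]]) and x = [1, 2], this reproduces y = [7.0, 10.0].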
### Level 3 BLAS (Matrix-Matrix Operations)

High-performance matrix-matrix operations supporting various matrix types and layouts.

```scala { .api }
object BLAS {
  /** General matrix multiply: C := alpha * A * B + beta * C */
  def gemm(alpha: Double, A: Matrix, B: DenseMatrix, beta: Double, C: DenseMatrix): Unit
}
```
Usage examples:

```scala
import org.apache.spark.ml.linalg._

// Matrix-matrix multiplication
val A = Matrices.dense(2, 3, Array(1.0, 2.0, 3.0, 4.0, 5.0, 6.0))
val B = DenseMatrix.ones(3, 2)
val C = DenseMatrix.zeros(2, 2)

// C = 1.0 * A * B + 0.0 * C
BLAS.gemm(1.0, A, B, 0.0, C)

// Support for sparse matrices (CSC format: colPtrs, rowIndices, values)
val sparseA = Matrices.sparse(2, 3, Array(0, 2, 3, 4), Array(0, 1, 1, 0), Array(1.0, 2.0, 3.0, 4.0))
BLAS.gemm(1.0, sparseA, B, 1.0, C)  // C = sparseA * B + C
```
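The `Matrices.sparse` arguments follow compressed sparse column (CSC) layout: the nonzeros of column j occupy positions `colPtrs(j) until colPtrs(j + 1)` of `rowIndices` and `values`. A small decoder illustrating the layout (an illustrative helper, not a Spark API):

```scala
// Expand a CSC triple (colPtrs, rowIndices, values) into the
// column-major dense array of an m-by-n matrix.
def cscToDense(m: Int, n: Int, colPtrs: Array[Int],
               rowIndices: Array[Int], values: Array[Double]): Array[Double] = {
  val dense = new Array[Double](m * n)
  for (j <- 0 until n; k <- colPtrs(j) until colPtrs(j + 1))
    dense(rowIndices(k) + j * m) = values(k)
  dense
}
```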
### Advanced Operations

Specialized BLAS operations for specific matrix formats and use cases.

#### Packed Matrix Operations

Operations on matrices stored in packed format for symmetric matrices.

```scala
import org.apache.spark.ml.linalg._

// Symmetric rank-1 update on packed matrix
val n = 3
val packedSize = n * (n + 1) / 2  // Upper triangular packed storage
val packedMatrix = new Array[Double](packedSize)
val vector = Vectors.dense(1.0, 2.0, 3.0)

// Update packed matrix: U += alpha * v * v^T
BLAS.spr(1.0, vector, packedMatrix)
```
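In upper-triangular packed storage, entry (i, j) with i <= j lives at index `i + j * (j + 1) / 2`, which is where the `n * (n + 1) / 2` size comes from. The update itself can be sketched as (illustrative, not Spark's implementation):

```scala
// U += alpha * v * v^T over the upper triangle, packed column by column.
def sprPacked(alpha: Double, v: Array[Double], u: Array[Double]): Unit = {
  val n = v.length
  for (j <- 0 until n; i <- 0 to j)
    u(i + j * (j + 1) / 2) += alpha * v(i) * v(j)
}
```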
#### Optimized Vector Type Handling

BLAS operations automatically optimize based on vector types (dense/sparse combinations).

```scala
import org.apache.spark.ml.linalg._

val dense = Vectors.dense(1.0, 2.0, 3.0, 4.0)
val sparse = Vectors.sparse(4, Array(0, 3), Array(1.0, 4.0))

// All combinations are optimized internally
val dotDenseDense = BLAS.dot(dense, dense)
val dotDenseSparse = BLAS.dot(dense, sparse)
val dotSparseSparse = BLAS.dot(sparse, sparse)

// In-place operations work with mixed types
val result = Vectors.zeros(4).toDense
BLAS.axpy(2.0, sparse, result)  // Efficiently handles sparse-to-dense
```
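For the sparse-sparse case, a typical strategy (sketched here, not Spark's exact code) is a two-pointer merge over the sorted index arrays, so the cost is proportional to the number of stored entries rather than the vector length:

```scala
// Dot product of two sparse vectors given as (sorted indices, values) pairs.
def dotSparseSparse(xIdx: Array[Int], xVal: Array[Double],
                    yIdx: Array[Int], yVal: Array[Double]): Double = {
  var i = 0; var j = 0; var sum = 0.0
  while (i < xIdx.length && j < yIdx.length) {
    if (xIdx(i) == yIdx(j)) { sum += xVal(i) * yVal(j); i += 1; j += 1 }
    else if (xIdx(i) < yIdx(j)) i += 1
    else j += 1
  }
  sum
}
```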
## Performance Notes

- **Vector Operations**: Level 1 BLAS uses the pure-Java implementation (F2jBLAS), since JNI call overhead outweighs native speedups for O(n) operations on small to medium vectors
- **Matrix Operations**: Level 2/3 BLAS use native implementations (via netlib-java) when available, falling back to Java implementations otherwise
- **Sparse Optimization**: Operations automatically choose algorithms suited to the matrix/vector sparsity patterns
- **Memory Layout**: Operations respect matrix layout (column-major vs row-major) for cache efficiency
- **Type Dispatch**: Runtime dispatch selects the implementation based on the actual vector/matrix types (dense/sparse combinations)
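The type-dispatch pattern can be sketched as a match on the concrete types (a hypothetical `Vec` hierarchy for illustration; Spark matches on `DenseVector`/`SparseVector`):

```scala
// Hypothetical vector hierarchy, used only to illustrate runtime dispatch.
sealed trait Vec
final case class Dense(values: Array[Double]) extends Vec
final case class Sparse(size: Int, indices: Array[Int], values: Array[Double]) extends Vec

def dot(x: Vec, y: Vec): Double = (x, y) match {
  case (Dense(a), Dense(b)) =>
    a.indices.foldLeft(0.0)((s, i) => s + a(i) * b(i))
  case (Sparse(_, idx, v), Dense(b)) =>
    // Only touch the stored entries of the sparse side.
    idx.indices.foldLeft(0.0)((s, k) => s + v(k) * b(idx(k)))
  case (d: Dense, s: Sparse) =>
    dot(s, d)  // symmetric case reuses the sparse-dense path
  case (Sparse(_, ai, av), Sparse(_, bi, bv)) =>
    // Two-pointer merge over the sorted index arrays.
    var i = 0; var j = 0; var sum = 0.0
    while (i < ai.length && j < bi.length) {
      if (ai(i) == bi(j)) { sum += av(i) * bv(j); i += 1; j += 1 }
      else if (ai(i) < bi(j)) i += 1
      else j += 1
    }
    sum
}
```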
## Type Definitions

```scala { .api }
object BLAS extends Serializable {
  // Level 1 BLAS (Vector operations)
  def axpy(a: Double, x: Vector, y: Vector): Unit
  def dot(x: Vector, y: Vector): Double
  def copy(x: Vector, y: Vector): Unit
  def scal(a: Double, x: Vector): Unit

  // Level 2 BLAS (Matrix-vector operations)
  def gemv(alpha: Double, A: Matrix, x: Vector, beta: Double, y: DenseVector): Unit
  def dspmv(n: Int, alpha: Double, A: DenseVector, x: DenseVector, beta: Double, y: DenseVector): Unit
  def spr(alpha: Double, v: Vector, U: DenseVector): Unit
  def spr(alpha: Double, v: Vector, U: Array[Double]): Unit
  def syr(alpha: Double, x: Vector, A: DenseMatrix): Unit

  // Level 3 BLAS (Matrix-matrix operations)
  def gemm(alpha: Double, A: Matrix, B: DenseMatrix, beta: Double, C: DenseMatrix): Unit
}
```