# @tensorflow/tfjs-backend-cpu

A high-performance vanilla JavaScript backend for TensorFlow.js that provides CPU-based tensor operations with minimal external dependencies. This backend serves as the default fallback for TensorFlow.js and can run in any JavaScript environment, including browsers, Node.js, and web workers.

## Package Information

```bash
npm install @tensorflow/tfjs-backend-cpu
```

- **Version**: 4.22.0
- **Bundle Size**: Optimized for production with tree-shaking support
- **Platform Support**: Browser, Node.js, Web Workers
- **Dependencies**: Minimal - requires `seedrandom` at runtime and `@tensorflow/tfjs-core` as a peer dependency

## Core Imports

### TypeScript/ES Modules

```typescript { .api }
// Main import - automatically registers CPU backend and all kernels
import * as tf from '@tensorflow/tfjs-backend-cpu';

// Base import - exports only core classes without auto-registration
import { MathBackendCPU, shared, version_cpu } from '@tensorflow/tfjs-backend-cpu/base';
```

### CommonJS

```javascript { .api }
// Main import with auto-registration
const tf = require('@tensorflow/tfjs-backend-cpu');

// Base import for manual setup
const { MathBackendCPU, shared, version_cpu } = require('@tensorflow/tfjs-backend-cpu/base');
```

## Basic Usage

### Automatic Backend Setup

```typescript { .api }
import '@tensorflow/tfjs-backend-cpu';
import * as tf from '@tensorflow/tfjs-core';

// CPU backend is automatically registered and available
const tensor = tf.tensor2d([[1, 2], [3, 4]]);
const result = tensor.add(tf.scalar(10));
console.log(await result.data()); // [11, 12, 13, 14]
```

### Manual Backend Management

```typescript { .api }
import { MathBackendCPU } from '@tensorflow/tfjs-backend-cpu/base';
import * as tf from '@tensorflow/tfjs-core';

// Create and register backend manually
const backend = new MathBackendCPU();
tf.registerBackend('cpu', () => backend, 1 /* priority */);

// Set as active backend
await tf.setBackend('cpu');

// Use backend directly for advanced operations
const dataId = backend.write(new Float32Array([1, 2, 3, 4]), [2, 2], 'float32');
const tensorInfo = { dataId, shape: [2, 2], dtype: 'float32' as const };
const buffer = backend.bufferSync(tensorInfo);
```

## Architecture

### Core Components

The CPU backend is built on three main architectural pillars:

1. **[MathBackendCPU Class](./backend-cpu.md)** - The primary backend implementation
2. **[Shared Kernel Implementations](./shared-kernels.md)** - Reusable operation implementations
3. **Automatic Kernel Registration** - 168+ pre-configured operations
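The registration pillar can be pictured as a lookup table keyed by kernel and backend name. The sketch below is a toy model of that idea, not tfjs-core's actual registry; all `toy*` names are illustrative:

```typescript
// Toy model of kernel registration: conceptually, tfjs-core keeps a registry
// keyed by kernel name + backend name, and importing the CPU backend package
// fills it with one entry per supported op. All names here are illustrative.
type ToyKernelFunc = (inputs: Float32Array[]) => Float32Array;

const toyRegistry = new Map<string, ToyKernelFunc>();

function toyRegisterKernel(
  kernelName: string, backendName: string, fn: ToyKernelFunc
): void {
  toyRegistry.set(`${kernelName}_${backendName}`, fn);
}

function toyGetKernel(
  kernelName: string, backendName: string
): ToyKernelFunc | undefined {
  return toyRegistry.get(`${kernelName}_${backendName}`);
}

// One of the many registrations the package performs on import:
toyRegisterKernel('Add', 'cpu', ([a, b]) => a.map((v, i) => v + b[i]));

const addKernel = toyGetKernel('Add', 'cpu')!;
console.log(Array.from(addKernel([new Float32Array([1, 2]), new Float32Array([3, 4])])));
// [4, 6]
```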

### Data Flow

```typescript { .api }
// Data flow example showing backend interaction
import { MathBackendCPU } from '@tensorflow/tfjs-backend-cpu/base';

const backend = new MathBackendCPU();

// 1. Write data to backend storage
const dataId = backend.write(
  new Float32Array([1, 2, 3, 4]), // values
  [2, 2],                         // shape
  'float32'                       // dtype
);

// 2. Create tensor info
const tensorInfo = backend.makeTensorInfo([2, 2], 'float32');

// 3. Read data (sync or async)
const syncData = backend.readSync(dataId);
const asyncData = await backend.read(dataId);

// 4. Memory management
backend.incRef(dataId);      // increment reference count
backend.decRef(dataId);      // decrement reference count
backend.disposeData(dataId); // cleanup when refCount reaches 0
```
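The write/read/refCount lifecycle above can be sketched with a toy reference-counted store. `ToyDataStorage` and its methods are illustrative only, not the package's API:

```typescript
// Toy sketch of the reference-counted storage model behind the CPU backend.
// ToyDataStorage is illustrative; the real backend's DataStorage differs.
type DataId = object;

class ToyDataStorage {
  private store = new Map<DataId, { values: Float32Array; refCount: number }>();

  write(values: Float32Array): DataId {
    const dataId = {}; // opaque handle, like the backend's DataId
    this.store.set(dataId, { values, refCount: 1 });
    return dataId;
  }

  readSync(dataId: DataId): Float32Array {
    return this.store.get(dataId)!.values;
  }

  incRef(dataId: DataId): void { this.store.get(dataId)!.refCount++; }
  decRef(dataId: DataId): void { this.store.get(dataId)!.refCount--; }

  // Frees the buffer only once no other tensor references it.
  disposeData(dataId: DataId): boolean {
    const entry = this.store.get(dataId)!;
    if (entry.refCount <= 1) {
      this.store.delete(dataId);
      return true;  // memory freed
    }
    entry.refCount--;
    return false;   // still referenced elsewhere
  }

  numDataIds(): number { return this.store.size; }
}

const storage = new ToyDataStorage();
const id = storage.write(new Float32Array([1, 2, 3, 4]));
storage.incRef(id);                   // refCount is now 2
console.log(storage.disposeData(id)); // false - still referenced
console.log(storage.disposeData(id)); // true - freed
console.log(storage.numDataIds());    // 0
```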

## Capabilities

### Mathematical Operations

The CPU backend provides comprehensive mathematical operation support:

```typescript { .api }
import { shared } from '@tensorflow/tfjs-backend-cpu/base';

// Basic arithmetic - optimized implementations
const addResult = shared.addImpl(
  [2, 2],                         // aShape
  [2, 2],                         // bShape
  new Float32Array([1, 2, 3, 4]), // aVals
  new Float32Array([5, 6, 7, 8]), // bVals
  'float32'                       // dtype
); // Returns [Float32Array([6, 8, 10, 12]), [2, 2]]

// Advanced functions
const expResult = shared.expImpl(new Float32Array([0, 1, 2]));
const sqrtResult = shared.sqrtImpl(new Float32Array([1, 4, 9, 16]));
```

**[→ Complete mathematical operations documentation](./shared-kernels.md#mathematical-operations)**
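A minimal, self-contained sketch of the `[resultValues, resultShape]` contract that binary impls follow. It handles same-shape inputs only (the real `addImpl` also broadcasts), and `toyAddImpl` is an illustrative name, not part of the package:

```typescript
// Toy element-wise add following the shared-impl contract:
// takes shapes + flat values, returns [resultValues, resultShape].
// Broadcasting is deliberately omitted in this sketch.
function toyAddImpl(
  aShape: number[],
  bShape: number[],
  aVals: Float32Array,
  bVals: Float32Array
): [Float32Array, number[]] {
  if (aShape.join(',') !== bShape.join(',')) {
    throw new Error('toyAddImpl: broadcasting not implemented in this sketch');
  }
  const out = new Float32Array(aVals.length);
  for (let i = 0; i < aVals.length; i++) {
    out[i] = aVals[i] + bVals[i];
  }
  return [out, aShape.slice()];
}

const [vals, shape] = toyAddImpl(
  [2, 2], [2, 2],
  new Float32Array([1, 2, 3, 4]),
  new Float32Array([5, 6, 7, 8])
);
console.log(Array.from(vals)); // [6, 8, 10, 12]
console.log(shape);            // [2, 2]
```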

### Memory Management

Advanced memory management with reference counting:

```typescript { .api }
import { MathBackendCPU } from '@tensorflow/tfjs-backend-cpu/base';

const backend = new MathBackendCPU();
const dataId = backend.write(new Float32Array([1, 2, 3]), [3], 'float32');

// Reference counting
console.log(backend.refCount(dataId)); // 1
backend.incRef(dataId);
console.log(backend.refCount(dataId)); // 2

// Memory cleanup
backend.decRef(dataId);
const disposed = backend.disposeData(dataId); // true if memory freed
console.log(backend.numDataIds()); // total number of stored tensors
```

**[→ Complete memory management documentation](./backend-cpu.md#memory-management-methods)**

### Tensor Operations

Full tensor lifecycle management:

```typescript { .api }
import { MathBackendCPU } from '@tensorflow/tfjs-backend-cpu/base';

const backend = new MathBackendCPU();

// Create tensor with automatic memory management
const tensorInfo = backend.makeTensorInfo(
  [3, 3],                     // shape
  'float32',                  // dtype
  new Float32Array(9).fill(0) // values (optional)
);

// Buffer operations for direct data access
const buffer = backend.bufferSync(tensorInfo);
buffer.set(1.0, 0, 0); // set value at [0, 0]
buffer.set(2.0, 1, 1); // set value at [1, 1]

// Create output tensor from computation
const outputTensor = backend.makeOutput(
  new Float32Array([1, 4, 9]), // computed values
  [3],                         // output shape
  'float32'                    // dtype
);
```

**[→ Complete tensor operations documentation](./backend-cpu.md#core-methods)**

### Array Manipulation

Comprehensive array manipulation operations:

```typescript { .api }
import { shared } from '@tensorflow/tfjs-backend-cpu/base';

// Concatenation with automatic shape inference
const concatResult = shared.concatImpl([
  new Float32Array([1, 2]), // tensor1 values
  new Float32Array([3, 4])  // tensor2 values
], [
  [2],                      // tensor1 shape
  [2]                       // tensor2 shape
], 'float32', 0); // axis

// Slicing operations
const sliceResult = shared.sliceImpl(
  new Float32Array([1, 2, 3, 4, 5, 6]), // input values
  [2, 3],                               // input shape
  [0, 1],                               // begin indices
  [2, 2]                                // slice sizes
);

// Transposition
const transposeResult = shared.transposeImpl(
  new Float32Array([1, 2, 3, 4]), // input values
  [2, 2],                         // input shape
  [1, 0]                          // permutation
);
```

**[→ Complete array manipulation documentation](./shared-kernels.md#array-manipulation)**
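For intuition, here is the index arithmetic of a 2-D transpose over flat row-major values, which the real `transposeImpl` generalizes to arbitrary rank and permutations. `toyTranspose2d` is an illustrative name, not part of the package:

```typescript
// Toy 2-D transpose over a flat row-major array. The real transposeImpl
// handles any rank and any permutation; this sketch fixes the [1, 0] case.
function toyTranspose2d(
  vals: Float32Array,
  shape: [number, number]
): [Float32Array, number[]] {
  const [rows, cols] = shape;
  const out = new Float32Array(vals.length);
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      // element [r, c] of the input becomes [c, r] of the output
      out[c * rows + r] = vals[r * cols + c];
    }
  }
  return [out, [cols, rows]];
}

const [tVals, tShape] = toyTranspose2d(new Float32Array([1, 2, 3, 4]), [2, 2]);
console.log(Array.from(tVals)); // [1, 3, 2, 4]
console.log(tShape);            // [2, 2]
```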

### Performance Utilities

Built-in performance monitoring and optimization:

```typescript { .api }
import { MathBackendCPU, shared } from '@tensorflow/tfjs-backend-cpu/base';

const backend = new MathBackendCPU();

// Performance timing
const timingInfo = await backend.time(() => {
  // Your expensive operation here
  const result = shared.addImpl([1000], [1000],
    new Float32Array(1000).fill(1),
    new Float32Array(1000).fill(2),
    'float32'
  );
});

console.log(`Operation took ${timingInfo.kernelMs}ms`);

// Memory monitoring
const memInfo = backend.memory();
console.log(`Memory usage: ${memInfo.numBytes} bytes (unreliable: ${memInfo.unreliable})`);

// Backend configuration
console.log(`Block size: ${backend.blockSize}`);             // 48
console.log(`Float precision: ${backend.floatPrecision()}`); // 32
console.log(`Machine epsilon: ${backend.epsilon()}`);
```

**[→ Complete performance utilities documentation](./backend-cpu.md#utility-methods)**

## Advanced Usage

### Custom Kernel Development

```typescript { .api }
import { KernelConfig, KernelFunc, registerKernel, TensorInfo } from '@tensorflow/tfjs-core';
import { MathBackendCPU } from '@tensorflow/tfjs-backend-cpu/base';

// Define custom kernel implementation
const customKernelFunc: KernelFunc = ({ inputs, backend, attrs }) => {
  const { x } = inputs as { x: TensorInfo };
  const cpuBackend = backend as MathBackendCPU;

  // Use shared implementations or create custom logic
  const values = cpuBackend.readSync(x.dataId) as Float32Array;
  const result = new Float32Array(values.length);

  // Custom operation logic here
  for (let i = 0; i < values.length; i++) {
    result[i] = values[i] * 2; // Example: double all values
  }

  return cpuBackend.makeOutput(result, x.shape, x.dtype);
};

// Register custom kernel
const customKernelConfig: KernelConfig = {
  kernelName: 'CustomOp',
  backendName: 'cpu',
  kernelFunc: customKernelFunc
};
registerKernel(customKernelConfig);
```

### Shared Implementation Reuse

```typescript { .api }
import { shared } from '@tensorflow/tfjs-backend-cpu/base';
import { TensorInfo } from '@tensorflow/tfjs-core';

// Reuse optimized implementations in other backends
class CustomBackend {
  customAdd(a: TensorInfo, b: TensorInfo): TensorInfo {
    const aVals = this.readSync(a.dataId);
    const bVals = this.readSync(b.dataId);

    // Leverage CPU backend's optimized implementation
    const [resultVals, resultShape] = shared.addImpl(
      a.shape, b.shape, aVals, bVals, a.dtype
    );

    return this.makeOutput(resultVals, resultShape, a.dtype);
  }
}
```

## Version Information

```typescript { .api }
import { version_cpu } from '@tensorflow/tfjs-backend-cpu/base';

console.log(`CPU Backend Version: ${version_cpu}`); // "4.22.0"
```

## Type Definitions

### Core Types

```typescript { .api }
import type {
  DataType,
  TensorInfo,
  BackendValues,
  KernelBackend,
  DataId,
  TypedArray
} from '@tensorflow/tfjs-core';

// Backend-specific types
interface TensorData<D extends DataType> {
  values?: BackendValues;
  dtype: D;
  complexTensorInfos?: { real: TensorInfo, imag: TensorInfo };
  refCount: number;
}

// Binary operation signatures
type SimpleBinaryOperation = (a: number | string, b: number | string) => number;

type SimpleBinaryKernelImpl = (
  aShape: number[],
  bShape: number[],
  aVals: TypedArray | string[],
  bVals: TypedArray | string[],
  dtype: DataType
) => [TypedArray, number[]];
```

## Further Documentation

- **[MathBackendCPU Class](./backend-cpu.md)** - Complete API reference for the main backend class
- **[Shared Kernel Implementations](./shared-kernels.md)** - Detailed documentation of all shared operation implementations

## Package Dependencies

```json
{
  "peerDependencies": {
    "@tensorflow/tfjs-core": "4.22.0"
  },
  "dependencies": {
    "@types/seedrandom": "^2.4.28",
    "seedrandom": "^3.0.5"
  }
}
```