tessl/npm-tensorflow--tfjs-backend-cpu

JavaScript CPU backend implementation for TensorFlow.js enabling machine learning operations in vanilla JavaScript

Describes: pkg:npm/@tensorflow/tfjs-backend-cpu@4.22.x

To install, run

npx @tessl/cli install tessl/npm-tensorflow--tfjs-backend-cpu@4.22.0


@tensorflow/tfjs-backend-cpu

A high-performance vanilla JavaScript backend for TensorFlow.js that provides CPU-based tensor operations without any external dependencies. This backend serves as the default fallback for TensorFlow.js and can run in any JavaScript environment including browsers, Node.js, and web workers.
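
For example, importing the package and selecting the backend through the standard tfjs-core API confirms it is active (a minimal sketch; when no higher-priority backend is registered, tfjs-core falls back to the CPU backend automatically):

import '@tensorflow/tfjs-backend-cpu';
import * as tf from '@tensorflow/tfjs-core';

// Select the CPU backend explicitly and verify it is the active backend
await tf.setBackend('cpu');
console.log(tf.getBackend()); // 'cpu'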

Package Information

npm install @tensorflow/tfjs-backend-cpu
  • Version: 4.22.0
  • Bundle Size: Optimized for production with tree-shaking support
  • Platform Support: Browser, Node.js, Web Workers
  • Dependencies: Minimal - seedrandom at runtime, plus @tensorflow/tfjs-core as a peer dependency

Core Imports

TypeScript/ES Modules

// Main import - automatically registers CPU backend and all kernels
import * as tf from '@tensorflow/tfjs-backend-cpu';

// Base import - exports only core classes without auto-registration
import { MathBackendCPU, shared, version_cpu } from '@tensorflow/tfjs-backend-cpu/base';

// Selective imports for specific functionality
import { MathBackendCPU } from '@tensorflow/tfjs-backend-cpu/base';

CommonJS

// Main import with auto-registration
const tf = require('@tensorflow/tfjs-backend-cpu');

// Base import for manual setup
const { MathBackendCPU, shared, version_cpu } = require('@tensorflow/tfjs-backend-cpu/base');

Basic Usage

Automatic Backend Setup

import '@tensorflow/tfjs-backend-cpu';
import * as tf from '@tensorflow/tfjs-core';

// CPU backend is automatically registered and available
const tensor = tf.tensor2d([[1, 2], [3, 4]]);
const result = tensor.add(tf.scalar(10));
console.log(await result.data()); // [11, 12, 13, 14]

Manual Backend Management

import { MathBackendCPU } from '@tensorflow/tfjs-backend-cpu/base';
import * as tf from '@tensorflow/tfjs-core';

// Create and register backend manually
const backend = new MathBackendCPU();
tf.registerBackend('cpu', () => backend, 1 /* priority */);

// Set as active backend
await tf.setBackend('cpu');

// Use backend directly for advanced operations
const dataId = backend.write(new Float32Array([1, 2, 3, 4]), [2, 2], 'float32');
const tensorInfo = { dataId, shape: [2, 2], dtype: 'float32' as const };
const buffer = backend.bufferSync(tensorInfo);
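
The values written above can be read back either through the buffer or directly from backend storage:

// Read values back through the buffer or straight from the backend
console.log(buffer.get(0, 1));         // 2 (row 0, column 1)
console.log(backend.readSync(dataId)); // Float32Array [1, 2, 3, 4]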

Architecture

Core Components

The CPU backend is built on three main architectural pillars:

  1. MathBackendCPU Class - The primary backend implementation
  2. Shared Kernel Implementations - Reusable operation implementations
  3. Automatic Kernel Registration - 168+ pre-configured operations (sketched below)
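
Importing the package's main entry point wires these pieces together. The sketch below approximates what happens on import; the loop over kernel configs is illustrative only, standing in for the full set of kernel configurations the real entry point registers:

import * as tf from '@tensorflow/tfjs-core';
import { MathBackendCPU } from '@tensorflow/tfjs-backend-cpu/base';

// 1. Register the backend factory under the name 'cpu' (priority 1, so
//    higher-priority backends such as WebGL win when they are also registered)
tf.registerBackend('cpu', () => new MathBackendCPU(), 1);

// 2. Register each kernel config with tfjs-core so ops dispatch to the
//    CPU implementations (illustrative - the real package registers 168+ configs)
// for (const kernelConfig of allCpuKernelConfigs) {
//   tf.registerKernel(kernelConfig);
// }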

Data Flow

// Data flow example showing backend interaction
import { MathBackendCPU } from '@tensorflow/tfjs-backend-cpu/base';

const backend = new MathBackendCPU();

// 1. Write data to backend storage
const dataId = backend.write(
  new Float32Array([1, 2, 3, 4]), // values
  [2, 2],                         // shape  
  'float32'                       // dtype
);

// 2. Create tensor info
const tensorInfo = backend.makeTensorInfo([2, 2], 'float32');

// 3. Read data (sync or async)
const syncData = backend.readSync(dataId);
const asyncData = await backend.read(dataId);

// 4. Memory management
backend.incRef(dataId);  // increment reference count
backend.decRef(dataId);  // decrement reference count
backend.disposeData(dataId); // cleanup when refCount reaches 0

Capabilities

Mathematical Operations

The CPU backend provides comprehensive mathematical operation support:

import { shared } from '@tensorflow/tfjs-backend-cpu/base';

// Basic arithmetic - optimized implementations
const addResult = shared.addImpl(
  [2, 2],                    // aShape
  [2, 2],                    // bShape  
  new Float32Array([1, 2, 3, 4]), // aVals
  new Float32Array([5, 6, 7, 8]), // bVals
  'float32'                  // dtype
); // Returns [Float32Array([6, 8, 10, 12]), [2, 2]]

// Advanced functions (unary impls take the input values and dtype)
const expResult = shared.expImpl(new Float32Array([0, 1, 2]), 'float32');
const sqrtResult = shared.sqrtImpl(new Float32Array([1, 4, 9, 16]), 'float32');

→ Complete mathematical operations documentation

Memory Management

Advanced memory management with reference counting:

const backend = new MathBackendCPU();
const dataId = backend.write(new Float32Array([1, 2, 3]), [3], 'float32');

// Reference counting
console.log(backend.refCount(dataId)); // 1
backend.incRef(dataId);
console.log(backend.refCount(dataId)); // 2

// Memory cleanup
backend.decRef(dataId);
const disposed = backend.disposeData(dataId); // true if memory freed
console.log(backend.numDataIds()); // Total number of stored tensors

→ Complete memory management documentation

Tensor Operations

Full tensor lifecycle management:

const backend = new MathBackendCPU();

// Create tensor with automatic memory management
const tensorInfo = backend.makeTensorInfo(
  [3, 3],        // shape
  'float32',     // dtype
  new Float32Array(9).fill(0) // values (optional)
);

// Buffer operations for direct data access
const buffer = backend.bufferSync(tensorInfo);
buffer.set(1.0, 0, 0); // Set value at [0, 0]
buffer.set(2.0, 1, 1); // Set value at [1, 1]

// Create output tensor from computation
const outputTensor = backend.makeOutput(
  new Float32Array([1, 4, 9]), // computed values
  [3],                         // output shape
  'float32'                    // dtype
);

→ Complete tensor operations documentation

Array Manipulation

Comprehensive array manipulation operations:

import { shared } from '@tensorflow/tfjs-backend-cpu/base';

// Concatenation - each input is a {vals, shape} pair, plus the output shape
const concatResult = shared.concatImpl(
  [
    { vals: new Float32Array([1, 2]), shape: [2] }, // first input
    { vals: new Float32Array([3, 4]), shape: [2] }  // second input
  ],
  [4],       // output shape (concatenated along axis 0)
  'float32', // dtype
  true       // simplyConcat - inputs can be copied back-to-back
);

// Slicing operations
const sliceResult = shared.sliceImpl(
  new Float32Array([1, 2, 3, 4, 5, 6]), // input values
  [0, 1],                               // begin indices
  [2, 2],                               // slice sizes
  [2, 3],                               // input shape
  'float32'                             // dtype
);

// Transposition
const transposeResult = shared.transposeImpl(
  new Float32Array([1, 2, 3, 4]), // input values
  [2, 2],                         // input shape
  'float32',                      // dtype
  [1, 0],                         // permutation
  [2, 2]                          // output shape after permuting
);

→ Complete array manipulation documentation

Performance Utilities

Built-in performance monitoring and optimization:

import { MathBackendCPU, shared } from '@tensorflow/tfjs-backend-cpu/base';

const backend = new MathBackendCPU();

// Performance timing
const timingInfo = await backend.time(() => {
  // Your expensive operation here
  const result = shared.addImpl([1000], [1000], 
    new Float32Array(1000).fill(1), 
    new Float32Array(1000).fill(2), 
    'float32'
  );
});

console.log(`Operation took ${timingInfo.kernelMs}ms`);

// Memory monitoring  
const memInfo = backend.memory();
console.log(`Memory usage: ${memInfo.numBytes} bytes (unreliable: ${memInfo.unreliable})`);

// Backend configuration
console.log(`Block size: ${backend.blockSize}`); // 48
console.log(`Float precision: ${backend.floatPrecision()}`); // 32
console.log(`Machine epsilon: ${backend.epsilon()}`);

→ Complete performance utilities documentation

Advanced Usage

Custom Kernel Development

import { KernelConfig, KernelFunc } from '@tensorflow/tfjs-core';
import { MathBackendCPU } from '@tensorflow/tfjs-backend-cpu/base';

// Define custom kernel implementation
const customKernelFunc: KernelFunc = ({ inputs, backend, attrs }) => {
  const { x } = inputs;
  const cpuBackend = backend as MathBackendCPU;
  
  // Use shared implementations or create custom logic
  const values = cpuBackend.readSync(x.dataId);
  const result = new Float32Array(values.length);
  
  // Custom operation logic here
  for (let i = 0; i < values.length; i++) {
    result[i] = values[i] * 2; // Example: double all values
  }
  
  return cpuBackend.makeOutput(result, x.shape, x.dtype);
};

// Register custom kernel
const customKernelConfig: KernelConfig = {
  kernelName: 'CustomOp',
  backendName: 'cpu',
  kernelFunc: customKernelFunc
};
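
The config can then be registered with tfjs-core's kernel registry so the op dispatches to this implementation whenever the CPU backend is active:

import { registerKernel } from '@tensorflow/tfjs-core';

// Make 'CustomOp' available on the 'cpu' backend
registerKernel(customKernelConfig);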

Shared Implementation Reuse

import { shared } from '@tensorflow/tfjs-backend-cpu/base';
import { TensorInfo } from '@tensorflow/tfjs-core';

// Reuse optimized implementations in other backends
// (readSync and makeOutput are assumed to be provided by the custom backend)
class CustomBackend {
  customAdd(a: TensorInfo, b: TensorInfo): TensorInfo {
    const aVals = this.readSync(a.dataId);
    const bVals = this.readSync(b.dataId);
    
    // Leverage CPU backend's optimized implementation
    const [resultVals, resultShape] = shared.addImpl(
      a.shape, b.shape, aVals, bVals, a.dtype
    );
    
    return this.makeOutput(resultVals, resultShape, a.dtype);
  }
}

Version Information

import { version_cpu } from '@tensorflow/tfjs-backend-cpu/base';

console.log(`CPU Backend Version: ${version_cpu}`); // "4.22.0"

Type Definitions

Core Types

import type {
  DataType,
  TensorInfo,
  BackendValues,
  KernelBackend,
  DataId,
  TypedArray
} from '@tensorflow/tfjs-core';

// Backend-specific types
interface TensorData<D extends DataType> {
  values?: BackendValues;
  dtype: D;
  complexTensorInfos?: { real: TensorInfo, imag: TensorInfo };
  refCount: number;
}

// Binary operation signatures
type SimpleBinaryOperation = (a: number | string, b: number | string) => number;

type SimpleBinaryKernelImpl = (
  aShape: number[],
  bShape: number[],
  aVals: TypedArray | string[],
  bVals: TypedArray | string[],
  dtype: DataType
) => [TypedArray, number[]];
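
As an illustration of how these signatures are used, a toy binary implementation written against SimpleBinaryKernelImpl might look like the following (a sketch only: it assumes equal input shapes, no broadcasting, and float32 data):

// Element-wise maximum, conforming to the SimpleBinaryKernelImpl shape
const maxImplSketch: SimpleBinaryKernelImpl = (aShape, bShape, aVals, bVals, dtype) => {
  const out = new Float32Array(aVals.length);
  for (let i = 0; i < aVals.length; i++) {
    out[i] = Math.max(aVals[i] as number, bVals[i] as number);
  }
  return [out, aShape]; // output shape equals the input shape without broadcasting
};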

Further Documentation

  • docs/backend-cpu.md
  • docs/shared-kernels.md

Package Dependencies

{
  "peerDependencies": {
    "@tensorflow/tfjs-core": "4.22.0"
  },
  "dependencies": {
    "@types/seedrandom": "^2.4.28",
    "seedrandom": "^3.0.5"
  }
}