MobileNet

MobileNet provides pretrained image classification models for TensorFlow.js that enable real-time image recognition in web browsers and Node.js applications. MobileNets are small, low-latency, low-power convolutional neural networks parameterized to meet resource constraints while maintaining competitive accuracy. The library offers a simple JavaScript API that classifies image sources (img, video, and canvas elements, or tensors) and returns predictions with confidence scores. Both the MobileNetV1 and V2 architectures are supported, with configurable alpha parameters to trade accuracy for performance.

Package Information

  • Package Name: @tensorflow-models/mobilenet
  • Package Type: npm
  • Language: TypeScript
  • Installation: npm install @tensorflow-models/mobilenet

Peer Dependencies: Requires @tensorflow/tfjs-core and @tensorflow/tfjs-converter (both version ^4.9.0) to be installed alongside this package

Core Imports

import * as mobilenet from '@tensorflow-models/mobilenet';

For named ES module imports:

import { load, MobileNet, ModelConfig } from '@tensorflow-models/mobilenet';

For CommonJS:

const mobilenet = require('@tensorflow-models/mobilenet');

Basic Usage

import * as mobilenet from '@tensorflow-models/mobilenet';

// Load the default model (MobileNetV1, alpha=1.0)
const model = await mobilenet.load();

// Get an image element from the DOM
const img = document.getElementById('myImage') as HTMLImageElement;

// Classify the image
const predictions = await model.classify(img);
console.log('Predictions:', predictions);
// Output: [{ className: "Egyptian cat", probability: 0.838 }, ...]

// Get embeddings for transfer learning
const embeddings = model.infer(img, true);
console.log('Embedding shape:', embeddings.shape); // [1, 1024] for the default V1 model

// Get raw logits
const logits = model.infer(img, false);
console.log('Logits shape:', logits.shape); // [1, 1000]

Architecture

MobileNet is built around several key components:

  • Model Loading: Supports both predefined TensorFlow Hub models and custom model URLs
  • Image Processing: Handles various input types (Tensor, DOM elements) with automatic preprocessing
  • Dual Architectures: MobileNetV1 and V2 with different accuracy/performance tradeoffs
  • Alpha Scaling: Width multipliers (0.25, 0.50, 0.75, 1.0) to control model size and performance
  • Multi-Purpose Output: Supports both classification predictions and feature embeddings

Capabilities

Model Loading

Load a MobileNet model with specified configuration options.

/**
 * Loads a MobileNet model with specified configuration
 * @param modelConfig - Configuration for model loading (defaults to version: 1, alpha: 1.0)
 * @returns Promise resolving to MobileNet instance
 */
function load(modelConfig?: ModelConfig): Promise<MobileNet>;

interface ModelConfig {
  /** The MobileNet version number (1 or 2). Defaults to 1 */
  version: MobileNetVersion;
  /** Width multiplier trading accuracy for performance. Defaults to 1.0 */
  alpha?: MobileNetAlpha;
  /** Custom model url or tf.io.IOHandler object */
  modelUrl?: string | tf.io.IOHandler;
  /** Input range expected by custom models, typically [0, 1] or [-1, 1] */
  inputRange?: [number, number];
}

type MobileNetVersion = 1 | 2;
type MobileNetAlpha = 0.25 | 0.50 | 0.75 | 1.0; // Note: 0.25 only available for version 1

Usage Examples:

// Load default model (V1, alpha=1.0)
const model1 = await mobilenet.load();

// Load MobileNetV1 with smallest alpha for maximum speed
const fastModel = await mobilenet.load({
  version: 1,
  alpha: 0.25  // Only available for V1
});

// Load MobileNetV2 with alpha=0.5 for faster performance
const model2 = await mobilenet.load({
  version: 2,
  alpha: 0.5  // V2 supports 0.50, 0.75, 1.0 (no 0.25)
});

// Load custom model from URL
const model3 = await mobilenet.load({
  version: 1,
  modelUrl: 'https://my-custom-model-url',
  inputRange: [-1, 1]
});

Image Classification

Classify images and return top predicted classes with probabilities.

/**
 * Classifies an image returning top predicted classes with probabilities
 * @param img - Image to classify (Tensor, ImageData, or DOM element)
 * @param topk - Number of top predictions to return. Defaults to 3
 * @returns Promise resolving to array of predictions with class names and probabilities
 */
classify(
  img: tf.Tensor3D | ImageData | HTMLImageElement | HTMLCanvasElement | HTMLVideoElement,
  topk?: number
): Promise<Array<{className: string, probability: number}>>;

Usage Examples:

// Classify image element
const img = document.getElementById('myImage') as HTMLImageElement;
const predictions = await model.classify(img);
console.log(predictions);
// [{ className: "Egyptian cat", probability: 0.838 }, 
//  { className: "tabby cat", probability: 0.046 }, ...]

// Get top 5 predictions
const top5 = await model.classify(img, 5);

// Classify tensor directly
const tensor = tf.zeros([224, 224, 3]);
const tensorPredictions = await model.classify(tensor);
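
Conceptually, classify runs the model to obtain logits, applies softmax, and selects the top-k entries. The following is a minimal plain-TypeScript sketch of that post-processing step only (the class names here are placeholders, not the real ImageNet labels):

```typescript
// Softmax converts raw logits into probabilities that sum to 1.
function softmax(logits: number[]): number[] {
  const max = Math.max(...logits);
  const exps = logits.map((x) => Math.exp(x - max)); // subtract max for numerical stability
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// Pair each probability with its class name, sort descending, keep the top k.
function topK(
  probabilities: number[],
  classNames: string[],
  k: number
): Array<{ className: string; probability: number }> {
  return probabilities
    .map((probability, i) => ({ className: classNames[i], probability }))
    .sort((a, b) => b.probability - a.probability)
    .slice(0, k);
}

const probs = softmax([2.0, 1.0, 0.1]);
const top2 = topK(probs, ['cat', 'dog', 'bird'], 2);
// top2[0].className === 'cat'
```

This mirrors the shape of the result classify returns: an array of { className, probability } objects sorted by descending probability.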

Feature Extraction

Extract feature embeddings or raw logits for transfer learning and custom applications.

/**
 * Computes logits or embeddings for the provided image
 * @param img - Image to process (Tensor, ImageData, or DOM element)
 * @param embedding - If true, returns embedding features. If false, returns 1000-dim logits
 * @returns Tensor containing logits or embeddings
 */
infer(
  img: tf.Tensor | ImageData | HTMLImageElement | HTMLCanvasElement | HTMLVideoElement,
  embedding?: boolean
): tf.Tensor;

Usage Examples:

// Get feature embeddings for transfer learning
const embeddings = model.infer(img, true);
console.log('Embedding shape:', embeddings.shape); // [1, 1024] for V1, varies by version/alpha

// Get raw logits for custom processing
const logits = model.infer(img, false); // default is false
console.log('Logits shape:', logits.shape); // [1, 1000]

// Process multiple images in batch (if input is batched tensor)
const batchTensor = tf.zeros([3, 224, 224, 3]);
const batchLogits = model.infer(batchTensor);
console.log('Batch logits shape:', batchLogits.shape); // [3, 1000]
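
A common transfer-learning pattern is to compare embeddings with a nearest-neighbor classifier. A sketch in plain TypeScript, assuming the tensor values have already been read back into ordinary arrays (e.g. via `await embeddings.data()`); the helper names are illustrative, not part of this library's API:

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the label of the stored example whose embedding is closest to the query.
function nearestLabel(
  query: number[],
  examples: Array<{ label: string; embedding: number[] }>
): string {
  let best = { label: '', score: -Infinity };
  for (const ex of examples) {
    const score = cosineSimilarity(query, ex.embedding);
    if (score > best.score) best = { label: ex.label, score };
  }
  return best.label;
}
```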

Model Interface

The loaded MobileNet model instance provides these methods:

interface MobileNet {
  /** Initialize the model (called automatically by load()) */
  load(): Promise<void>;
  
  /** Extract logits or embeddings from an image */
  infer(
    img: tf.Tensor | ImageData | HTMLImageElement | HTMLCanvasElement | HTMLVideoElement,
    embedding?: boolean
  ): tf.Tensor;
  
  /** Classify an image and return top predictions */
  classify(
    img: tf.Tensor3D | ImageData | HTMLImageElement | HTMLCanvasElement | HTMLVideoElement,
    topk?: number
  ): Promise<Array<{className: string, probability: number}>>;
}

Types

/** MobileNet version numbers */
type MobileNetVersion = 1 | 2;

/** Alpha multipliers controlling model width (0.25 only available for version 1) */
type MobileNetAlpha = 0.25 | 0.50 | 0.75 | 1.0;

/** Configuration for model loading */
interface ModelConfig {
  /** The MobileNet version number (1 or 2). Defaults to 1 */
  version: MobileNetVersion;
  /** Width multiplier controlling model width. Defaults to 1.0 */
  alpha?: MobileNetAlpha;
  /** Custom model url or tf.io.IOHandler object */
  modelUrl?: string | tf.io.IOHandler;
  /** Input range expected by custom models, typically [0, 1] or [-1, 1] */
  inputRange?: [number, number];
}

/** Model information for predefined variants */
interface MobileNetInfo {
  url: string;
  inputRange: [number, number];
}

/** Classification result */
interface ClassificationResult {
  className: string;
  probability: number;
}

Version Information

/** Package version string */
export const version: string; // "2.1.1"

Error Handling

The library throws errors in the following scenarios:

  • Missing TensorFlow.js: Throws Error if @tensorflow/tfjs-core is not available
  • Invalid Version: Throws Error for unsupported version numbers (only 1 and 2 are supported)
  • Invalid Alpha: Throws Error for unsupported alpha values for the specified version
  • Model Loading: Network errors when loading models from TensorFlow Hub or custom URLs

try {
  const model = await mobilenet.load({ version: 3 as any }); // Invalid version
} catch (error) {
  console.error('Invalid version:', (error as Error).message);
}
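
For illustration, the version/alpha constraints described above can be checked before calling load(). This is a hypothetical pre-flight helper based on the supported combinations listed in this document, not part of the library's API:

```typescript
// Alpha values available for each published MobileNet version
// (0.25 exists only for version 1).
const SUPPORTED_ALPHAS: Record<number, number[]> = {
  1: [0.25, 0.5, 0.75, 1.0],
  2: [0.5, 0.75, 1.0],
};

// Returns an error message for an unsupported combination, or null if valid.
function validateConfig(version: number, alpha: number): string | null {
  if (!(version in SUPPORTED_ALPHAS)) {
    return `Invalid version ${version}: only 1 and 2 are supported`;
  }
  if (!SUPPORTED_ALPHAS[version].includes(alpha)) {
    return `Invalid alpha ${alpha} for MobileNet version ${version}`;
  }
  return null; // config is valid
}
```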

Supported Input Types

All image processing methods accept these input types:

  • tf.Tensor: 3D tensor [height, width, channels] or 4D tensor [batch, height, width, channels]
  • ImageData: Canvas ImageData object
  • HTMLImageElement: HTML <img> elements
  • HTMLCanvasElement: HTML <canvas> elements
  • HTMLVideoElement: HTML <video> elements

Images are automatically preprocessed to 224x224 pixels and normalized according to the model's expected input range.
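
The normalization step can be illustrated in plain TypeScript: 8-bit pixel values in [0, 255] are mapped linearly into the model's inputRange, such as [-1, 1]. This is a sketch of the idea, not the library's internal code:

```typescript
// Map raw pixel values [0, 255] into the model's expected input range.
function normalizePixels(
  pixels: number[],
  inputRange: [number, number]
): number[] {
  const [min, max] = inputRange;
  return pixels.map((p) => (p / 255) * (max - min) + min);
}

normalizePixels([0, 127.5, 255], [-1, 1]); // [-1, 0, 1]
```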

Performance Considerations

  • Version: MobileNetV1 is faster, V2 is more accurate
  • Alpha: Lower alpha values (0.25, 0.5) are faster but less accurate
  • Input Size: All inputs are resized to 224x224, larger inputs require more preprocessing
  • Memory: Call dispose() on tensors returned from infer(), or wrap calls in tf.tidy(), to prevent memory leaks
