# PoseNet

Pretrained PoseNet model in TensorFlow.js for real-time human pose estimation from images and video streams.

## Loading a Model

PoseNet's model loading and configuration system selects between neural network architectures and performance parameters. `load` loads a PoseNet model with configurable architecture and performance settings.
```typescript
/**
 * Load PoseNet model instance from checkpoint with configurable architecture
 * @param config - Model configuration options, defaults to MobileNetV1 setup
 * @returns Promise resolving to configured PoseNet instance
 */
function load(config?: ModelConfig): Promise<PoseNet>;
```

Usage examples:
```typescript
import * as posenet from '@tensorflow-models/posenet';

// Default MobileNetV1 model (fastest)
const net = await posenet.load();

// MobileNetV1 with custom configuration
const mobileNet = await posenet.load({
  architecture: 'MobileNetV1',
  outputStride: 16,
  inputResolution: { width: 640, height: 480 },
  multiplier: 0.75
});

// ResNet50 model (most accurate)
const resNet = await posenet.load({
  architecture: 'ResNet50',
  outputStride: 32,
  inputResolution: { width: 257, height: 200 },
  quantBytes: 2
});

// Custom model URL
const customNet = await posenet.load({
  architecture: 'MobileNetV1',
  outputStride: 16,
  inputResolution: 257,
  modelUrl: 'https://example.com/custom-posenet-model.json'
});
```

Configuration interface for customizing model architecture and performance characteristics.
```typescript
/**
 * Configuration options for PoseNet model loading
 */
interface ModelConfig {
  /** Neural network architecture: MobileNetV1 (fast) or ResNet50 (accurate) */
  architecture: PoseNetArchitecture;
  /** Output stride controlling the resolution vs. speed trade-off */
  outputStride: PoseNetOutputStride;
  /** Input image resolution for processing */
  inputResolution: InputResolution;
  /** MobileNetV1 depth multiplier (MobileNetV1 only) */
  multiplier?: MobileNetMultiplier;
  /** Custom model URL for local development or restricted access */
  modelUrl?: string;
  /** Weight quantization bytes trading model size for accuracy */
  quantBytes?: PoseNetQuantBytes;
}

type PoseNetArchitecture = 'ResNet50' | 'MobileNetV1';
type PoseNetOutputStride = 32 | 16 | 8;
type MobileNetMultiplier = 0.50 | 0.75 | 1.0;
type PoseNetQuantBytes = 1 | 2 | 4;
type InputResolution = number | { width: number, height: number };
```

Choose between two pre-trained neural network architectures:
- **MobileNetV1:** smaller and faster, at some cost in accuracy; the default architecture.
- **ResNet50:** larger and slower, but the most accurate.

Configuration parameters:

- **Output stride:** controls the output resolution of the network relative to the input. A smaller stride (8) yields higher accuracy at lower speed; a larger stride (32) is faster but less accurate.
- **Input resolution:** the size the image is resized to before it is fed to the network, given as a single number or as `{width, height}`. Higher resolutions are more accurate but slower.
- **Multiplier (MobileNetV1 only):** depth multiplier for the convolution layers. Smaller values yield a smaller, faster model at the cost of accuracy.
- **Quantization bytes:** bytes used per weight. `4` disables quantization (highest accuracy, largest model), while `2` and `1` shrink the model at some accuracy cost.
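To make these trade-offs concrete, the sketch below picks one of two preset configurations depending on whether speed or accuracy matters more. The `chooseConfig` helper is hypothetical (not part of the posenet API), the specific preset values are illustrative choices, and the type aliases are local copies of the documented types so the sketch is self-contained.

```typescript
// Local copies of the documented config types so this sketch stands alone.
type PoseNetArchitecture = 'ResNet50' | 'MobileNetV1';
type PoseNetOutputStride = 32 | 16 | 8;
type InputResolution = number | { width: number; height: number };

interface PresetConfig {
  architecture: PoseNetArchitecture;
  outputStride: PoseNetOutputStride;
  inputResolution: InputResolution;
  multiplier?: 0.50 | 0.75 | 1.0;
  quantBytes?: 1 | 2 | 4;
}

// Hypothetical helper: returns a config object you could pass to posenet.load().
function chooseConfig(priority: 'speed' | 'accuracy'): PresetConfig {
  if (priority === 'speed') {
    // Small MobileNetV1: reduced multiplier and quantized weights for speed and size.
    return {
      architecture: 'MobileNetV1',
      outputStride: 16,
      inputResolution: 257,
      multiplier: 0.50,
      quantBytes: 2,
    };
  }
  // ResNet50 with a smaller stride, higher resolution, and full-precision weights.
  return {
    architecture: 'ResNet50',
    outputStride: 16,
    inputResolution: { width: 640, height: 480 },
    quantBytes: 4,
  };
}
```

You could then call, for example, `posenet.load(chooseConfig('speed'))`.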
The default configuration when no parameters are provided:
```typescript
const MOBILENET_V1_CONFIG: ModelConfig = {
  architecture: 'MobileNetV1',
  outputStride: 16,
  multiplier: 0.75,
  inputResolution: 257,
};
```

The main class returned by the `load` function, providing pose estimation methods.
```typescript
/**
 * Main PoseNet class for pose estimation
 */
class PoseNet {
  /** Underlying neural network model */
  readonly baseModel: BaseModel;
  /** Model input resolution as [height, width] */
  readonly inputResolution: [number, number];

  /** Estimate a single person's pose from an input image */
  estimateSinglePose(input: PosenetInput, config?: SinglePersonInterfaceConfig): Promise<Pose>;
  /** Estimate multiple people's poses from an input image */
  estimateMultiplePoses(input: PosenetInput, config?: MultiPersonInferenceConfig): Promise<Pose[]>;
  /** Release GPU/CPU memory allocated by the model */
  dispose(): void;
}
```
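A `Pose` returned by `estimateSinglePose` or `estimateMultiplePoses` carries an overall score plus per-keypoint confidence scores. The sketch below assumes the standard posenet result shape (`score` plus `keypoints` with `part`, `score`, and `position`); the `confidentKeypoints` helper is hypothetical, a common pattern for dropping low-confidence keypoints before drawing them.

```typescript
// Minimal local copy of the posenet Pose result shape (an assumption of this
// sketch; the real types ship with @tensorflow-models/posenet).
interface Keypoint {
  part: string;                       // e.g. 'nose', 'leftEye'
  score: number;                      // confidence in [0, 1]
  position: { x: number; y: number }; // pixel coordinates in the input image
}

interface Pose {
  score: number;
  keypoints: Keypoint[];
}

// Hypothetical helper: keep only keypoints above a confidence threshold.
function confidentKeypoints(pose: Pose, minScore = 0.5): Keypoint[] {
  return pose.keypoints.filter((kp) => kp.score >= minScore);
}

// In a browser you would obtain a pose first, e.g.:
//   const net = await posenet.load();
//   const pose = await net.estimateSinglePose(imageElement);
//   const visible = confidentKeypoints(pose, 0.6);
```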
```typescript
type PosenetInput = ImageData | HTMLImageElement | HTMLCanvasElement | HTMLVideoElement | tf.Tensor3D;
```

## Install with Tessl CLI
```shell
npx tessl i tessl/npm-tensorflow-models--posenet
```