Pretrained PoseNet model in TensorFlow.js for real-time human pose estimation from images and video streams
---
Fast pose detection algorithm optimized for single-person scenarios; ideal when only one person is expected in the image.

`estimateSinglePose` detects and estimates a single pose from an input image using the fastest decoding algorithm.
```typescript
/**
 * Estimate a single person's pose from an input image
 * @param input - Input image (various formats supported)
 * @param config - Configuration options for inference
 * @returns Promise resolving to the single detected pose
 */
estimateSinglePose(
  input: PosenetInput,
  config?: SinglePersonInterfaceConfig
): Promise<Pose>;
```

Usage Examples:
```typescript
import * as posenet from '@tensorflow-models/posenet';

// Load model
const net = await posenet.load();

// Basic single pose estimation
const imageElement = document.getElementById('person-image') as HTMLImageElement;
const pose = await net.estimateSinglePose(imageElement);
console.log('Overall pose confidence:', pose.score);
console.log('Number of keypoints detected:', pose.keypoints.length);

// With horizontal flipping (for webcam feeds)
const webcamPose = await net.estimateSinglePose(videoElement, {
  flipHorizontal: true
});

// Process high-confidence keypoints
const highConfidenceKeypoints = pose.keypoints.filter(kp => kp.score > 0.7);
highConfidenceKeypoints.forEach(keypoint => {
  console.log(`${keypoint.part}: (${keypoint.position.x}, ${keypoint.position.y}) confidence: ${keypoint.score}`);
});

// Access specific body parts
const nose = pose.keypoints.find(kp => kp.part === 'nose');
const leftWrist = pose.keypoints.find(kp => kp.part === 'leftWrist');
if (nose && leftWrist) {
  const distance = Math.sqrt(
    Math.pow(nose.position.x - leftWrist.position.x, 2) +
    Math.pow(nose.position.y - leftWrist.position.y, 2)
  );
  console.log('Distance from nose to left wrist:', distance);
}
```

Configuration options for single person pose estimation:
```typescript
/**
 * Configuration interface for single person pose estimation
 */
interface SinglePersonInterfaceConfig {
  /** Whether to flip poses horizontally (useful for webcam feeds) */
  flipHorizontal: boolean;
}

const SINGLE_PERSON_INFERENCE_CONFIG: SinglePersonInterfaceConfig = {
  flipHorizontal: false
};
```

Single pose estimation supports multiple input formats:
```typescript
type PosenetInput =
  | ImageData          // Canvas ImageData object
  | HTMLImageElement   // HTML img element
  | HTMLCanvasElement  // HTML canvas element
  | HTMLVideoElement   // HTML video element
  | tf.Tensor3D;       // TensorFlow.js 3D tensor
```

Input Examples:
```typescript
// HTML Image Element
const img = document.getElementById('photo') as HTMLImageElement;
const pose1 = await net.estimateSinglePose(img);

// HTML Video Element (for real-time processing)
const video = document.getElementById('webcam') as HTMLVideoElement;
const pose2 = await net.estimateSinglePose(video, { flipHorizontal: true });

// HTML Canvas Element
const canvas = document.getElementById('drawing') as HTMLCanvasElement;
const pose3 = await net.estimateSinglePose(canvas);

// ImageData from canvas
const ctx = canvas.getContext('2d')!;
const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
const pose4 = await net.estimateSinglePose(imageData);
```

Single pose estimation returns a Promise that resolves to a Pose object:
```typescript
/**
 * Detected pose with keypoints and confidence score
 */
interface Pose {
  /** Array of 17 keypoints representing body parts */
  keypoints: Keypoint[];
  /** Overall pose confidence score (0-1) */
  score: number;
}

/**
 * Individual body part keypoint with position and confidence
 */
interface Keypoint {
  /** Confidence score for this keypoint (0-1) */
  score: number;
  /** 2D position in image coordinates */
  position: Vector2D;
  /** Body part name (e.g., 'nose', 'leftWrist') */
  part: string;
}

interface Vector2D {
  x: number;
  y: number;
}
```

Single pose estimation detects 17 standard keypoints:
| ID | Part Name | Description |
|---|---|---|
| 0 | nose | Nose tip |
| 1 | leftEye | Left eye center |
| 2 | rightEye | Right eye center |
| 3 | leftEar | Left ear |
| 4 | rightEar | Right ear |
| 5 | leftShoulder | Left shoulder |
| 6 | rightShoulder | Right shoulder |
| 7 | leftElbow | Left elbow |
| 8 | rightElbow | Right elbow |
| 9 | leftWrist | Left wrist |
| 10 | rightWrist | Right wrist |
| 11 | leftHip | Left hip |
| 12 | rightHip | Right hip |
| 13 | leftKnee | Left knee |
| 14 | rightKnee | Right knee |
| 15 | leftAnkle | Left ankle |
| 16 | rightAnkle | Right ankle |
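Because the part names above are fixed, repeated `Array.find` calls can be replaced with a one-time lookup map. The sketch below (the `keypointMap` and `shoulderWidth` helpers are illustrative, not part of the library) shows this pattern using the `Keypoint` shape documented earlier:

```typescript
interface Vector2D { x: number; y: number; }
interface Keypoint { score: number; position: Vector2D; part: string; }

// Build a part-name -> keypoint map for O(1) lookups
// instead of repeated Array.find calls.
function keypointMap(keypoints: Keypoint[]): Map<string, Keypoint> {
  const map = new Map<string, Keypoint>();
  for (const kp of keypoints) map.set(kp.part, kp);
  return map;
}

// Example: shoulder width in pixels, if both shoulders were detected.
function shoulderWidth(keypoints: Keypoint[]): number | null {
  const byPart = keypointMap(keypoints);
  const left = byPart.get('leftShoulder');
  const right = byPart.get('rightShoulder');
  if (!left || !right) return null;
  return Math.hypot(
    left.position.x - right.position.x,
    left.position.y - right.position.y
  );
}
```

The same map can feed any pairwise measurement (e.g. hip width or nose-to-wrist distance) without rescanning the keypoint array.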
The single pose estimation algorithm runs the image through the network to produce keypoint heatmaps and offset vectors, selects the highest-scoring heatmap position for each of the 17 keypoints, refines that position with the corresponding offset, and reports the mean keypoint score as the overall pose score.
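A minimal sketch of that single-pose decoding idea, assuming plain nested arrays for the heatmap and offset outputs (the library's actual tensor layout and internals differ): take the argmax heatmap cell per keypoint channel, refine with the offset, and average the keypoint scores.

```typescript
interface DecodedKeypoint { score: number; x: number; y: number; }

// heatmaps[k][y][x]: score for keypoint k at grid cell (x, y)
// offsets[k][y][x]:  [dy, dx] pixel refinement for that cell
function decodeSinglePose(
  heatmaps: number[][][],
  offsets: [number, number][][][],
  outputStride: number
): { keypoints: DecodedKeypoint[]; score: number } {
  const keypoints: DecodedKeypoint[] = [];
  for (let k = 0; k < heatmaps.length; k++) {
    // Find the highest-scoring grid cell for this keypoint channel.
    let best = { score: -Infinity, gx: 0, gy: 0 };
    for (let y = 0; y < heatmaps[k].length; y++) {
      for (let x = 0; x < heatmaps[k][y].length; x++) {
        if (heatmaps[k][y][x] > best.score) {
          best = { score: heatmaps[k][y][x], gx: x, gy: y };
        }
      }
    }
    // Map the grid cell back to image coordinates and refine with offsets.
    const [dy, dx] = offsets[k][best.gy][best.gx];
    keypoints.push({
      score: best.score,
      x: best.gx * outputStride + dx,
      y: best.gy * outputStride + dy,
    });
  }
  // Overall pose score is the mean keypoint score.
  const score = keypoints.reduce((s, kp) => s + kp.score, 0) / keypoints.length;
  return { keypoints, score };
}
```

Because it takes a single argmax per channel rather than searching for multiple candidate poses, this decoding step is cheap, which is why the single-person path is the fastest option.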
Install with Tessl CLI:

```shell
npx tessl i tessl/npm-tensorflow-models--posenet
```