# Body Part Segmentation

Advanced body part segmentation providing pixel-level classification of 24 distinct human body parts, enabling fine-grained analysis and effects that target specific anatomical regions such as the face, hands, torso, and limbs.

## Capabilities

### Semantic Part Segmentation

Segments the body parts of all people in the image into a single combined mask.

```typescript { .api }
/**
 * Segments body parts for all people in the image
 * @param input - Image input (ImageData, HTMLImageElement, HTMLCanvasElement, HTMLVideoElement, OffscreenCanvas, tf.Tensor3D)
 * @param config - Optional inference configuration
 * @returns Promise resolving to semantic part segmentation result
 */
segmentPersonParts(
  input: BodyPixInput,
  config?: PersonInferenceConfig
): Promise<SemanticPartSegmentation>;

interface SemanticPartSegmentation {
  /** Part IDs (0-23) for each pixel, -1 for background */
  data: Int32Array;
  /** Mask width in pixels */
  width: number;
  /** Mask height in pixels */
  height: number;
  /** Array of all detected poses */
  allPoses: Pose[];
}
```

### Multi-Person Part Segmentation

Segments body parts for multiple people individually, providing a separate part mask for each detected person.

```typescript { .api }
/**
 * Segments body parts for multiple people individually
 * @param input - Image input
 * @param config - Optional multi-person inference configuration
 * @returns Promise resolving to array of individual part segmentations
 */
segmentMultiPersonParts(
  input: BodyPixInput,
  config?: MultiPersonInstanceInferenceConfig
): Promise<PartSegmentation[]>;

interface PartSegmentation {
  /** Part IDs (0-23) for each pixel, -1 for background */
  data: Int32Array;
  /** Mask width in pixels */
  width: number;
  /** Mask height in pixels */
  height: number;
  /** Pose keypoints for this person */
  pose: Pose;
}
```
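Each element of the returned array describes one person, so per-person masks can be processed independently. As a quick illustration, the `foregroundPixels` helper below is hypothetical (not part of the BodyPix API) and simply counts non-background pixels in one person's mask:

```typescript
// Count the non-background pixels in a single person's part mask.
// Any part ID >= 0 belongs to that person; -1 marks background.
function foregroundPixels(seg: { data: Int32Array }): number {
  let count = 0;
  for (const partId of seg.data) {
    if (partId >= 0) count++;
  }
  return count;
}

// Tiny fabricated mask: two torso pixels (12) and one face pixel (0)
const person = { data: Int32Array.from([-1, 12, 12, -1, 0]) };
console.log(foregroundPixels(person)); // → 3
```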

**Usage Examples:**

```typescript
import * as bodyPix from '@tensorflow-models/body-pix';

const net = await bodyPix.load();
const imageElement = document.getElementById('person-image') as HTMLImageElement;

// Semantic part segmentation for all people
const partSegmentation = await net.segmentPersonParts(imageElement);

// Count pixels for each body part
const partCounts = new Array(24).fill(0);
for (let i = 0; i < partSegmentation.data.length; i++) {
  const partId = partSegmentation.data[i];
  if (partId >= 0 && partId < 24) {
    partCounts[partId]++;
  }
}

// Multi-person part segmentation
const peoplePartSegmentations = await net.segmentMultiPersonParts(imageElement, {
  maxDetections: 3,
  scoreThreshold: 0.5
});

console.log(`Detected ${peoplePartSegmentations.length} people with body parts`);
```
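The `data` array is stored in row-major order, so a pixel's part ID can be looked up from its (x, y) coordinates. The `partAt` helper below is defined here for illustration only; it is not part of the BodyPix API:

```typescript
// Look up the part ID at pixel (x, y) in a row-major segmentation mask.
function partAt(data: Int32Array, width: number, x: number, y: number): number {
  return data[y * width + x];
}

// Fabricated 3x2 mask: a face pixel (part 0) at (1, 0), background elsewhere
const mask = Int32Array.from([-1, 0, -1, -1, -1, -1]);
console.log(partAt(mask, 3, 1, 0)); // → 0
console.log(partAt(mask, 3, 0, 1)); // → -1
```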

## Body Part Classification

BodyPix identifies 24 distinct body parts using the following ID mapping:

```typescript { .api }
const PART_CHANNELS: string[] = [
  'left_face',              // 0
  'right_face',             // 1
  'left_upper_arm_front',   // 2
  'left_upper_arm_back',    // 3
  'right_upper_arm_front',  // 4
  'right_upper_arm_back',   // 5
  'left_lower_arm_front',   // 6
  'left_lower_arm_back',    // 7
  'right_lower_arm_front',  // 8
  'right_lower_arm_back',   // 9
  'left_hand',              // 10
  'right_hand',             // 11
  'torso_front',            // 12
  'torso_back',             // 13
  'left_upper_leg_front',   // 14
  'left_upper_leg_back',    // 15
  'right_upper_leg_front',  // 16
  'right_upper_leg_back',   // 17
  'left_lower_leg_front',   // 18
  'left_lower_leg_back',    // 19
  'right_lower_leg_front',  // 20
  'right_lower_leg_back',   // 21
  'left_feet',              // 22
  'right_feet'              // 23
];
```

### Part Groupings

**Face Parts:** 0-1 (left_face, right_face)

**Hand Parts:** 10-11 (left_hand, right_hand)

**Arm Parts:** 2-9 (upper/lower arms, front/back)

**Torso Parts:** 12-13 (torso_front, torso_back)

**Leg Parts:** 14-21 (upper/lower legs, front/back)

**Feet Parts:** 22-23 (left_feet, right_feet)
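These ID ranges can be collapsed into coarse groups when per-part detail is not needed. The `partGroup` helper below is an illustrative sketch following the ranges listed above, not part of the BodyPix API:

```typescript
// Map a BodyPix part ID (0-23) to its coarse group; anything else
// (including the -1 background marker) falls through to 'background'.
function partGroup(partId: number): string {
  if (partId === 0 || partId === 1) return 'face';
  if (partId === 10 || partId === 11) return 'hand';
  if (partId >= 2 && partId <= 9) return 'arm';
  if (partId === 12 || partId === 13) return 'torso';
  if (partId >= 14 && partId <= 21) return 'leg';
  if (partId === 22 || partId === 23) return 'feet';
  return 'background';
}

console.log(partGroup(12)); // → 'torso'
console.log(partGroup(-1)); // → 'background'
```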

## Advanced Usage Examples

### Target Specific Body Parts

```typescript
import { PART_CHANNELS } from '@tensorflow-models/body-pix';

// Blur faces for privacy (parts 0 and 1)
const FACE_PARTS = [0, 1];
const partSegmentation = await net.segmentPersonParts(imageElement);

// Check if faces are detected
const hasFaces = partSegmentation.data.some(partId => FACE_PARTS.includes(partId));
if (hasFaces) {
  // Apply face blur effect
  bodyPix.blurBodyPart(canvas, imageElement, partSegmentation, FACE_PARTS, 15);
}

// Create hand-only mask
const HAND_PARTS = [10, 11]; // left_hand, right_hand
const handMask = bodyPix.toMask(
  partSegmentation,
  { r: 255, g: 255, b: 255, a: 255 }, // white hands
  { r: 0, g: 0, b: 0, a: 0 },         // transparent background
  false,
  HAND_PARTS
);
```

### Analyze Body Part Coverage

```typescript
function analyzeBodyParts(partSegmentation: SemanticPartSegmentation) {
  const { data, width, height } = partSegmentation;
  const totalPixels = width * height;
  const partStats = new Array(24).fill(0);

  // Count pixels for each part
  for (let i = 0; i < data.length; i++) {
    const partId = data[i];
    if (partId >= 0 && partId < 24) {
      partStats[partId]++;
    }
  }

  // Calculate coverage percentages
  const partCoverage = partStats.map((count, partId) => ({
    partName: PART_CHANNELS[partId],
    pixelCount: count,
    coveragePercent: (count / totalPixels) * 100
  }));

  return partCoverage.filter(part => part.pixelCount > 0);
}

const partAnalysis = analyzeBodyParts(partSegmentation);
console.log('Visible body parts:', partAnalysis);
```
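Coverage entries of this shape can be reduced further, for example to find the most visible part. The `dominantPart` helper below is a sketch assuming the entry shape produced above; it is not part of the BodyPix API:

```typescript
// One coverage entry per visible body part.
interface PartCoverage {
  partName: string;
  pixelCount: number;
  coveragePercent: number;
}

// Return the entry with the largest pixel count, or undefined for an empty list.
function dominantPart(coverage: PartCoverage[]): PartCoverage | undefined {
  return coverage.reduce<PartCoverage | undefined>(
    (best, part) => (!best || part.pixelCount > best.pixelCount ? part : best),
    undefined
  );
}

const sample: PartCoverage[] = [
  { partName: 'torso_front', pixelCount: 500, coveragePercent: 5 },
  { partName: 'left_face', pixelCount: 120, coveragePercent: 1.2 }
];
console.log(dominantPart(sample)?.partName); // → 'torso_front'
```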

### Create Custom Part Masks

```typescript
// Create colored visualization of all parts
const coloredPartMask = bodyPix.toColoredPartMask(partSegmentation);

// Create mask for upper body only (face + arms + hands + torso)
const UPPER_BODY_PARTS = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13];
const upperBodyMask = bodyPix.toMask(
  partSegmentation,
  { r: 0, g: 255, b: 0, a: 255 }, // green foreground
  { r: 0, g: 0, b: 0, a: 0 },     // transparent background
  false,
  UPPER_BODY_PARTS
);
```

## Use Cases

- **Privacy protection** by blurring faces or other sensitive body parts
- **Fashion and retail** applications for virtual try-on of clothing
- **Medical analysis** for posture and body composition assessment
- **Sports analysis** for technique evaluation and movement tracking
- **Augmented reality** filters targeting specific body regions
- **Content moderation** for detecting inappropriate body part exposure
- **Animation and mocap** for detailed character rigging and animation
- **Accessibility tools** for gesture recognition and sign language detection