
# Object Tracking

Multi-object tracking with the BoTSORT and ByteTrack algorithms, which follow objects across video frames and assign each object a persistent identity.

## Capabilities

### Multi-Object Tracking

Track multiple objects across video frames while maintaining consistent identity assignments.

```python { .api }
def track(self, source, stream=False, persist=False, tracker="bytetrack.yaml", **kwargs) -> List[Results]:
    """
    Perform multi-object tracking on video input.

    Parameters:
    - source: Video source (file path, URL, camera index, etc.)
    - stream (bool): Process video as a stream (default: False)
    - persist (bool): Persist tracks between predict calls (default: False)
    - tracker (str): Tracker configuration file ('bytetrack.yaml', 'botsort.yaml')
    - conf (float): Detection confidence threshold (default: 0.3)
    - iou (float): IoU threshold for NMS (default: 0.5)
    - imgsz (int): Image size for inference (default: 640)
    - device (str): Device to run on ('cpu', '0', etc.)
    - show (bool): Display tracking results (default: False)
    - save (bool): Save tracking results (default: False)
    - save_txt (bool): Save results as txt files (default: False)
    - save_conf (bool): Include confidence in saved results (default: False)
    - save_crop (bool): Save cropped tracked objects (default: False)
    - line_width (int): Line thickness for visualization (default: None)
    - vid_stride (int): Video frame-rate stride (default: 1)
    - **kwargs: Additional arguments

    Returns:
    List[Results]: Tracking results with object IDs and trajectories
    """
```

**Available Trackers:**

- **ByteTrack**: High-performance tracker focusing on association accuracy
- **BoTSORT**: Combines detection and ReID features for robust tracking

**Usage Examples:**

```python
from ultralytics import YOLO

# Load a detection model
model = YOLO("yolo11n.pt")

# Track objects in video
results = model.track(source="video.mp4", show=True, save=True)

# Track with custom tracker
results = model.track(
    source="video.mp4",
    tracker="botsort.yaml",
    conf=0.3,
    iou=0.5,
)

# Track from webcam
results = model.track(source=0, show=True)

# Stream tracking with persistence
for result in model.track(source="video.mp4", stream=True, persist=True):
    # Process each frame
    if result.boxes is not None:
        track_ids = result.boxes.id.cpu().numpy() if result.boxes.id is not None else []
        boxes = result.boxes.xyxy.cpu().numpy()

        for track_id, box in zip(track_ids, boxes):
            print(f"Track ID: {track_id}, Box: {box}")

    # Display frame
    result.show()
```

### Tracking Configuration

Customize tracking behavior through YAML configuration files.

#### ByteTrack Configuration (`bytetrack.yaml`)

```yaml
tracker_type: bytetrack
track_high_thresh: 0.5  # High-confidence detection threshold
track_low_thresh: 0.1   # Low-confidence detection threshold
new_track_thresh: 0.6   # New track confirmation threshold
track_buffer: 30        # Number of frames to keep lost tracks
match_thresh: 0.8       # Matching threshold for association
min_box_area: 10        # Minimum bounding box area
mot20: False            # Use MOT20 evaluation protocol
```
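ByteTrack associates high-confidence detections with tracks first and then tries to recover unmatched tracks from the low-confidence remainder, so `track_low_thresh` must stay below `track_high_thresh`. A small sanity check for hand-edited configs — a hypothetical helper, not part of the tracker API:

```python
def check_tracker_cfg(cfg):
    """Return a list of problems found in a tracker config dict (hypothetical helper)."""
    problems = []
    if cfg.get("track_low_thresh", 0) >= cfg.get("track_high_thresh", 1):
        problems.append("track_low_thresh should be below track_high_thresh")
    if not 0 < cfg.get("match_thresh", 0.8) <= 1:
        problems.append("match_thresh should lie in (0, 1]")
    if cfg.get("track_buffer", 30) < 1:
        problems.append("track_buffer should be at least 1 frame")
    return problems

# The defaults above pass cleanly
cfg = {"track_high_thresh": 0.5, "track_low_thresh": 0.1,
       "match_thresh": 0.8, "track_buffer": 30}
assert check_tracker_cfg(cfg) == []
```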

#### BoTSORT Configuration (`botsort.yaml`)

```yaml
tracker_type: botsort
track_high_thresh: 0.5    # High-confidence detection threshold
track_low_thresh: 0.1     # Low-confidence detection threshold
new_track_thresh: 0.6     # New track confirmation threshold
track_buffer: 30          # Number of frames to keep lost tracks
match_thresh: 0.8         # Matching threshold for association
gmc_method: sparseOptFlow # Global (camera) motion compensation method
proximity_thresh: 0.5     # Spatial proximity threshold
appearance_thresh: 0.25   # Appearance similarity threshold
with_reid: False          # Use ReID features
```

**Custom Tracker Configuration:**

```python
import yaml

# Create custom tracker config
tracker_config = {
    'tracker_type': 'bytetrack',
    'track_high_thresh': 0.6,
    'track_low_thresh': 0.2,
    'new_track_thresh': 0.7,
    'track_buffer': 50,
    'match_thresh': 0.9,
}

# Save config to YAML file
with open('custom_tracker.yaml', 'w') as f:
    yaml.dump(tracker_config, f)

# Use custom config
results = model.track(source="video.mp4", tracker="custom_tracker.yaml")
```

### Tracking Results Processing

Access and process tracking results, including object IDs and trajectories.

```python { .api }
class Results:
    def __init__(self):
        self.boxes: Optional[Boxes] = None  # Detection boxes with tracking IDs

class Boxes:
    def __init__(self):
        self.id: Optional[torch.Tensor] = None  # Track IDs
        self.xyxy: torch.Tensor = None          # Bounding boxes
        self.conf: torch.Tensor = None          # Confidence scores
        self.cls: torch.Tensor = None           # Class predictions
```

**Usage Examples:**

```python
# Process tracking results
results = model.track(source="video.mp4")

for frame_idx, result in enumerate(results):
    if result.boxes is not None and result.boxes.id is not None:
        # Extract tracking information
        track_ids = result.boxes.id.cpu().numpy()
        boxes = result.boxes.xyxy.cpu().numpy()
        confidences = result.boxes.conf.cpu().numpy()
        classes = result.boxes.cls.cpu().numpy()

        print(f"Frame {frame_idx}:")
        for i, track_id in enumerate(track_ids):
            x1, y1, x2, y2 = boxes[i]
            conf = confidences[i]
            cls = classes[i]

            print(f"  Track ID: {track_id}, Class: {cls}, "
                  f"Conf: {conf:.2f}, Box: [{x1:.1f}, {y1:.1f}, {x2:.1f}, {y2:.1f}]")
```

### Trajectory Analysis

Analyze object trajectories and movement patterns.

```python
class TrajectoryAnalyzer:
    def __init__(self):
        self.tracks = {}  # Store trajectory data per track ID

    def update(self, result):
        """Update trajectory data with new frame results."""
        if result.boxes is not None and result.boxes.id is not None:
            track_ids = result.boxes.id.cpu().numpy()
            boxes = result.boxes.xyxy.cpu().numpy()

            for track_id, box in zip(track_ids, boxes):
                if track_id not in self.tracks:
                    self.tracks[track_id] = []

                # Calculate center point
                center_x = (box[0] + box[2]) / 2
                center_y = (box[1] + box[3]) / 2
                self.tracks[track_id].append((center_x, center_y))

    def get_trajectory(self, track_id):
        """Get complete trajectory for a track ID."""
        return self.tracks.get(track_id, [])

    def calculate_speed(self, track_id, fps=30):
        """Calculate average speed for a track in pixels per second."""
        trajectory = self.get_trajectory(track_id)
        if len(trajectory) < 2:
            return 0

        total_distance = 0
        for i in range(1, len(trajectory)):
            dx = trajectory[i][0] - trajectory[i - 1][0]
            dy = trajectory[i][1] - trajectory[i - 1][1]
            total_distance += (dx**2 + dy**2) ** 0.5

        # N points span N - 1 frame intervals; convert to pixels per second
        time_duration = (len(trajectory) - 1) / fps
        return total_distance / time_duration if time_duration > 0 else 0

# Usage example
analyzer = TrajectoryAnalyzer()
results = model.track(source="video.mp4", stream=True)

for result in results:
    analyzer.update(result)

# Analyze trajectories periodically
for track_id in analyzer.tracks:
    speed = analyzer.calculate_speed(track_id)
    trajectory = analyzer.get_trajectory(track_id)
    print(f"Track {track_id}: Speed={speed:.1f} px/s, Points={len(trajectory)}")
```
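The speed arithmetic can be sanity-checked in isolation. A minimal standalone sketch with hypothetical helper names, taking the elapsed time as the N − 1 frame intervals between N samples:

```python
import math

def path_length(points):
    """Total Euclidean distance along consecutive (x, y) center points."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def average_speed(points, fps=30):
    """Average speed in pixels/second; N points span N - 1 frame intervals."""
    if len(points) < 2:
        return 0.0
    return path_length(points) / ((len(points) - 1) / fps)

# Two centers one frame apart and 5 px apart: 5 px per 1/30 s, i.e. 150 px/s
speed = average_speed([(0.0, 0.0), (3.0, 4.0)], fps=30)
```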

### Advanced Tracking Features

#### Region of Interest (ROI) Tracking

```python
import cv2
import numpy as np

def track_in_roi(model, source, roi_polygon):
    """Track objects only within a specified region."""
    results = model.track(source=source, stream=True)

    for result in results:
        if result.boxes is not None and result.boxes.id is not None:
            boxes = result.boxes.xyxy.cpu().numpy()
            track_ids = result.boxes.id.cpu().numpy()

            for track_id, box in zip(track_ids, boxes):
                # Calculate center point (cast to Python floats; recent OpenCV
                # versions reject numpy scalar point arguments)
                center_x = float((box[0] + box[2]) / 2)
                center_y = float((box[1] + box[3]) / 2)

                # Check if center is inside ROI
                if cv2.pointPolygonTest(roi_polygon, (center_x, center_y), False) >= 0:
                    print(f"Track {track_id} is in ROI")

# Define ROI polygon (example: quadrilateral)
roi = np.array([[100, 100], [500, 100], [500, 400], [100, 400]], np.int32)
track_in_roi(model, "video.mp4", roi)
```
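`cv2.pointPolygonTest` pulls in OpenCV; when only the membership test is needed, a dependency-free ray-casting check works on the same polygon. A minimal sketch (hypothetical helper; assumes a simple, non-self-intersecting polygon):

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting membership test; polygon is a list of (x, y) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal ray at height y?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Same quadrilateral as the ROI example above
roi = [(100, 100), (500, 100), (500, 400), (100, 400)]
```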

#### Cross-Line Counting

```python
class LineCounter:
    def __init__(self, line_start, line_end):
        self.line_start = line_start
        self.line_end = line_end
        self.crossed_tracks = set()
        self.count = 0

    def check_crossing(self, track_id, prev_center, curr_center):
        """Check if a track crossed the counting line."""
        # Segment intersection test via the counter-clockwise (CCW) predicate
        def ccw(A, B, C):
            return (C[1] - A[1]) * (B[0] - A[0]) > (B[1] - A[1]) * (C[0] - A[0])

        def intersect(A, B, C, D):
            return ccw(A, C, D) != ccw(B, C, D) and ccw(A, B, C) != ccw(A, B, D)

        if prev_center is not None:
            if intersect(prev_center, curr_center, self.line_start, self.line_end):
                if track_id not in self.crossed_tracks:
                    self.crossed_tracks.add(track_id)
                    self.count += 1
                    return True
        return False

# Usage example
counter = LineCounter((0, 300), (640, 300))  # Horizontal line at y=300
prev_centers = {}

results = model.track(source="video.mp4", stream=True)
for result in results:
    if result.boxes is not None and result.boxes.id is not None:
        track_ids = result.boxes.id.cpu().numpy()
        boxes = result.boxes.xyxy.cpu().numpy()

        for track_id, box in zip(track_ids, boxes):
            center = ((box[0] + box[2]) / 2, (box[1] + box[3]) / 2)

            if counter.check_crossing(track_id, prev_centers.get(track_id), center):
                print(f"Track {track_id} crossed line! Total count: {counter.count}")

            prev_centers[track_id] = center
```
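The counter above only reports that a crossing occurred. When the in/out direction matters too, the sign of the cross product relative to the counting line tells the two sides apart. A standalone sketch (hypothetical `crossing_direction` helper; it checks side changes against the infinite line, which is adequate when the counting line spans the full frame width, as above):

```python
def crossing_direction(prev_pt, curr_pt, line_a, line_b):
    """Return +1 or -1 when the step prev_pt -> curr_pt changes sides of the
    (infinite) line through line_a -> line_b, else 0."""
    def side(p):
        # Cross product of (line_b - line_a) with (p - line_a)
        return (line_b[0] - line_a[0]) * (p[1] - line_a[1]) \
             - (line_b[1] - line_a[1]) * (p[0] - line_a[0])

    s_prev, s_curr = side(prev_pt), side(curr_pt)
    if s_prev == 0 or s_curr == 0 or (s_prev > 0) == (s_curr > 0):
        return 0  # no side change
    return 1 if s_curr > 0 else -1
```

Keeping two counters keyed on the returned sign yields separate "in" and "out" totals with no other changes to the tracking loop.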

## Types

```python { .api }
from typing import List, Optional, Dict, Tuple, Any
import torch
import numpy as np

# Tracking result types
TrackID = int
Trajectory = List[Tuple[float, float]]  # List of (x, y) center points
TrackingResults = List[Results]

# Tracker configuration types
TrackerConfig = Dict[str, Any]
TrackerType = str  # 'bytetrack' or 'botsort'

# Geometry types for ROI and line counting
Point = Tuple[float, float]
Polygon = np.ndarray        # Array of points defining a polygon
Line = Tuple[Point, Point]  # Start and end points of a line

# Enhanced Results class for tracking
class Results:
    boxes: Optional['Boxes']

class Boxes:
    id: Optional[torch.Tensor]  # Track IDs [N]
    xyxy: torch.Tensor          # Bounding boxes [N, 4]
    conf: torch.Tensor          # Confidence scores [N]
    cls: torch.Tensor           # Class predictions [N]
```
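These aliases are ordinary `typing` constructs, so they can annotate downstream helpers directly. A minimal, dependency-free sketch (hypothetical `accumulate` helper, using plain tuples in place of tensors) that folds per-frame `(track_id, xyxy)` pairs into a `Dict[TrackID, Trajectory]`:

```python
from typing import Dict, List, Tuple

TrackID = int
Point = Tuple[float, float]
Trajectory = List[Point]
Box = Tuple[float, float, float, float]  # xyxy

def accumulate(frames: List[List[Tuple[TrackID, Box]]]) -> Dict[TrackID, Trajectory]:
    """Fold per-frame (track_id, box) pairs into per-track center trajectories."""
    tracks: Dict[TrackID, Trajectory] = {}
    for frame in frames:
        for tid, (x1, y1, x2, y2) in frame:
            tracks.setdefault(tid, []).append(((x1 + x2) / 2, (y1 + y2) / 2))
    return tracks

frames = [
    [(1, (0, 0, 10, 10)), (2, (20, 20, 30, 30))],  # frame 0
    [(1, (2, 0, 12, 10))],                          # frame 1: track 2 lost
]
tracks = accumulate(frames)
assert tracks[1] == [(5.0, 5.0), (7.0, 5.0)]
```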