
# Command Line Interface

SAHI provides a comprehensive command-line interface accessible through the `sahi` command. The CLI covers prediction, dataset processing, evaluation, format conversion, and system utilities with extensive configuration options.

## Capabilities

### Main CLI Structure

The SAHI CLI is organized into main commands and sub-commands using the Fire library for automatic CLI generation.

```bash { .api }
# Main command structure
sahi <command> [options]

# Available main commands:
sahi predict           # Run predictions on images/videos
sahi predict-fiftyone  # Run predictions with FiftyOne integration
sahi coco              # COCO dataset operations
sahi version           # Show SAHI version
sahi env               # Show environment information
```

### Prediction Commands

#### Standard Prediction

Run object detection with sliced inference on images, directories, or videos.

```bash { .api }
sahi predict \
  --model_type ultralytics \
  --model_path yolov8n.pt \
  --source path/to/image.jpg \
  --slice_height 640 \
  --slice_width 640 \
  --overlap_height_ratio 0.2 \
  --overlap_width_ratio 0.2 \
  --confidence_threshold 0.25 \
  --device cuda:0 \
  --export_visual True \
  --export_crop False \
  --export_pickle False \
  --project runs/predict \
  --name experiment_1

# Parameters:
# --model_type: Framework type (ultralytics, mmdet, detectron2, huggingface, etc.)
# --model_path: Path to model weights file
# --source: Input path (image, directory, or video)
# --slice_height: Height of each slice in pixels
# --slice_width: Width of each slice in pixels
# --overlap_height_ratio: Vertical overlap between slices (0-1)
# --overlap_width_ratio: Horizontal overlap between slices (0-1)
# --confidence_threshold: Minimum detection confidence (0-1)
# --device: Device for inference (cpu, cuda, cuda:0, etc.)
# --export_visual: Export visualization images (True/False)
# --export_crop: Export cropped detected objects (True/False)
# --export_pickle: Export predictions as pickle files (True/False)
# --project: Base output directory
# --name: Experiment name for output subdirectory
```
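
The slice and overlap parameters determine how many tiles an image is cut into: slices are laid out with a stride of `slice_size * (1 - overlap_ratio)`. A rough sketch of the arithmetic for a hypothetical 2560x1440 input with the settings above (the exact count also depends on SAHI's edge handling):

```shell
# Approximate slice-grid arithmetic for 640x640 slices with 0.2 overlap.
image_w=2560; image_h=1440   # hypothetical input resolution
slice=640
stride=$((slice * 8 / 10))   # 640 * (1 - 0.2) = 512
# roughly ceil((dim - slice) / stride) + 1 slices along each dimension
cols=$(( (image_w - slice + stride - 1) / stride + 1 ))
rows=$(( (image_h - slice + stride - 1) / stride + 1 ))
echo "${cols}x${rows} = $((cols * rows)) slices"
```

Higher overlap increases the slice count and inference time, but reduces the chance of objects being cut at slice borders.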

#### FiftyOne Integration

Run predictions with automatic FiftyOne dataset integration.

```bash { .api }
sahi predict-fiftyone \
  --model_type ultralytics \
  --model_path yolov8n.pt \
  --dataset_json_path dataset.json \
  --image_dir images/ \
  --slice_height 512 \
  --slice_width 512 \
  --dataset_name "my_predictions" \
  --model_name "yolov8_sahi"

# Parameters:
# --dataset_json_path: Path to COCO format dataset JSON
# --image_dir: Directory containing dataset images
# --dataset_name: Name for FiftyOne dataset
# --model_name: Model identifier in FiftyOne
```

### COCO Dataset Operations

Comprehensive COCO dataset processing including slicing, evaluation, error analysis, and format conversion.

#### Dataset Slicing

Slice COCO datasets for improved small object detection performance.

```bash { .api }
sahi coco slice \
  --image_dir images/ \
  --dataset_json_path annotations.json \
  --slice_height 512 \
  --slice_width 512 \
  --overlap_height_ratio 0.2 \
  --overlap_width_ratio 0.2 \
  --min_area_ratio 0.1 \
  --ignore_negative_samples False \
  --output_dir sliced_dataset/ \
  --output_file_name sliced_annotations.json

# Parameters:
# --image_dir: Directory containing dataset images
# --dataset_json_path: Path to COCO format JSON file
# --slice_height: Height of each slice
# --slice_width: Width of each slice
# --overlap_height_ratio: Vertical overlap between slices
# --overlap_width_ratio: Horizontal overlap between slices
# --min_area_ratio: Minimum annotation area ratio to keep (0-1)
# --ignore_negative_samples: Skip slices without annotations (True/False)
# --output_dir: Output directory for sliced dataset
# --output_file_name: Name for output annotation file
```
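
The `--min_area_ratio` filter compares each annotation's area inside a slice to its original area; boxes that retain less than the threshold fraction are dropped from that slice. A minimal sketch with a hypothetical 100x100 box of which only a 10x80 strip falls inside a slice:

```shell
# min_area_ratio check (ratio expressed in percent to stay in integer math)
orig_area=$((100 * 100))      # full annotation: 100x100 px
clipped_area=$((10 * 80))     # portion inside the slice: 10x80 px
min_area_ratio_pct=10         # --min_area_ratio 0.1
if [ $((clipped_area * 100)) -ge $((orig_area * min_area_ratio_pct)) ]; then
  verdict=kept
else
  verdict=dropped
fi
echo "$verdict"   # only 8% of the box survives, below the 10% threshold
```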

#### Model Evaluation

Evaluate detection models on COCO datasets with comprehensive metrics.

```bash { .api }
sahi coco evaluate \
  --dataset_json_path ground_truth.json \
  --result_json_path predictions.json \
  --out_dir evaluation_results/ \
  --type bbox \
  --areas all \
  --max_detections 100 \
  --return_dict False \
  --classwise True

# Parameters:
# --dataset_json_path: Path to ground truth COCO JSON
# --result_json_path: Path to predictions JSON
# --out_dir: Output directory for evaluation results
# --type: Evaluation type (bbox, segm)
# --areas: Area ranges to evaluate (all, small, medium, large)
# --max_detections: Maximum detections per image
# --return_dict: Return results as dictionary (True/False)
# --classwise: Generate per-class metrics (True/False)
```
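
The `--areas` option follows the usual COCO area buckets, which split detections by pixel area at 32² and 96² (an assumption worth verifying against your pycocotools version). For example, a hypothetical 50x40 detection lands in the medium bucket:

```shell
# COCO-style area thresholds (px^2)
small_max=$((32 * 32))     # 1024
medium_max=$((96 * 96))    # 9216
area=$((50 * 40))          # hypothetical 50x40 detection -> 2000 px^2
if [ "$area" -lt "$small_max" ]; then
  bucket=small
elif [ "$area" -lt "$medium_max" ]; then
  bucket=medium
else
  bucket=large
fi
echo "$bucket"
```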

#### Error Analysis

Perform detailed error analysis on detection results to identify failure modes.

```bash { .api }
sahi coco analyse \
  --dataset_json_path ground_truth.json \
  --result_json_path predictions.json \
  --out_dir analysis_results/ \
  --type bbox \
  --classwise True \
  --max_detections 100

# Parameters:
# --dataset_json_path: Path to ground truth annotations
# --result_json_path: Path to model predictions
# --out_dir: Output directory for analysis results
# --type: Analysis type (bbox, segm)
# --classwise: Generate per-class analysis (True/False)
# --max_detections: Maximum detections to analyze per image
```

#### Format Conversion

Convert COCO datasets to other formats like YOLO.

```bash { .api }
sahi coco yolo \
  --coco_annotation_file_path annotations.json \
  --image_dir images/ \
  --output_dir yolo_dataset/ \
  --train_split_rate 0.8 \
  --numpy_seed 42

# Alias command (same functionality):
sahi coco yolov5   # Same as 'yolo'

# Parameters:
# --coco_annotation_file_path: Path to COCO JSON file
# --image_dir: Directory containing images
# --output_dir: Output directory for YOLO format files
# --train_split_rate: Fraction for training set (0-1)
# --numpy_seed: Random seed for reproducible splits
```
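
`--train_split_rate` controls the train/validation split, and `--numpy_seed` makes the shuffle reproducible. For a hypothetical 1,000-image dataset, a 0.8 rate splits roughly as:

```shell
# Train/val split arithmetic for --train_split_rate 0.8
total=1000                  # hypothetical dataset size
rate_pct=80                 # 0.8 expressed in percent
train=$((total * rate_pct / 100))
val=$((total - train))
echo "train=${train} val=${val}"
```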

#### FiftyOne Conversion

Convert COCO datasets to FiftyOne format for advanced visualization and analysis.

```bash { .api }
sahi coco fiftyone \
  --coco_annotation_file_path annotations.json \
  --image_dir images/ \
  --dataset_name "my_dataset" \
  --launch_fiftyone_app True

# Parameters:
# --coco_annotation_file_path: Path to COCO JSON file
# --image_dir: Directory containing images
# --dataset_name: Name for FiftyOne dataset
# --launch_fiftyone_app: Launch FiftyOne web app (True/False)
```

### System Information Commands

#### Version Information

Display SAHI version information.

```bash { .api }
sahi version

# Output: the current SAHI version number
```

#### Environment Information

Display comprehensive environment and dependency information for debugging.

```bash { .api }
sahi env

# Output includes:
# - SAHI version
# - Python version
# - PyTorch version
# - CUDA availability
# - System information
# - Installed dependencies
```

## Usage Examples

### Basic Object Detection

```bash
# Simple prediction on a single image
sahi predict \
  --model_type ultralytics \
  --model_path yolov8n.pt \
  --source image.jpg

# Batch-process an entire directory
sahi predict \
  --model_type ultralytics \
  --model_path yolov8n.pt \
  --source images/ \
  --export_visual True
```

### Advanced Sliced Inference

```bash
# High-resolution satellite imagery
sahi predict \
  --model_type ultralytics \
  --model_path satellite_model.pt \
  --source satellite_image.tif \
  --slice_height 1024 \
  --slice_width 1024 \
  --overlap_height_ratio 0.3 \
  --overlap_width_ratio 0.3 \
  --confidence_threshold 0.2 \
  --device cuda:0 \
  --export_visual True \
  --project satellite_results \
  --name high_res_experiment
```
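
The higher 0.3 overlap used here trades extra compute for fewer objects cut at slice borders; in pixels, each 1024-px slice shares this much with its neighbor:

```shell
# Overlap and stride in pixels for 1024-px slices with 0.3 overlap
slice=1024
overlap_pct=30
overlap_px=$((slice * overlap_pct / 100))   # pixels shared between neighbors
stride=$((slice - overlap_px))              # step between slice origins
echo "overlap=${overlap_px}px stride=${stride}px"
```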

### Video Processing

```bash
# Process video with frame skipping
sahi predict \
  --model_type ultralytics \
  --model_path yolov8n.pt \
  --source video.mp4 \
  --frame_skip_interval 5 \
  --slice_height 640 \
  --slice_width 640 \
  --export_visual True
```
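
Assuming `--frame_skip_interval 5` means one frame in every five is processed (an assumption about the flag's semantics), the effective processing rate for a 30 fps source works out as:

```shell
# Effective frame rate with a skip interval of 5 on a 30 fps video
fps=30                      # hypothetical source frame rate
skip_interval=5
processed_fps=$((fps / skip_interval))
echo "${processed_fps} frames/s processed"
```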

### Dataset Preparation Pipeline

```bash
# 1. Slice the large-image dataset
sahi coco slice \
  --image_dir original_images/ \
  --dataset_json_path annotations.json \
  --slice_height 640 \
  --slice_width 640 \
  --output_dir sliced_dataset/

# 2. Convert to YOLO format for training
sahi coco yolo \
  --coco_annotation_file_path sliced_dataset/sliced_annotations.json \
  --image_dir sliced_dataset/images/ \
  --output_dir yolo_training_data/ \
  --train_split_rate 0.8

# 3. Create a FiftyOne dataset for analysis
sahi coco fiftyone \
  --coco_annotation_file_path sliced_dataset/sliced_annotations.json \
  --image_dir sliced_dataset/images/ \
  --dataset_name "sliced_training_data" \
  --launch_fiftyone_app True
```

### Model Evaluation Workflow

```bash
# 1. Run predictions on the test set
sahi predict \
  --model_type ultralytics \
  --model_path trained_model.pt \
  --source test_images/ \
  --export_format coco \
  --return_dict True \
  --project evaluation \
  --name test_predictions

# 2. Evaluate predictions
sahi coco evaluate \
  --dataset_json_path test_annotations.json \
  --result_json_path evaluation/test_predictions/predictions.json \
  --out_dir evaluation_results/ \
  --classwise True

# 3. Perform error analysis
sahi coco analyse \
  --dataset_json_path test_annotations.json \
  --result_json_path evaluation/test_predictions/predictions.json \
  --out_dir error_analysis/ \
  --classwise True
```

### Multi-Framework Comparison

```bash
# Test different models on the same dataset
for model_type in ultralytics mmdet detectron2; do
  sahi predict \
    --model_type $model_type \
    --model_path models/${model_type}_model.pt \
    --source test_images/ \
    --project comparison \
    --name ${model_type}_results \
    --export_format coco
done

# Evaluate each result
for model_type in ultralytics mmdet detectron2; do
  sahi coco evaluate \
    --dataset_json_path test_annotations.json \
    --result_json_path comparison/${model_type}_results/predictions.json \
    --out_dir comparison/${model_type}_evaluation/
done
```

### Environment Debugging

```bash
# Check system configuration
sahi env

# Verify installation
sahi version

# Test basic functionality
sahi predict \
  --model_type ultralytics \
  --model_path yolov8n.pt \
  --source test_image.jpg \
  --device cpu
```