
# Project Operations

Project-level operations including version management, training, image uploads, annotation handling, and search functionality. Projects represent individual computer vision tasks with their datasets, annotations, and trained models.

## Capabilities

### Project Class

Main interface for project-level operations and dataset management.

```python { .api }
class Project:
    def __init__(self, api_key: str, a_project: dict, model_format: Optional[str] = None):
        """
        Initialize project object.

        Parameters:
        - api_key: str - Roboflow API key
        - a_project: dict - Project information from API
        - model_format: str, optional - Preferred model format
        """
```

### Version Management

Functions for managing dataset versions within a project.

```python { .api }
def get_version_information(self):
    """
    Get information about all versions in the project.

    Returns:
    dict - Version information including counts and metadata
    """

def list_versions(self):
    """
    List all versions in the project.

    Returns:
    list - Version information dictionaries
    """

def versions(self):
    """
    Get list of version objects.

    Returns:
    list - List of Version objects for the project
    """

def version(self, version_number: int, local: Optional[str] = None):
    """
    Access a specific version by number.

    Parameters:
    - version_number: int - Version number to access
    - local: str, optional - Local path for version data

    Returns:
    Version object for the specified version
    """

def generate_version(self, settings):
    """
    Create a new version with specified settings.

    Parameters:
    - settings: dict - Version generation settings including augmentation options

    Returns:
    Version object for the newly created version
    """
```

### Model Training

Train machine learning models on project datasets.

```python { .api }
def train(self, new_version_settings=None, speed=None, checkpoint=None, plot_in_notebook=False):
    """
    Train a model on the project's latest version, or create a new version with the given settings and train on it.

    Parameters:
    - new_version_settings: dict, optional - Settings for creating a new version before training
    - speed: str, optional - Training speed ("fast" for free tier, "accurate" for paid tier)
    - checkpoint: str, optional - Checkpoint to resume training from
    - plot_in_notebook: bool - Whether to display training plots in notebook (default: False)

    Returns:
    Training job information and status
    """
```

### Image Upload

Upload images to the project with optional annotations.

```python { .api }
def check_valid_image(self, image_path: str) -> bool:
    """
    Validate if an image file is in an accepted format.

    Parameters:
    - image_path: str - Path to image file

    Returns:
    bool - True if image format is valid, False otherwise
    """

def upload_image(self, image_path, hosted_image=False, image_id=None, split="train", batch_name=None, tag_names=[], inference=None, overwrite=False):
    """
    Upload an image to the project.

    Parameters:
    - image_path: str - Path to image file
    - hosted_image: bool - Whether image is hosted externally (default: False)
    - image_id: str, optional - Custom ID for the image
    - split: str - Dataset split ("train", "valid", "test")
    - batch_name: str, optional - Batch name for organization
    - tag_names: list[str] - Tags to apply to the image
    - inference: dict, optional - Inference results to attach
    - overwrite: bool - Whether to overwrite existing image (default: False)

    Returns:
    dict - Upload response with image information
    """

def upload(self, image_path, annotation_path=None, hosted_image=False, image_id=None, is_prediction=False, prediction_confidence=None, prediction_classes=None, bbox=None, polygon=None, keypoints=None, is_duplicate=False, batch_name=None, tag_names=[], overwrite=False):
    """
    Upload image with annotation data.

    Parameters:
    - image_path: str - Path to image file
    - annotation_path: str, optional - Path to annotation file
    - hosted_image: bool - Whether image is hosted externally
    - image_id: str, optional - Custom ID for the image
    - is_prediction: bool - Whether annotation is from prediction
    - prediction_confidence: float, optional - Confidence score for predictions
    - prediction_classes: list, optional - Predicted class names
    - bbox: list, optional - Bounding box coordinates
    - polygon: list, optional - Polygon coordinates for segmentation
    - keypoints: list, optional - Keypoint coordinates
    - is_duplicate: bool - Whether image is a duplicate
    - batch_name: str, optional - Batch name for organization
    - tag_names: list[str] - Tags to apply to the image
    - overwrite: bool - Whether to overwrite existing data

    Returns:
    dict - Upload response with image and annotation information
    """
```

### Annotation Management

Manage annotations separately from image uploads.

```python { .api }
def save_annotation(self, image_id, annotation_path=None, is_prediction=False, prediction_confidence=None, prediction_classes=None, image_width=None, image_height=None, overwrite=False):
    """
    Save annotation for an existing image.

    Parameters:
    - image_id: str - ID of the target image
    - annotation_path: str, optional - Path to annotation file
    - is_prediction: bool - Whether annotation is from prediction
    - prediction_confidence: float, optional - Confidence score
    - prediction_classes: list, optional - Predicted class names
    - image_width: int, optional - Image width for coordinate normalization
    - image_height: int, optional - Image height for coordinate normalization
    - overwrite: bool - Whether to overwrite existing annotation

    Returns:
    dict - Annotation save response
    """

def single_upload(self, image_path, annotation_path=None, hosted_image=False, image_id=None, split="train", is_prediction=False, prediction_confidence=None, prediction_classes=None, batch_name=None, tag_names=[], inference=None, overwrite=False):
    """
    Upload single image with annotation in one operation.

    Parameters:
    - image_path: str - Path to image file
    - annotation_path: str, optional - Path to annotation file
    - hosted_image: bool - Whether image is hosted externally
    - image_id: str, optional - Custom ID for the image
    - split: str - Dataset split ("train", "valid", "test")
    - is_prediction: bool - Whether annotation is from prediction
    - prediction_confidence: float, optional - Confidence score
    - prediction_classes: list, optional - Predicted class names
    - batch_name: str, optional - Batch name for organization
    - tag_names: list[str] - Tags to apply to the image
    - inference: dict, optional - Inference results to attach
    - overwrite: bool - Whether to overwrite existing data

    Returns:
    dict - Combined upload response
    """
```
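The `save_annotation` workflow pairs an existing image ID with a label file on disk. The helper below is a hypothetical sketch, not part of the SDK: it derives a YOLO-style `.txt` label path from an image filename, which you could then pass as `annotation_path` to `save_annotation`.

```python
from pathlib import Path

def label_path_for(image_path: str, labels_dir: str) -> str:
    # Hypothetical helper: map an image file to a same-stem .txt label file,
    # the usual layout for YOLO-format annotation exports.
    return (Path(labels_dir) / (Path(image_path).stem + ".txt")).as_posix()

# Assuming `annotated` maps image IDs to their source image paths:
# for image_id, image_path in annotated.items():
#     project.save_annotation(
#         image_id,
#         annotation_path=label_path_for(image_path, "/path/to/labels"),
#     )
```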

### Image Search

Search and filter images within the project.

```python { .api }
def search(self, query="", stroke_width=1, limit=100, offset=0, sort_by="created", sort_order="desc", fields=["id", "created", "name", "labels"]):
    """
    Search images in the project with filtering options.

    Parameters:
    - query: str - Search query string
    - stroke_width: int - Visualization stroke width
    - limit: int - Maximum number of results (default: 100)
    - offset: int - Result offset for pagination (default: 0)
    - sort_by: str - Field to sort by ("created", "name", etc.)
    - sort_order: str - Sort order ("asc", "desc")
    - fields: list[str] - Fields to return in results

    Returns:
    dict - Search results with image metadata
    """

def search_all(self, query="", stroke_width=1, sort_by="created", sort_order="desc", fields=["id", "created", "name", "labels"]):
    """
    Search all images in the project without pagination limits.

    Parameters:
    - query: str - Search query string
    - stroke_width: int - Visualization stroke width
    - sort_by: str - Field to sort by
    - sort_order: str - Sort order ("asc", "desc")
    - fields: list[str] - Fields to return in results

    Returns:
    dict - Complete search results
    """
```
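`search` returns at most `limit` results per call, so walking a large project by hand means advancing `offset` until a short page comes back. A generic offset-paging sketch follows; `fetch` stands in for any wrapper around `project.search`, and how you extract the result list from the response dict is an assumption, since the exact response shape is not specified here.

```python
def paginate(fetch, limit=100):
    # Yield successive pages from an offset-based endpoint until a short
    # (or empty) page signals there are no more results.
    offset = 0
    while True:
        page = fetch(limit=limit, offset=offset)
        if not page:
            break
        yield page
        if len(page) < limit:
            break
        offset += limit

# Example with a stand-in fetch over a local list:
data = list(range(250))
pages = list(paginate(lambda limit, offset: data[offset:offset + limit], limit=100))
# pages holds three chunks: 100, 100, and 50 items
```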

## Usage Examples

### Basic Project Operations

```python
import roboflow

rf = roboflow.Roboflow(api_key="your_api_key")
project = rf.workspace().project("my-project")

# Get project versions
versions = project.versions()
print(f"Project has {len(versions)} versions")

# Access specific version
version = project.version(1)

# Create new version with augmentations
new_version = project.generate_version({
    "preprocessing": {"auto-orient": True, "resize": [416, 416]},
    "augmentation": {"flip": "horizontal", "rotate": 15}
})
```

### Training Models

```python
# Train with default settings on the latest version
model = project.train()

# Generate a new version first, then train with custom options
model = project.train(
    new_version_settings={
        "preprocessing": {"auto-orient": True, "resize": [416, 416]},
        "augmentation": {"flip": "horizontal"}
    },
    speed="fast",
    plot_in_notebook=True
)
```

### Image and Annotation Upload

```python
import os

# Simple image upload
response = project.upload_image(
    image_path="/path/to/image.jpg",
    split="train",
    batch_name="My Upload Batch"
)

# Upload with annotation
response = project.upload(
    image_path="/path/to/image.jpg",
    annotation_path="/path/to/annotation.txt",
    batch_name="Annotated Upload"
)

# Bulk upload with annotations
for image_file in os.listdir("/path/to/images"):
    if image_file.endswith(".jpg"):
        image_path = f"/path/to/images/{image_file}"
        annotation_path = f"/path/to/labels/{image_file.replace('.jpg', '.txt')}"

        project.single_upload(
            image_path=image_path,
            annotation_path=annotation_path,
            batch_name="Bulk Upload"
        )
```

### Image Management

Retrieve and manage individual images and their details.

```python { .api }
def image(self, image_id: str):
    """
    Get detailed information about a specific image.

    Parameters:
    - image_id: str - Unique identifier for the image

    Returns:
    dict - Image details including metadata, annotations, and URLs
    """
```

### Batch Management

Manage upload batches for organizing and tracking groups of images.

```python { .api }
def get_batches(self):
    """
    Get all batches associated with the project.

    Returns:
    dict - List of batches with their metadata and status
    """

def get_batch(self, batch_id: str):
    """
    Get detailed information about a specific batch.

    Parameters:
    - batch_id: str - Unique identifier for the batch

    Returns:
    dict - Batch details including images, status, and metadata
    """
```

### Annotation Jobs

Create and manage annotation jobs for labeling workflows.

```python { .api }
def create_annotation_job(self, batch_id: str, annotator_email: str):
    """
    Create an annotation job for a specific batch.

    Parameters:
    - batch_id: str - Batch to be annotated
    - annotator_email: str - Email of the annotator

    Returns:
    dict - Annotation job details and assignment information
    """
```

### Image Search and Management

```python
# Get specific image details
image_info = project.image("image_12345")
print(f"Image size: {image_info['width']}x{image_info['height']}")

# Get all batches
batches = project.get_batches()
for batch in batches['batches']:
    print(f"Batch: {batch['name']} - {batch['image_count']} images")

# Search for images with specific labels
results = project.search(
    query="person car",
    limit=50,
    sort_by="created",
    sort_order="desc"
)

# Get all images for analysis
all_images = project.search_all(
    fields=["id", "name", "labels", "created", "width", "height"]
)
```

## Supported Image Formats

The project accepts the following image formats:

- JPEG (`.jpg`, `.jpeg`)
- PNG (`.png`)
- BMP (`.bmp`)
- WebP (`.webp`)
- TIFF (`.tiff`, `.tif`)
- AVIF (`.avif`)
- HEIC (`.heic`)
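`check_valid_image` is the authoritative validity check, but a rough local equivalent based purely on the extension list above can pre-filter a directory before uploading. A minimal sketch, with a hypothetical helper name:

```python
from pathlib import Path

# Extensions from the supported-formats list above.
ACCEPTED_EXTENSIONS = {
    ".jpg", ".jpeg", ".png", ".bmp", ".webp", ".tiff", ".tif", ".avif", ".heic",
}

def has_accepted_extension(path: str) -> bool:
    # Extension-only pre-filter; check_valid_image should still confirm
    # the actual file contents before upload.
    return Path(path).suffix.lower() in ACCEPTED_EXTENSIONS
```

For example, `[p for p in Path("/path/to/images").iterdir() if has_accepted_extension(str(p))]` yields only candidates worth handing to the upload methods.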

## Error Handling

Project operations can raise several types of exceptions:

```python
from roboflow.adapters.rfapi import ImageUploadError, AnnotationSaveError

try:
    project.upload_image("/invalid/path.jpg")
except ImageUploadError as e:
    print(f"Image upload failed: {e}")

try:
    project.save_annotation("invalid_id", "/path/to/annotation.txt")
except AnnotationSaveError as e:
    print(f"Annotation save failed: {e}")
```