# Client Operations

Core bucket and object operations using the `Minio` class. These operations form the foundation of object storage management, providing CRUD functionality for buckets and objects along with metadata management and basic configuration.

## Capabilities

### Bucket Operations

Create, list, check existence, and remove buckets. Bucket operations also include policy management and metadata configuration.

```python { .api }
def make_bucket(
    self,
    bucket_name: str,
    location: str | None = None,
    object_lock: bool = False
) -> None:
    """
    Create a new bucket.

    Args:
        bucket_name: Name of the bucket to create
        location: AWS region for the bucket (optional)
        object_lock: Enable object lock for compliance (default: False)

    Raises:
        S3Error: If bucket creation fails
    """

def list_buckets(self) -> list[Bucket]:
    """
    List all buckets accessible by the current credentials.

    Returns:
        List of Bucket objects containing name and creation date

    Raises:
        S3Error: If listing fails
    """

def bucket_exists(self, bucket_name: str) -> bool:
    """
    Check if a bucket exists and is accessible.

    Args:
        bucket_name: Name of the bucket to check

    Returns:
        True if bucket exists and is accessible, False otherwise

    Raises:
        S3Error: If check operation fails (other than NotFound)
    """

def remove_bucket(self, bucket_name: str) -> None:
    """
    Remove an empty bucket.

    Args:
        bucket_name: Name of the bucket to remove

    Raises:
        S3Error: If bucket is not empty or removal fails
    """

def get_bucket_policy(self, bucket_name: str) -> str:
    """
    Get bucket policy configuration as JSON string.

    Args:
        bucket_name: Name of the bucket

    Returns:
        JSON string containing bucket policy

    Raises:
        S3Error: If policy retrieval fails
    """

def set_bucket_policy(self, bucket_name: str, policy: str) -> None:
    """
    Set bucket policy configuration.

    Args:
        bucket_name: Name of the bucket
        policy: Policy configuration as JSON string

    Raises:
        S3Error: If policy setting fails
    """

def delete_bucket_policy(self, bucket_name: str) -> None:
    """
    Remove bucket policy configuration.

    Args:
        bucket_name: Name of the bucket

    Raises:
        S3Error: If policy deletion fails
    """
```
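
The policy methods exchange policies as plain JSON strings, so the standard `json` module is all that is needed to build one. A minimal sketch, assuming a local `Minio` instance as `client`; the helper names and the anonymous read-only policy shape are illustrative, not part of the library API:

```python
import json

def read_only_policy(bucket_name: str) -> str:
    """Build a policy JSON string granting anonymous GetObject on all keys."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": ["*"]},
            "Action": ["s3:GetObject"],
            "Resource": [f"arn:aws:s3:::{bucket_name}/*"],
        }],
    })

def apply_read_only(client, bucket_name: str) -> None:
    """Apply the read-only policy; `client` is an initialized Minio instance."""
    client.set_bucket_policy(bucket_name, read_only_policy(bucket_name))
```

Retrieving the policy back with `get_bucket_policy` returns the same kind of JSON string, which can be inspected with `json.loads`.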

### Object Upload Operations

Upload objects to buckets with various options for metadata, encryption, progress tracking, and multipart uploads.

```python { .api }
def put_object(
    self,
    bucket_name: str,
    object_name: str,
    data: io.IOBase,
    length: int = -1,
    content_type: str = "application/octet-stream",
    metadata: dict[str, str] | None = None,
    sse: Sse | None = None,
    progress: ProgressType | None = None,
    part_size: int = 0,
    num_parallel_uploads: int = 3,
    tags: Tags | None = None,
    retention: Retention | None = None,
    legal_hold: bool = False
) -> ObjectWriteResult:
    """
    Upload object data to bucket from a file-like object.

    Args:
        bucket_name: Name of the destination bucket
        object_name: Name of the object to create
        data: File-like object containing data to upload
        length: Size of data (-1 for unknown size, requires data.seek support)
        content_type: MIME type of the object
        metadata: User-defined metadata key-value pairs
        sse: Server-side encryption configuration
        progress: Progress callback function
        part_size: Multipart upload part size (0 for auto)
        num_parallel_uploads: Number of parallel uploads for multipart
        tags: Object tags
        retention: Object retention configuration
        legal_hold: Enable legal hold on object

    Returns:
        ObjectWriteResult with upload details

    Raises:
        S3Error: If upload fails
    """

def fput_object(
    self,
    bucket_name: str,
    object_name: str,
    file_path: str,
    content_type: str | None = None,
    metadata: dict[str, str] | None = None,
    sse: Sse | None = None,
    progress: ProgressType | None = None,
    part_size: int = 0,
    num_parallel_uploads: int = 3,
    tags: Tags | None = None,
    retention: Retention | None = None,
    legal_hold: bool = False
) -> ObjectWriteResult:
    """
    Upload object data to bucket from a file path.

    Args:
        bucket_name: Name of the destination bucket
        object_name: Name of the object to create
        file_path: Path to file to upload
        content_type: MIME type (auto-detected if None)
        metadata: User-defined metadata key-value pairs
        sse: Server-side encryption configuration
        progress: Progress callback function
        part_size: Multipart upload part size (0 for auto)
        num_parallel_uploads: Number of parallel uploads for multipart
        tags: Object tags
        retention: Object retention configuration
        legal_hold: Enable legal hold on object

    Returns:
        ObjectWriteResult with upload details

    Raises:
        S3Error: If upload fails
    """
```
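
Because `put_object` accepts any readable stream, data never has to touch disk. A minimal sketch, assuming an initialized `Minio` instance as `client`; the helper name and metadata key are illustrative:

```python
import io

def upload_text(client, bucket_name: str, object_name: str, text: str):
    """Upload an in-memory string as a UTF-8 text object."""
    payload = text.encode("utf-8")
    return client.put_object(
        bucket_name,
        object_name,
        io.BytesIO(payload),
        length=len(payload),  # known size, so no multipart fallback is needed
        content_type="text/plain; charset=utf-8",
        metadata={"origin": "in-memory"},
    )
```

For streams of unknown size, pass `length=-1` together with an explicit `part_size` so the client can switch to multipart upload.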

### Object Download Operations

Download objects from buckets with support for range requests, encryption, and versioning.

```python { .api }
def get_object(
    self,
    bucket_name: str,
    object_name: str,
    offset: int = 0,
    length: int = 0,
    request_headers: dict[str, str] | None = None,
    ssec: SseCustomerKey | None = None,
    version_id: str | None = None,
    extra_query_params: dict[str, str] | None = None
) -> urllib3.HTTPResponse:
    """
    Get object data from bucket as HTTP response stream.

    Args:
        bucket_name: Name of the source bucket
        object_name: Name of the object to retrieve
        offset: Start byte position for range request (0 for beginning)
        length: Number of bytes to read (0 for entire object)
        request_headers: Additional HTTP headers
        ssec: Server-side encryption key for encrypted objects
        version_id: Specific version to retrieve
        extra_query_params: Additional query parameters

    Returns:
        urllib3.HTTPResponse object for streaming data

    Raises:
        S3Error: If download fails
    """

def fget_object(
    self,
    bucket_name: str,
    object_name: str,
    file_path: str,
    request_headers: dict[str, str] | None = None,
    ssec: SseCustomerKey | None = None,
    version_id: str | None = None,
    extra_query_params: dict[str, str] | None = None,
    tmp_file_path: str | None = None
) -> None:
    """
    Download object data from bucket to a file path.

    Args:
        bucket_name: Name of the source bucket
        object_name: Name of the object to retrieve
        file_path: Path where object will be downloaded
        request_headers: Additional HTTP headers
        ssec: Server-side encryption key for encrypted objects
        version_id: Specific version to retrieve
        extra_query_params: Additional query parameters
        tmp_file_path: Temporary file path for atomic download

    Raises:
        S3Error: If download fails
    """
```
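
Since `get_object` returns a raw `urllib3.HTTPResponse`, the caller owns the connection: read it in chunks, then close it and release it back to the pool. A sketch that computes a SHA-256 digest without buffering the whole object, assuming an initialized `Minio` instance as `client` (the helper name is illustrative):

```python
import hashlib

def sha256_of_object(client, bucket_name: str, object_name: str) -> str:
    """Stream an object and return its hex SHA-256 digest."""
    response = client.get_object(bucket_name, object_name)
    try:
        digest = hashlib.sha256()
        for chunk in response.stream(32 * 1024):  # read in 32 KiB chunks
            digest.update(chunk)
        return digest.hexdigest()
    finally:
        # Always clean up, even on error, or the connection leaks.
        response.close()
        response.release_conn()
```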

### Object Management Operations

Manage existing objects including copying, metadata retrieval, and deletion.

```python { .api }
def stat_object(
    self,
    bucket_name: str,
    object_name: str,
    ssec: SseCustomerKey | None = None,
    version_id: str | None = None,
    extra_query_params: dict[str, str] | None = None
) -> Object:
    """
    Get object metadata without downloading the object.

    Args:
        bucket_name: Name of the bucket
        object_name: Name of the object
        ssec: Server-side encryption key for encrypted objects
        version_id: Specific version to retrieve metadata for
        extra_query_params: Additional query parameters

    Returns:
        Object containing metadata information

    Raises:
        S3Error: If metadata retrieval fails
    """

def copy_object(
    self,
    bucket_name: str,
    object_name: str,
    source: CopySource,
    sse: Sse | None = None,
    metadata: dict[str, str] | None = None,
    tags: Tags | None = None,
    retention: Retention | None = None,
    legal_hold: bool = False
) -> ObjectWriteResult:
    """
    Copy an object from a source object.

    Args:
        bucket_name: Name of the destination bucket
        object_name: Name of the destination object
        source: CopySource specifying source bucket and object
        sse: Server-side encryption for destination
        metadata: Metadata for destination object (replaces source metadata)
        tags: Tags for destination object
        retention: Retention configuration for destination
        legal_hold: Enable legal hold on destination object

    Returns:
        ObjectWriteResult with copy operation details

    Raises:
        S3Error: If copy operation fails
    """

def compose_object(
    self,
    bucket_name: str,
    object_name: str,
    sources: list[ComposeSource],
    sse: Sse | None = None,
    metadata: dict[str, str] | None = None,
    tags: Tags | None = None,
    retention: Retention | None = None,
    legal_hold: bool = False
) -> ObjectWriteResult:
    """
    Create an object by composing multiple source objects.

    Args:
        bucket_name: Name of the destination bucket
        object_name: Name of the destination object
        sources: List of ComposeSource objects to combine
        sse: Server-side encryption for destination
        metadata: Metadata for destination object
        tags: Tags for destination object
        retention: Retention configuration for destination
        legal_hold: Enable legal hold on destination object

    Returns:
        ObjectWriteResult with compose operation details

    Raises:
        S3Error: If compose operation fails
    """

def remove_object(
    self,
    bucket_name: str,
    object_name: str,
    version_id: str | None = None
) -> None:
    """
    Remove an object from a bucket.

    Args:
        bucket_name: Name of the bucket
        object_name: Name of the object to remove
        version_id: Specific version to remove (for versioned buckets)

    Raises:
        S3Error: If removal fails
    """

def remove_objects(
    self,
    bucket_name: str,
    delete_object_list: Iterable[DeleteObject],
    bypass_governance_mode: bool = False
) -> Iterable[DeleteResult]:
    """
    Remove multiple objects from a bucket in a batch operation.

    Args:
        bucket_name: Name of the bucket
        delete_object_list: Iterable of DeleteObject specifications
        bypass_governance_mode: Bypass governance retention mode

    Returns:
        Iterable of DeleteResult objects

    Raises:
        S3Error: If batch removal fails
    """

def prompt_object(
    self,
    bucket_name: str,
    object_name: str,
    prompt: str,
    lambda_arn: str | None = None,
    request_headers: dict[str, str] | None = None,
    ssec: SseCustomerKey | None = None,
    version_id: str | None = None,
    **kwargs: Any
) -> urllib3.HTTPResponse:
    """
    Prompt an object using natural language through an AI model.

    Args:
        bucket_name: Name of the bucket
        object_name: Name of the object to prompt
        prompt: Natural language prompt to interact with the AI model
        lambda_arn: Lambda ARN to use for prompt processing (optional)
        request_headers: Additional HTTP headers
        ssec: Server-side encryption key for encrypted objects
        version_id: Specific version to prompt
        **kwargs: Extra parameters for advanced usage

    Returns:
        urllib3.HTTPResponse object with AI model response

    Raises:
        S3Error: If prompt operation fails
    """
```
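
`stat_object` is the cheap way to inspect an object: it fetches metadata without transferring the body. A small sketch that summarizes an object, assuming an initialized `Minio` instance as `client` (the helper name and the chosen fields are illustrative):

```python
def describe_object(client, bucket_name: str, object_name: str) -> dict:
    """Return a compact metadata summary without downloading the object."""
    obj = client.stat_object(bucket_name, object_name)
    return {
        "size": obj.size,
        "etag": obj.etag,
        "content_type": obj.content_type,
        "last_modified": obj.last_modified,
    }
```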

### Object Listing Operations

List and iterate through objects in buckets with filtering and pagination support. This group also includes `append_object` for extending existing objects.

```python { .api }
def list_objects(
    self,
    bucket_name: str,
    prefix: str | None = None,
    recursive: bool = True,
    start_after: str | None = None,
    include_user_metadata: bool = False,
    include_version: bool = False,
    use_api_v1: bool = False,
    max_keys: int = 1000
) -> Iterable[Object]:
    """
    List objects in a bucket.

    Args:
        bucket_name: Name of the bucket
        prefix: Filter objects by key prefix
        recursive: List objects recursively (True) or only top-level (False)
        start_after: Start listing after this key name
        include_user_metadata: Include user-defined metadata in results
        include_version: Include version information for versioned buckets
        use_api_v1: Use S3 API v1 (for legacy compatibility)
        max_keys: Maximum objects returned per request

    Returns:
        Iterable of Object instances

    Raises:
        S3Error: If listing fails
    """

def append_object(
    self,
    bucket_name: str,
    object_name: str,
    data: BinaryIO,
    length: int,
    chunk_size: int | None = None,
    progress: ProgressType | None = None,
    extra_headers: dict[str, str] | None = None
) -> ObjectWriteResult:
    """
    Append data to an existing object in a bucket.

    Args:
        bucket_name: Name of the bucket
        object_name: Name of the existing object to append to
        data: Binary data stream to append
        length: Size of data to append
        chunk_size: Chunk size for optimized uploads
        progress: Progress callback function
        extra_headers: Additional HTTP headers

    Returns:
        ObjectWriteResult with append operation details

    Raises:
        S3Error: If append operation fails
    """
```

### Tags and Metadata Operations

Manage object and bucket tags for organization and billing.

```python { .api }
def set_bucket_tags(self, bucket_name: str, tags: Tags) -> None:
    """
    Set tags on a bucket.

    Args:
        bucket_name: Name of the bucket
        tags: Tags object containing key-value pairs

    Raises:
        S3Error: If tag setting fails
    """

def get_bucket_tags(self, bucket_name: str) -> Tags | None:
    """
    Get tags from a bucket.

    Args:
        bucket_name: Name of the bucket

    Returns:
        Tags object or None if no tags exist

    Raises:
        S3Error: If tag retrieval fails
    """

def delete_bucket_tags(self, bucket_name: str) -> None:
    """
    Remove all tags from a bucket.

    Args:
        bucket_name: Name of the bucket

    Raises:
        S3Error: If tag deletion fails
    """

def set_object_tags(
    self,
    bucket_name: str,
    object_name: str,
    tags: Tags,
    version_id: str | None = None
) -> None:
    """
    Set tags on an object.

    Args:
        bucket_name: Name of the bucket
        object_name: Name of the object
        tags: Tags object containing key-value pairs
        version_id: Specific version to tag (for versioned objects)

    Raises:
        S3Error: If tag setting fails
    """

def get_object_tags(
    self,
    bucket_name: str,
    object_name: str,
    version_id: str | None = None
) -> Tags | None:
    """
    Get tags from an object.

    Args:
        bucket_name: Name of the bucket
        object_name: Name of the object
        version_id: Specific version to get tags from

    Returns:
        Tags object or None if no tags exist

    Raises:
        S3Error: If tag retrieval fails
    """

def delete_object_tags(
    self,
    bucket_name: str,
    object_name: str,
    version_id: str | None = None
) -> None:
    """
    Remove all tags from an object.

    Args:
        bucket_name: Name of the bucket
        object_name: Name of the object
        version_id: Specific version to remove tags from

    Raises:
        S3Error: If tag deletion fails
    """
```
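
`get_object_tags` returns `None` when an object has no tags, so callers usually normalize the result. Assuming `Tags` behaves like a mapping (which the key-value usage above suggests), a plain-`dict` view is a one-liner; the helper name is illustrative and `client` is an initialized `Minio` instance:

```python
def object_tags_as_dict(client, bucket_name: str, object_name: str) -> dict[str, str]:
    """Return an object's tags as a plain dict, or {} when untagged."""
    tags = client.get_object_tags(bucket_name, object_name)
    return dict(tags) if tags is not None else {}
```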
584
585
### Legal Hold Operations
586
587
Manage legal hold status on objects for compliance and regulatory requirements.
588
589
```python { .api }
590
def enable_object_legal_hold(
591
self,
592
bucket_name: str,
593
object_name: str,
594
version_id: str | None = None
595
) -> None:
596
"""
597
Enable legal hold on an object.
598
599
Args:
600
bucket_name: Name of the bucket
601
object_name: Name of the object
602
version_id: Specific version to enable legal hold on
603
604
Raises:
605
S3Error: If legal hold enable fails
606
"""
607
608
def disable_object_legal_hold(
609
self,
610
bucket_name: str,
611
object_name: str,
612
version_id: str | None = None
613
) -> None:
614
"""
615
Disable legal hold on an object.
616
617
Args:
618
bucket_name: Name of the bucket
619
object_name: Name of the object
620
version_id: Specific version to disable legal hold on
621
622
Raises:
623
S3Error: If legal hold disable fails
624
"""
625
626
def is_object_legal_hold_enabled(
627
self,
628
bucket_name: str,
629
object_name: str,
630
version_id: str | None = None
631
) -> bool:
632
"""
633
Check if legal hold is enabled on an object.
634
635
Args:
636
bucket_name: Name of the bucket
637
object_name: Name of the object
638
version_id: Specific version to check
639
640
Returns:
641
True if legal hold is enabled, False otherwise
642
643
Raises:
644
S3Error: If legal hold status check fails
645
"""
646
```
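
The three legal-hold methods combine naturally into check-then-act helpers. A sketch that releases a hold only when one is present, assuming an initialized `Minio` instance as `client` (the helper name is illustrative):

```python
def release_legal_hold(client, bucket_name: str, object_name: str) -> bool:
    """Disable legal hold if it is enabled; return True when a change was made."""
    if client.is_object_legal_hold_enabled(bucket_name, object_name):
        client.disable_object_legal_hold(bucket_name, object_name)
        return True
    return False
```

Note that even after a hold is released, retention settings on the object may still prevent deletion.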

## Types

### Bucket and Object Types

```python { .api }
class Bucket:
    """Container for bucket information."""
    def __init__(self, name: str, creation_date: datetime.datetime | None = None) -> None: ...
    name: str
    creation_date: datetime.datetime | None

class Object:
    """Container for object information and metadata."""
    bucket_name: str | None
    object_name: str | None
    last_modified: datetime.datetime | None
    etag: str | None
    size: int | None
    content_type: str | None
    is_dir: bool
    version_id: str | None
    is_latest: bool
    is_delete_marker: bool
    storage_class: str | None
    owner_id: str | None
    owner_name: str | None
    tags: Tags | None

class ObjectWriteResult:
    """Result container for object write operations."""
    def __init__(
        self,
        bucket_name: str,
        object_name: str,
        etag: str,
        version_id: str | None = None,
        location: str | None = None
    ) -> None: ...
    bucket_name: str
    object_name: str
    etag: str
    version_id: str | None
    location: str | None
```

### Delete Operation Types

```python { .api }
class DeleteObject:
    """Specification for object deletion."""
    def __init__(self, name: str, version_id: str | None = None) -> None: ...
    name: str
    version_id: str | None

class DeleteResult:
    """Result of a batch delete operation."""
    deleted_objects: list[DeleteObject]
    error_objects: list[DeleteError]

class DeleteError:
    """Error information for failed deletions."""
    code: str
    message: str
    object_name: str
    version_id: str | None
```

## Usage Examples

### Basic Bucket and Object Operations

```python
from minio import Minio
from minio.error import S3Error

# secure=False for a local, non-TLS MinIO deployment
client = Minio("localhost:9000", access_key="minio", secret_key="minio123", secure=False)

# Create bucket
try:
    client.make_bucket("my-photos")
    print("Bucket created successfully")
except S3Error as e:
    print(f"Error: {e}")

# Upload file
try:
    result = client.fput_object(
        "my-photos",
        "vacation/beach.jpg",
        "/home/user/photos/beach.jpg",
        content_type="image/jpeg"
    )
    print(f"Upload successful: {result.etag}")
except S3Error as e:
    print(f"Upload failed: {e}")

# Download file
try:
    client.fget_object(
        "my-photos",
        "vacation/beach.jpg",
        "/tmp/downloaded-beach.jpg"
    )
    print("Download successful")
except S3Error as e:
    print(f"Download failed: {e}")
```

### Batch Operations

```python
from minio.commonconfig import Tags
from minio.deleteobjects import DeleteObject

# Tag a bucket for organization and billing
bucket_tags = Tags.new_bucket_tags()
bucket_tags["Environment"] = "Production"
bucket_tags["Team"] = "DevOps"

client.set_bucket_tags("my-bucket", bucket_tags)

# Batch delete objects
delete_objects = [
    DeleteObject("old-file1.txt"),
    DeleteObject("old-file2.txt"),
    DeleteObject("archived/old-file3.txt")
]

for result in client.remove_objects("my-bucket", delete_objects):
    if result.error_objects:
        for error in result.error_objects:
            print(f"Failed to delete {error.object_name}: {error.message}")
    for obj in result.deleted_objects:
        print(f"Successfully deleted: {obj.name}")
```