# Common Types and Data Structures
 
Core type definitions, enums, and data structures shared across all Google Cloud Vision AI services. These types provide the foundation for video analytics, streaming, asset management, and application configuration.
 
## Core Data Types
 
### Resource Management Types
 
```python { .api }
class OperationMetadata:
    """Metadata for long-running operations."""
    create_time: Timestamp        # Operation creation time
    end_time: Timestamp           # Operation completion time
    target: str                   # Target resource for operation
    verb: str                     # Operation verb (create, update, delete, etc.)
    status_message: str           # Human-readable status message
    requested_cancellation: bool  # Whether cancellation was requested
    api_version: str              # API version used for operation

class FieldMask:
    """Specifies fields to update in update operations."""
    paths: List[str]  # List of field paths to update

class Timestamp:
    """Timestamp representation."""
    seconds: int  # Seconds since Unix epoch
    nanos: int    # Nanoseconds component (0-999,999,999)

class Duration:
    """Duration representation."""
    seconds: int  # Duration in seconds
    nanos: int    # Nanoseconds component

class Empty:
    """Empty response type."""
    pass
```
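 
The `seconds`/`nanos` split mirrors the protobuf well-known types, so converting a Python `datetime` into `Timestamp` fields is simple arithmetic. A minimal pure-Python sketch (the helper name is ours, not part of the API):

```python
from datetime import datetime, timezone

def to_timestamp_fields(dt: datetime) -> dict:
    """Split an aware datetime into the seconds/nanos pair used by Timestamp."""
    seconds = int(dt.timestamp())   # whole seconds since the Unix epoch
    nanos = dt.microsecond * 1_000  # microseconds -> nanoseconds (0-999,999,999)
    return {"seconds": seconds, "nanos": nanos}

print(to_timestamp_fields(datetime(2024, 9, 10, tzinfo=timezone.utc)))
# {'seconds': 1725926400, 'nanos': 0}
```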
 
### Storage Source and Search Value Types
 
```python { .api }
class GcsSource:
    """Google Cloud Storage source specification."""
    uris: List[str]  # List of GCS URIs

class StringArray:
    """Array of string values for search criteria."""
    txt_values: List[str]  # String values

class IntArray:
    """Array of integer values for search criteria."""
    int_values: List[int]  # Integer values

class FloatArray:
    """Array of float values for search criteria."""
    float_values: List[float]  # Float values
```
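 
These array wrappers carry the literal values used in warehouse search criteria. A construction sketch, assuming keyword constructors that mirror the field definitions above:

```python
from google.cloud import visionai_v1

# Hypothetical values for string/int/float annotation keys.
locations = visionai_v1.StringArray(txt_values=["entrance", "lobby"])
camera_ids = visionai_v1.IntArray(int_values=[1, 2, 3])
confidences = visionai_v1.FloatArray(float_values=[0.75, 0.9])
```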
 
## Platform-Specific Types
 
### Resource Specifications
 
```python { .api }
class MachineSpec:
    """Machine specification for deployment."""
    machine_type: str  # Machine type (e.g., "n1-standard-4")

class DedicatedResources:
    """Dedicated resource allocation."""
    machine_spec: MachineSpec  # Machine specification
    min_replica_count: int     # Minimum number of replicas
    max_replica_count: int     # Maximum number of replicas
    autoscaling_metric_specs: List[AutoscalingMetricSpec]  # Autoscaling configuration

class AutoscalingMetricSpec:
    """Autoscaling metric specification."""
    metric_name: str  # Name of metric to track
    # Union field (oneof target):
    target: int  # Target value for metric

class AcceleratorType(Enum):
    """Hardware accelerator types."""
    ACCELERATOR_TYPE_UNSPECIFIED = 0
    NVIDIA_TESLA_K80 = 1   # NVIDIA Tesla K80
    NVIDIA_TESLA_P4 = 2    # NVIDIA Tesla P4
    NVIDIA_TESLA_P100 = 3  # NVIDIA Tesla P100
    NVIDIA_TESLA_V100 = 4  # NVIDIA Tesla V100
    NVIDIA_TESLA_T4 = 5    # NVIDIA Tesla T4
    TPU_V2 = 6             # Cloud TPU v2
    TPU_V3 = 7             # Cloud TPU v3
```
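 
These resource types compose: a `DedicatedResources` ties a machine shape to replica bounds and optional autoscaling rules. A sketch assuming keyword constructors matching the fields above (the metric name is illustrative):

```python
from google.cloud import visionai_v1

resources = visionai_v1.DedicatedResources(
    machine_spec=visionai_v1.MachineSpec(machine_type="n1-standard-4"),
    min_replica_count=1,  # keep one replica warm
    max_replica_count=4,  # scale out under load
    autoscaling_metric_specs=[
        visionai_v1.AutoscalingMetricSpec(
            metric_name="aiplatform.googleapis.com/prediction/online/cpu/utilization",  # illustrative
            target=70,  # target 70% utilization
        )
    ],
)
```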
 
### ML Model Configuration Types
 
```python { .api }
class VertexAutoMLVisionConfig:
    """Vertex AutoML Vision configuration."""
    confidence_threshold: float  # Confidence threshold for predictions
    max_predictions: int         # Maximum number of predictions

class VertexAutoMLVideoConfig:
    """Vertex AutoML Video configuration."""
    confidence_threshold: float  # Confidence threshold for predictions
    blocked_labels: List[str]    # Labels to block from predictions
    max_predictions: int         # Maximum number of predictions

class VertexCustomConfig:
    """Custom Vertex AI model configuration."""
    machine_spec: MachineSpec                 # Machine specification
    dedicated_resources: DedicatedResources   # Resource allocation
    post_processing_cloud_function: str       # Post-processing function

class GeneralObjectDetectionConfig:
    """General object detection configuration."""
    pass  # Configuration for general object detection

class PersonBlurConfig:
    """Person blurring configuration."""
    person_blur_type: PersonBlurType  # Type of blurring to apply
    faces_only: bool                  # Whether to blur faces only

class PersonBlurType(Enum):
    """Types of person blurring."""
    PERSON_BLUR_TYPE_UNSPECIFIED = 0
    FULL_BODY = 1  # Blur entire person
    FACE_ONLY = 2  # Blur face only

class OccupancyCountConfig:
    """Occupancy counting configuration."""
    enable_people_counting: bool         # Enable people counting
    enable_dwelling_time_tracking: bool  # Enable dwelling time tracking

class PersonVehicleDetectionConfig:
    """Person and vehicle detection configuration."""
    enable_people_counting: bool   # Enable people counting
    enable_vehicle_counting: bool  # Enable vehicle counting

class PersonalProtectiveEquipmentDetectionConfig:
    """PPE detection configuration."""
    enable_face_coverage_detection: bool   # Detect face coverage
    enable_head_coverage_detection: bool   # Detect head coverage
    enable_hands_coverage_detection: bool  # Detect hand coverage
```
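 
Each model config is a plain settings object attached to a processor; for example, full-body person blurring alongside occupancy counting with dwell-time tracking. A sketch assuming constructors matching the fields above:

```python
from google.cloud import visionai_v1

blur_config = visionai_v1.PersonBlurConfig(
    person_blur_type=visionai_v1.PersonBlurType.FULL_BODY,
    faces_only=False,  # blur the whole body, not just faces
)

occupancy_config = visionai_v1.OccupancyCountConfig(
    enable_people_counting=True,
    enable_dwelling_time_tracking=True,
)
```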
 
### Output Configuration Types
 
```python { .api }
class MediaWarehouseConfig:
    """Media warehouse output configuration."""
    corpus: str    # Target corpus for output
    region: str    # Region for storage
    ttl: Duration  # Time-to-live for stored data

class GcsOutputConfig:
    """Google Cloud Storage output configuration."""
    bucket: str              # Target GCS bucket
    reporting_enabled: bool  # Enable output reporting

class BigQueryConfig:
    """BigQuery output configuration."""
    table: str                                 # Target BigQuery table
    cloud_function_mapping: Dict[str, str]     # Function mappings
    create_default_table_if_not_exists: bool   # Create table if missing
```
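 
Output configs route results to storage; the warehouse config pairs a corpus with a retention `ttl`. A sketch with a hypothetical corpus path, assuming constructors matching the fields above:

```python
from google.cloud import visionai_v1
from google.protobuf import duration_pb2

warehouse_config = visionai_v1.MediaWarehouseConfig(
    corpus="projects/my-project/locations/us-central1/corpora/my-corpus",  # hypothetical path
    ttl=duration_pb2.Duration(seconds=30 * 24 * 3600),  # retain results for 30 days
)
```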
 
## Annotation and Prediction Types
 
### Geometric Types
 
```python { .api }
class NormalizedVertex:
    """Normalized coordinate point (0.0 to 1.0)."""
    x: float  # X coordinate (0.0 to 1.0)
    y: float  # Y coordinate (0.0 to 1.0)

class NormalizedPolygon:
    """Polygon with normalized coordinates."""
    normalized_vertices: List[NormalizedVertex]  # Polygon vertices

class NormalizedPolyline:
    """Polyline with normalized coordinates."""
    normalized_vertices: List[NormalizedVertex]  # Polyline vertices

class GeoCoordinate:
    """Geographic coordinate."""
    latitude: float   # Latitude in degrees
    longitude: float  # Longitude in degrees
```
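 
Because coordinates are normalized to 0.0-1.0, geometry tests are resolution-independent; for example, a ray-casting test for whether a detection centroid falls inside a zone polygon. A pure-Python sketch (the helper is ours, not part of the API):

```python
from typing import List, Tuple

def point_in_polygon(x: float, y: float, vertices: List[Tuple[float, float]]) -> bool:
    """Ray-casting test for a normalized point against polygon vertices."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

zone = [(0.1, 0.1), (0.9, 0.1), (0.9, 0.9), (0.1, 0.9)]  # square active zone
print(point_in_polygon(0.5, 0.5, zone))  # True
```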
 
### Prediction Result Types
 
```python { .api }
class ClassificationPredictionResult:
    """Classification prediction results."""
    classifications: List[Classification]  # Classification results

class Classification:
    """Individual classification result."""
    score: float     # Confidence score
    class_name: str  # Predicted class name

class ObjectDetectionPredictionResult:
    """Object detection prediction results."""
    identified_boxes: List[IdentifiedBox]  # Detected objects

class IdentifiedBox:
    """Detected object with bounding box."""
    entity: Entity                                  # Detected entity information
    normalized_bounding_box: NormalizedBoundingBox  # Bounding box coordinates
    confidence_score: float                         # Detection confidence

class Entity:
    """Detected entity information."""
    entity_id: str     # Entity identifier
    label_string: str  # Human-readable label

class NormalizedBoundingBox:
    """Normalized bounding box coordinates."""
    xmin: float  # Left edge (0.0 to 1.0)
    ymin: float  # Top edge (0.0 to 1.0)
    xmax: float  # Right edge (0.0 to 1.0)
    ymax: float  # Bottom edge (0.0 to 1.0)

class VideoActionRecognitionPredictionResult:
    """Video action recognition results."""
    actions: List[ActionRecognition]  # Recognized actions

class ActionRecognition:
    """Individual action recognition result."""
    action_name: str    # Name of recognized action
    confidence: float   # Recognition confidence
    timespan: TimeSpan  # Time span of action

class TimeSpan:
    """Time span specification."""
    start_time_offset: Duration  # Start time offset
    end_time_offset: Duration    # End time offset

class VideoObjectTrackingPredictionResult:
    """Video object tracking results."""
    objects: List[ObjectTracking]  # Tracked objects

class ObjectTracking:
    """Individual object tracking result."""
    entity: Entity                                  # Tracked entity
    confidence: float                               # Tracking confidence
    track_id: int                                   # Unique track identifier
    normalized_bounding_box: NormalizedBoundingBox  # Object location

class OccupancyCountingPredictionResult:
    """Occupancy counting results."""
    current_count: int                      # Current occupancy count
    dwell_time_info: List[DwellTimeInfo]    # Dwelling time information

class DwellTimeInfo:
    """Dwelling time information."""
    track_id: str                # Track identifier
    zone_id: str                 # Zone identifier
    dwell_start_time: Timestamp  # Dwell start time
    dwell_end_time: Timestamp    # Dwell end time

class PersonalProtectiveEquipmentDetectionOutput:
    """PPE detection results."""
    current_time: Timestamp                     # Detection timestamp
    detected_persons: List[PPEIdentifiedBox]    # Detected persons with PPE

class PPEIdentifiedBox:
    """Person detection with PPE information."""
    box_id: int                                     # Box identifier
    normalized_bounding_box: NormalizedBoundingBox  # Person location
    confidence_score: float                         # Detection confidence
    ppe_entity: List[PPEEntity]                     # PPE entities detected

class PPEEntity:
    """PPE entity information."""
    ppe_label_id: int                        # PPE label identifier
    ppe_label_string: str                    # PPE label description
    ppe_supercategory_label_string: str      # PPE category
    ppe_confidence_score: float              # PPE detection confidence
```
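 
A common post-processing step is filtering detections by confidence and mapping normalized boxes back to pixel coordinates. A pure-Python sketch that works with any objects exposing the documented fields:

```python
def to_pixel_box(box, frame_width: int, frame_height: int) -> tuple:
    """Convert a NormalizedBoundingBox to integer pixel coordinates."""
    return (
        int(box.xmin * frame_width),
        int(box.ymin * frame_height),
        int(box.xmax * frame_width),
        int(box.ymax * frame_height),
    )

def confident_detections(result, threshold: float = 0.7):
    """Yield (label, pixel_box) for detections above a confidence threshold."""
    for box in result.identified_boxes:
        if box.confidence_score >= threshold:
            yield box.entity.label_string, to_pixel_box(
                box.normalized_bounding_box, 1920, 1080  # assumed frame size
            )
```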
 
## Stream and Event Types
 
### Stream Annotation Types
 
```python { .api }
class StreamAnnotation:
    """Individual stream annotation."""
    id: str                      # Annotation identifier
    source_stream: str           # Source stream
    type: StreamAnnotationType   # Annotation type
    # Union field (oneof annotation):
    active_zone_counting_annotation: ActiveZoneCountingAnnotation      # Zone counting
    crossing_line_counting_annotation: CrossingLineCountingAnnotation  # Line crossing
    object_detection_annotation: ObjectDetectionStreamAnnotation       # Object detection
    object_tracking_annotation: ObjectTrackingStreamAnnotation         # Object tracking

class StreamAnnotationType(Enum):
    """Types of stream annotations."""
    STREAM_ANNOTATION_TYPE_UNSPECIFIED = 0
    ACTIVE_ZONE_COUNTING = 1    # Active zone counting
    CROSSING_LINE_COUNTING = 2  # Crossing line counting
    OBJECT_DETECTION = 3        # Object detection
    OBJECT_TRACKING = 4         # Object tracking

class ActiveZoneCountingAnnotation:
    """Active zone counting annotation."""
    counting_line_annotations: List[NormalizedPolyline]  # Counting lines

class CrossingLineCountingAnnotation:
    """Crossing line counting annotation."""
    counting_line_annotations: List[NormalizedPolyline]  # Counting lines

class ObjectDetectionStreamAnnotation:
    """Object detection stream annotation."""
    bounding_box: NormalizedBoundingBox  # Detection bounding box

class ObjectTrackingStreamAnnotation:
    """Object tracking stream annotation."""
    track_id: str                        # Tracking identifier
    bounding_box: NormalizedBoundingBox  # Tracking bounding box
```
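 
A stream annotation pairs an identifier and type with exactly one geometry payload; for example, a horizontal crossing line across the middle of the frame. A sketch assuming constructors matching the definitions above (note that proto-plus renames a field called `type` to `type_` in Python):

```python
from google.cloud import visionai_v1

line = visionai_v1.NormalizedPolyline(
    normalized_vertices=[
        visionai_v1.NormalizedVertex(x=0.0, y=0.5),
        visionai_v1.NormalizedVertex(x=1.0, y=0.5),  # spans the full frame width
    ]
)

annotation = visionai_v1.StreamAnnotation(
    id="entrance-line",         # hypothetical annotation id
    source_stream="my-stream",  # hypothetical stream name
    type_=visionai_v1.StreamAnnotationType.CROSSING_LINE_COUNTING,
    crossing_line_counting_annotation=visionai_v1.CrossingLineCountingAnnotation(
        counting_line_annotations=[line]
    ),
)
```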
 
### App Platform Event Types
 
```python { .api }
class AppPlatformEventBody:
    """Event body for app platform notifications."""
    event_message: str  # Event message content
    event_id: str       # Event identifier

class AppPlatformMetadata:
    """Metadata for app platform operations."""
    application: str  # Application resource path
    instance_id: str  # Instance identifier
    node: str         # Processing node
    processor: str    # Processor resource path

class AppPlatformCloudFunctionRequest:
    """Request for app platform cloud function."""
    annotations: List[AppPlatformEventBody]      # Event annotations
    application_metadata: AppPlatformMetadata    # Application metadata

class AppPlatformCloudFunctionResponse:
    """Response from app platform cloud function."""
    annotations: List[StreamAnnotation]  # Processed annotations
    events: List[AppPlatformEventBody]   # Generated events
```
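 
The request/response pair defines the contract for a post-processing Cloud Function: it receives event annotations plus application metadata, and returns (possibly filtered) annotations and generated events. A schematic handler over a JSON encoding of these messages (field spellings assumed):

```python
def handle_annotations(request_json: dict) -> dict:
    """Schematic post-processor for an AppPlatformCloudFunctionRequest payload."""
    annotations = request_json.get("annotations", [])
    events = []
    for annotation in annotations:
        # Hypothetical rule: raise an event when a message mentions "intrusion".
        if "intrusion" in annotation.get("eventMessage", ""):
            events.append({
                "eventId": "intrusion-alert",
                "eventMessage": "Intrusion detected",
            })
    # Pass annotations through unchanged and attach any generated events.
    return {"annotations": annotations, "events": events}
```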
 
## Data Schema and Search Types
 
### Schema Definition Types
 
```python { .api }
class DataSchema:
    """Schema definition for structured data."""
    key: str  # Schema key identifier
    # Union field (oneof schema_details):
    list_config: ListConfig  # List configuration

class ListConfig:
    """Configuration for list-type data."""
    pass  # List configuration options

class FacetGroup:
    """Faceted search group."""
    facet_id: str                   # Facet identifier
    facet_values: List[FacetValue]  # Facet values

class FacetValue:
    """Individual facet value."""
    value: str      # Facet value
    selected: bool  # Whether value is selected

class FacetProperty:
    """Facet property definition."""
    fixed_range_bucket_spec: FixedRangeBucketSpec    # Fixed range buckets
    custom_range_bucket_spec: CustomRangeBucketSpec  # Custom range buckets
    datetime_bucket_spec: DateTimeBucketSpec         # DateTime buckets
    mapped_fields: List[str]                         # Mapped field names

class FixedRangeBucketSpec:
    """Fixed range bucket specification."""
    bucket_start: float        # Bucket start value
    bucket_granularity: float  # Bucket size
    bucket_count: int          # Number of buckets

class CustomRangeBucketSpec:
    """Custom range bucket specification."""
    endpoints: List[float]  # Custom bucket endpoints

class DateTimeBucketSpec:
    """DateTime bucket specification."""
    granularity: DateTimeBucketSpecGranularity  # Time granularity

class DateTimeBucketSpecGranularity(Enum):
    """DateTime bucket granularities."""
    GRANULARITY_UNSPECIFIED = 0
    YEAR = 1   # Yearly buckets
    MONTH = 2  # Monthly buckets
    DAY = 3    # Daily buckets
    HOUR = 4   # Hourly buckets
```
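 
A `FixedRangeBucketSpec` partitions a numeric axis into `bucket_count` buckets of width `bucket_granularity` starting at `bucket_start`, so the bucket for a value is one integer division away. A pure-Python sketch:

```python
import math

def bucket_index(value: float, start: float, granularity: float, count: int) -> int:
    """Map a value to its fixed-range bucket, clamping to the outer buckets."""
    index = math.floor((value - start) / granularity)
    return max(0, min(index, count - 1))

# Four buckets of width 10 starting at 0: [0,10), [10,20), [20,30), [30,40)
print(bucket_index(27.5, start=0.0, granularity=10.0, count=4))  # 2
```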
 
### Range and Criteria Types
 
```python { .api }
class DateTimeRange:
    """Date and time range specification."""
    start_datetime: Timestamp  # Range start time
    end_datetime: Timestamp    # Range end time

class FloatRange:
    """Floating point number range."""
    start: float  # Range start value
    end: float    # Range end value

class IntRange:
    """Integer number range."""
    start: int  # Range start value
    end: int    # Range end value

class BooleanCriteria:
    """Boolean search criteria."""
    value: bool  # Boolean value to match

class FeatureCriteria:
    """Feature-based search criteria."""
    # Union field (oneof feature):
    image_query: ImageQuery  # Image-based query
    text_query: str          # Text-based query

class ImageQuery:
    """Image-based search query."""
    # Union field (oneof image):
    input_image: bytes  # Raw image data
    asset: str          # Asset containing reference image
```
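 
`FeatureCriteria` and `ImageQuery` are unions: set exactly one member. A construction sketch assuming keyword constructors matching the fields above, with a hypothetical asset path:

```python
from google.cloud import visionai_v1

# Text-based feature criteria.
text_criteria = visionai_v1.FeatureCriteria(text_query="person wearing a red jacket")

# Image-based criteria referencing an existing warehouse asset.
image_criteria = visionai_v1.FeatureCriteria(
    image_query=visionai_v1.ImageQuery(
        asset="projects/my-project/locations/us-central1/corpora/c/assets/a"  # hypothetical
    )
)
```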
 
## Input/Output Specification Types
 
### Processing Node Types
 
```python { .api }
class InputEdge:
    """Input edge connecting processing nodes."""
    parent_node: str    # Parent node name
    parent_output: str  # Parent output name
    child_input: str    # Child input name

class GraphInputChannelSpec:
    """Input channel specification for processing graph."""
    name: str                           # Channel name
    data_type: DataType                 # Channel data type
    accepted_data_type_ids: List[str]   # Accepted data type IDs
    required: bool                      # Whether input is required
    default_value: AttributeValue       # Default value if not provided

class GraphOutputChannelSpec:
    """Output channel specification for processing graph."""
    name: str            # Channel name
    data_type: DataType  # Channel data type

class InstanceResourceInputBindingSpec:
    """Input binding specification for instances."""
    config_type_url: str    # Configuration type URL
    resource_type_url: str  # Resource type URL

class InstanceResourceOutputBindingSpec:
    """Output binding specification for instances."""
    config_type_url: str    # Configuration type URL
    resource_type_url: str  # Resource type URL

class DataType(Enum):
    """Data types for processing channels."""
    DATA_TYPE_UNSPECIFIED = 0
    VIDEO = 1  # Video data
    PROTO = 2  # Protocol buffer data
    IMAGE = 3  # Image data
```
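 
Edges wire a parent node's named output channel to a child node's input channel when assembling an application graph. A sketch with hypothetical node and channel names, assuming a constructor matching the fields above:

```python
from google.cloud import visionai_v1

# Route the stream source node's video output into a detector's video input.
edge = visionai_v1.InputEdge(
    parent_node="stream-source",
    parent_output="video-output",
    child_input="video-input",
)
```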
 
### Custom Processor Types
 
```python { .api }
class CustomProcessorSourceInfo:
    """Source information for custom processors."""
    # Union field (oneof artifact_path):
    vertex_model: str  # Vertex AI model resource
    source_type: CustomProcessorSourceInfoSourceType  # Source type

class CustomProcessorSourceInfoSourceType(Enum):
    """Source types for custom processors."""
    SOURCE_TYPE_UNSPECIFIED = 0
    VERTEX_AUTOML = 1      # Vertex AutoML source
    VERTEX_CUSTOM = 2      # Vertex Custom source
    GENERAL_PROCESSOR = 3  # General processor source

class ApplicationRuntimeInfo:
    """Runtime information for applications."""
    deploy_time: Timestamp  # Deployment time
    # Union field (oneof runtime_info):
    global_output_resources: List[OutputResourceBinding]  # Global output resources

class ApplicationEventDeliveryConfig:
    """Event delivery configuration for applications."""
    channel: str                          # Delivery channel
    minimal_delivery_interval: Duration   # Minimum delivery interval
```
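 
A custom processor records which model artifact it wraps, via the `artifact_path` union plus a source type. A sketch with a hypothetical Vertex model path, assuming names matching the definitions above:

```python
from google.cloud import visionai_v1

source_info = visionai_v1.CustomProcessorSourceInfo(
    vertex_model="projects/my-project/locations/us-central1/models/1234567890",  # hypothetical
    source_type=visionai_v1.CustomProcessorSourceInfoSourceType.VERTEX_CUSTOM,
)
```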
 
## Usage Patterns
 
### Type Usage Examples
 
```python
from google.cloud import visionai_v1

# Example: Creating temporal partition for annotation
temporal_partition = visionai_v1.Partition(
    temporal_partition=visionai_v1.TemporalPartition(
        start_time=visionai_v1.Timestamp(seconds=1725926400),
        end_time=visionai_v1.Timestamp(seconds=1725930000)
    )
)

# Example: Creating search criteria with ranges
search_criteria = [
    visionai_v1.Criteria(
        date_time_range_criteria=visionai_v1.DateTimeRangeCriteria(
            date_time_ranges=[
                visionai_v1.DateTimeRange(
                    start_datetime=visionai_v1.Timestamp(seconds=1725926400),
                    end_datetime=visionai_v1.Timestamp(seconds=1725930000)
                )
            ]
        )
    ),
    visionai_v1.Criteria(
        float_range_criteria=visionai_v1.FloatRangeCriteria(
            float_ranges=[
                visionai_v1.FloatRange(start=0.7, end=1.0)  # High confidence only
            ]
        )
    )
]

# Example: Creating normalized bounding box
bounding_box = visionai_v1.NormalizedBoundingBox(
    xmin=0.1,  # 10% from left
    ymin=0.2,  # 20% from top
    xmax=0.9,  # 90% from left
    ymax=0.8   # 80% from top
)

# Example: Creating update mask
update_mask = visionai_v1.FieldMask(
    paths=["display_name", "description", "labels"]
)
```
 
## Health Monitoring Types
 
```python { .api }
class HealthCheckRequest:
    """Request for health check operation."""
    cluster: str  # Cluster resource path to check

class HealthCheckResponse:
    """Response containing health check results."""
    cluster_info: ClusterInfo  # Detailed cluster health information

class ClusterInfo:
    """Cluster health and status information."""
    cluster_name: str  # Name of the cluster
    cluster_id: str    # Unique cluster identifier
    # Additional health status details and metrics
```
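 
Health checking is a single request/response call against a cluster resource. A sketch assuming the generated client exposes a `HealthCheckServiceClient` with a `health_check` method (verify the method name against your installed library version):

```python
from google.cloud import visionai_v1

client = visionai_v1.HealthCheckServiceClient()  # assumed client class
response = client.health_check(
    request=visionai_v1.HealthCheckRequest(
        cluster="projects/my-project/locations/us-central1/clusters/my-cluster"
    )
)
print(response.cluster_info)
```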
 
These types form the foundation for all operations across the Google Cloud Vision AI package, providing consistent interfaces for video analytics, asset management, streaming operations, and application configuration.