```
tessl install tessl/golang-cloud-google-com--go--logging@1.13.0
```

Cloud Logging client library for Go that enables writing log entries to the Google Cloud Logging service, with buffered asynchronous and synchronous logging capabilities.
This document describes the architecture and design of the cloud.google.com/go/logging package, explaining how the different components work together and when to use each one.
The Cloud Logging library is organized into three main packages:
`cloud.google.com/go/logging`

Purpose: Writing log entries to Cloud Logging
Primary Use Case: Application logging, metrics collection, and structured logging
Key Components:
- `Client` - Manages the connection to the Cloud Logging service
- `Logger` - Writes log entries to a specific log
- `Entry` - Represents a single log entry with payload, severity, and metadata

Design Philosophy:
`cloud.google.com/go/logging/logadmin`

Purpose: Reading logs and managing logging infrastructure
Primary Use Case: Log analysis, debugging, infrastructure management, compliance
Key Components:
- `Client` - Separate admin client with read/admin permissions
- `Sink` - Exports logs to external destinations (Cloud Storage, BigQuery, Pub/Sub)
- `Metric` - Creates logs-based metrics from log data

Design Philosophy:
`cloud.google.com/go/logging/apiv2` (Advanced)

Purpose: Direct gRPC access to the Cloud Logging API
Primary Use Case: Advanced scenarios requiring fine-grained control
Note: This package contains auto-generated gRPC client code. Most users should use the main logging and logadmin packages instead. The apiv2 package is only needed for:
The main logging package internally uses apiv2 clients but provides a much more ergonomic interface.
The main logging package uses sophisticated buffering to achieve high throughput:
Buffering Mechanism:
- `Logger.Log(entry)` immediately adds the entry to an in-memory buffer and returns

Concurrency:
- Background goroutines send buffered entries in batches; the degree of parallelism is controlled by the `ConcurrentWriteLimit` option

Buffer Management:
- `BufferedByteLimit` controls the total memory used for buffering (default: 1 GiB)
- Buffered entries are flushed via `Logger.Flush()` or automatically on `Client.Close()`

Synchronous Alternative:
- `Logger.LogSync()` bypasses buffering entirely

The library automatically detects the appropriate monitored resource based on the runtime environment:
Detection Order:
- Compute Engine → `gce_instance` resource
- Kubernetes Engine → `k8s_container` resource
- App Engine → `gae_app` resource
- Cloud Run → `cloud_run_revision` resource
- Cloud Functions → `cloud_function` resource
- Fallback → `global` resource type

Override Detection:
- Use the `CommonResource()` logger option to specify a custom monitored resource for all entries from a logger
- Set the `Entry.Resource` field to override the resource for a single entry

Errors in asynchronous logging are handled via a callback:
Client.OnError Callback:
```go
client.OnError = func(err error) {
    // Called when logging errors occur
}
```

Error Types:
- `ErrOverflow` - Buffer capacity exceeded; entries are dropped
- `ErrOversizedEntry` - A single entry exceeds the size limit

Callback Behavior:
- If `OnError` is nil, errors are written to the standard logger via `log.Printf()`

Synchronous Error Handling:
- `LogSync()` returns errors directly to the caller

The library automatically extracts distributed tracing context from HTTP requests:
Trace Extraction Flow:
- When `Entry.HTTPRequest.Request` is provided, the library reads the trace context from the incoming request's headers (the `X-Cloud-Trace-Context` header)
- It then populates `Entry.Trace`, `Entry.SpanID`, and `Entry.TraceSampled` automatically

Integration with Cloud Trace:
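As a stdlib-only sketch of this extraction: the `X-Cloud-Trace-Context` header has the shape `TRACE_ID/SPAN_ID;o=1`, and the trace ID is expanded into a fully qualified trace name. The helper name and simplified parsing below are illustrative, not the library's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// parseXCloudTraceContext approximates how the library maps an
// X-Cloud-Trace-Context header ("TRACE_ID/SPAN_ID;o=1") onto the
// Entry.Trace, Entry.SpanID, and Entry.TraceSampled fields.
func parseXCloudTraceContext(projectID, header string) (trace, spanID string, sampled bool) {
	i := strings.Index(header, "/")
	if i < 0 {
		return "", "", false
	}
	traceID := header[:i]
	rest := header[i+1:]
	if j := strings.Index(rest, ";o="); j >= 0 {
		spanID = rest[:j]
		sampled = rest[j+3:] == "1"
	} else {
		spanID = rest
	}
	// Entry.Trace uses the fully qualified trace resource name.
	trace = fmt.Sprintf("projects/%s/traces/%s", projectID, traceID)
	return trace, spanID, sampled
}

func main() {
	trace, span, sampled := parseXCloudTraceContext("my-project",
		"105445aa7843bc8bf206b120001000/1;o=1")
	fmt.Println(trace, span, sampled)
	// prints: projects/my-project/traces/105445aa7843bc8bf206b120001000 1 true
}
```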
- `Entry.Trace` uses the fully qualified format `projects/PROJECT_ID/traces/TRACE_ID`, which links log entries to traces in Cloud Trace (`tracing.googleapis.com`)

For applications logging thousands of entries per second:
```go
client, _ := logging.NewClient(ctx, "my-project")
logger := client.Logger("high-volume-log",
    logging.ConcurrentWriteLimit(10),             // 10 parallel write goroutines
    logging.EntryCountThreshold(500),             // Smaller batches
    logging.DelayThreshold(500*time.Millisecond), // Faster flush
)

// Non-blocking, buffered logging
for event := range eventStream {
    logger.Log(logging.Entry{
        Payload:  event,
        Severity: logging.Info,
    })
}
```

For errors that must be delivered immediately:
```go
// Asynchronous logging for normal events
logger.Log(logging.Entry{
    Payload:  "processing request",
    Severity: logging.Info,
})

// Synchronous logging for critical errors
err := logger.LogSync(ctx, logging.Entry{
    Payload:  "database connection lost",
    Severity: logging.Critical,
})
if err != nil {
    // Fallback: log to a local file, send an alert, etc.
}
```

For services that need consistent metadata across all logs:
```go
logger := client.Logger("service-log",
    logging.CommonLabels(map[string]string{
        "service":     "api-server",
        "environment": "production",
        "version":     "1.2.3",
    }),
    logging.CommonResource(&mrpb.MonitoredResource{
        Type: "k8s_container",
        Labels: map[string]string{
            "project_id":     "my-project",
            "cluster_name":   "prod-cluster",
            "namespace_name": "default",
            "pod_name":       os.Getenv("POD_NAME"),
        },
    }),
)
```

For analyzing logs and exporting to data warehouses:
```go
// Use the logadmin package for reading and management
adminClient, _ := logadmin.NewClient(ctx, "my-project")

// Create a sink to export error logs to BigQuery
sink := &logadmin.Sink{
    ID:          "errors-to-bigquery",
    Destination: "bigquery.googleapis.com/projects/my-project/datasets/logs",
    Filter:      "severity >= ERROR",
}
adminClient.CreateSink(ctx, sink)

// Create a metric to count critical errors
metric := &logadmin.Metric{
    ID:          "critical-error-count",
    Description: "Count of critical errors",
    Filter:      "severity >= CRITICAL",
}
adminClient.CreateMetric(ctx, metric)

// Query recent error logs
iter := adminClient.Entries(ctx,
    logadmin.Filter("severity >= ERROR"),
    logadmin.NewestFirst(),
)
```

Use the `logging` package when you need to write log entries from an application.

Use the `logadmin` package when you need to read logs or manage sinks and metrics.

Use the `apiv2` package when you need direct, fine-grained control over the gRPC API.

Recommendation: Start with the `logging` and `logadmin` packages. Only use `apiv2` if you have specific advanced requirements.
Asynchronous Logging (`Log` method): entries are buffered and written in the background; errors are reported via the `OnError` callback.

Synchronous Logging (`LogSync` method): blocks until the entry is written and returns any error directly.

Best Practice: Use `Log()` for 99% of logging; reserve `LogSync()` for critical errors.
Entry Iteration: `logadmin.Entries()` returns an iterator; call `Next()` repeatedly until it returns `iterator.Done`.

Resource Listing (Sinks, Metrics, Logs): the `Sinks()`, `Metrics()`, and `Logs()` methods return iterators that follow the same pattern.