Describes: golangpkg:golang/cloud.google.com/go/logging@v1.13.1

tessl/golang-cloud-google-com--go--logging

tessl install tessl/golang-cloud-google-com--go--logging@1.13.0

Cloud Logging client library for Go for writing log entries to the Google Cloud Logging service, with support for buffered asynchronous and synchronous logging.

docs/architecture.md

Package Architecture

This document describes the architecture and design of the cloud.google.com/go/logging package, explaining how the different components work together and when to use each one.

Package Structure

The Cloud Logging library is organized into three main packages:

1. Main Package: cloud.google.com/go/logging

Purpose: Writing log entries to Cloud Logging

Primary Use Case: Application logging, metrics collection, and structured logging

Key Components:

  • Client - Manages connection to Cloud Logging service
  • Logger - Writes log entries to a specific log
  • Entry - Represents a single log entry with payload, severity, and metadata
  • Logger configuration options (buffering, concurrency, labels, resources)

Design Philosophy:

  • High throughput - Buffered, asynchronous logging by default
  • Low latency impact - Non-blocking Log() method
  • Flexible payloads - Supports strings, JSON objects, structs, protobuf messages
  • Auto-detection - Automatically detects project ID and monitored resources
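
A minimal sketch of the write path, assuming the placeholder project ID "my-project" and log ID "my-log":

// import "cloud.google.com/go/logging"
ctx := context.Background()
client, err := logging.NewClient(ctx, "my-project")
if err != nil {
    // handle error
}
defer client.Close() // Close flushes any remaining buffered entries

logger := client.Logger("my-log")

// Buffered, non-blocking write
logger.Log(logging.Entry{
    Payload:  "application started",
    Severity: logging.Info,
})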

2. Admin Package: cloud.google.com/go/logging/logadmin

Purpose: Reading logs and managing logging infrastructure

Primary Use Case: Log analysis, debugging, infrastructure management, compliance

Key Components:

  • Client - Separate admin client with read/admin permissions
  • Iterators for reading logs, listing resources
  • Sink - Export logs to external destinations (Cloud Storage, BigQuery, Pub/Sub)
  • Metric - Create logs-based metrics from log data

Design Philosophy:

  • Separation of concerns - Write operations in main package, read/admin operations here
  • Flexible querying - Advanced filter syntax for log searching
  • Infrastructure as code - Programmatic management of sinks, metrics, resources
  • Pagination - Iterator pattern for handling large result sets
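
The same iterator pattern applies to every listing call; as a rough sketch, enumerating a project's sinks (iterator.Done comes from google.golang.org/api/iterator, and the project ID is a placeholder):

adminClient, err := logadmin.NewClient(ctx, "my-project")
if err != nil {
    // handle error
}
defer adminClient.Close()

it := adminClient.Sinks(ctx)
for {
    sink, err := it.Next()
    if err == iterator.Done {
        break // no more sinks
    }
    if err != nil {
        // handle error
        break
    }
    fmt.Println(sink.ID, "->", sink.Destination)
}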

3. Low-Level Package: cloud.google.com/go/logging/apiv2 (Advanced)

Purpose: Direct gRPC access to Cloud Logging API

Primary Use Case: Advanced scenarios requiring fine-grained control

Note: This package contains auto-generated gRPC client code. Most users should use the main logging and logadmin packages instead. The apiv2 package is only needed for:

  • Custom retry logic
  • Non-standard authentication flows
  • Direct protocol buffer manipulation
  • Use cases not covered by high-level clients

The main logging package internally uses apiv2 clients but provides a much more ergonomic interface.
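
For reference, a rough sketch of writing a single entry through the generated client; the import paths below reflect recent releases, and the project, log, and payload values are placeholders:

// vkit      = cloud.google.com/go/logging/apiv2
// loggingpb = cloud.google.com/go/logging/apiv2/loggingpb
// mrpb      = google.golang.org/genproto/googleapis/api/monitoredres
grpcClient, err := vkit.NewClient(ctx)
if err != nil {
    // handle error
}
defer grpcClient.Close()

_, err = grpcClient.WriteLogEntries(ctx, &loggingpb.WriteLogEntriesRequest{
    LogName:  "projects/my-project/logs/my-log",
    Resource: &mrpb.MonitoredResource{Type: "global"},
    Entries: []*loggingpb.LogEntry{
        {Payload: &loggingpb.LogEntry_TextPayload{TextPayload: "hello from apiv2"}},
    },
})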

Core Architectural Patterns

Buffering and Batching

The main logging package uses sophisticated buffering to achieve high throughput:

Buffering Mechanism:

  1. Logger.Log(entry) immediately adds entry to an in-memory buffer and returns
  2. Multiple entries are batched together
  3. Batches are flushed automatically when any threshold is reached:
    • Time threshold (DelayThreshold): Max time entries wait in buffer (default: 1 second)
    • Count threshold (EntryCountThreshold): Max number of entries per batch (default: 1000)
    • Size threshold (EntryByteThreshold): Max total bytes per batch (default: 8 MiB)
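
Each threshold corresponds to a LoggerOption passed to Client.Logger; a minimal sketch with illustrative, non-default values:

logger := client.Logger("buffered-log",
    logging.DelayThreshold(2*time.Second), // flush at least every 2 seconds
    logging.EntryCountThreshold(250),      // or once 250 entries are buffered
    logging.EntryByteThreshold(1<<20),     // or once ~1 MiB is buffered
)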

Concurrency:

  • Configurable write concurrency with ConcurrentWriteLimit option
  • Default: 1 goroutine writes batches sequentially
  • Higher values (e.g., 5) increase throughput for high-volume logging
  • Multiple goroutines process batches in parallel

Buffer Management:

  • BufferedByteLimit controls total memory used for buffering (default: 1 GiB)
  • If the buffer is full, new entries are dropped (triggering the OnError callback with ErrOverflow)
  • Manual flushing available via Logger.Flush() or automatic on Client.Close()
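
BufferedByteLimit is likewise a LoggerOption, and a flush can be forced at any point; for example (the 64 MiB limit is illustrative):

logger := client.Logger("bounded-log",
    logging.BufferedByteLimit(64<<20), // cap buffered memory at ~64 MiB
)

// ... Log() calls ...

// Block until all buffered entries for this logger have been sent.
if err := logger.Flush(); err != nil {
    // handle error
}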

Synchronous Alternative:

  • Logger.LogSync() bypasses buffering entirely
  • Blocks until entry is sent to service
  • Use sparingly (only for critical errors requiring immediate delivery)

Resource Auto-Detection

The library automatically detects the appropriate monitored resource based on the runtime environment:

Detection Order:

  1. Check if running on Google Compute Engine (GCE) → Use gce_instance resource
  2. Check if running on Google Kubernetes Engine (GKE) → Use k8s_container resource
  3. Check if running on Google App Engine (GAE) → Use gae_app resource
  4. Check if running on Cloud Run → Use cloud_run_revision resource
  5. Check if running on Cloud Functions → Use cloud_function resource
  6. Fallback → Use global resource type

Override Detection:

  • Use CommonResource() logger option to specify a custom monitored resource
  • Per-entry override available via Entry.Resource field
  • Useful for logging on behalf of specific resources
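
A per-entry override is set directly on the Entry; a sketch with illustrative GCE instance labels (mrpb is google.golang.org/genproto/googleapis/api/monitoredres):

logger.Log(logging.Entry{
    Payload:  "write attributed to a specific VM",
    Severity: logging.Notice,
    Resource: &mrpb.MonitoredResource{
        Type: "gce_instance",
        Labels: map[string]string{
            "instance_id": "1234567890",
            "zone":        "us-central1-a",
        },
    },
})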

Error Handling Architecture

Errors in asynchronous logging are handled via callback:

Client.OnError Callback:

client.OnError = func(err error) {
    // Called when logging errors occur
}

Error Types:

  • ErrOverflow - Buffer capacity exceeded, entries dropped
  • ErrOversizedEntry - Single entry exceeds size limit
  • Network/RPC errors from Cloud Logging service
  • Validation errors (invalid Entry fields)

Callback Behavior:

  • Never called concurrently (safe to update shared state)
  • Should return quickly (don't block)
  • Default behavior: calls log.Printf()
  • Set callback before creating loggers
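
A callback that distinguishes the sentinel errors could look like the following sketch (the counter is illustrative; mutating it without a lock is safe only because the callback is never called concurrently):

var droppedEntries int

client.OnError = func(err error) {
    switch {
    case errors.Is(err, logging.ErrOverflow):
        droppedEntries++ // buffer was full; this entry was not sent
    case errors.Is(err, logging.ErrOversizedEntry):
        log.Printf("entry too large to send: %v", err)
    default:
        log.Printf("logging error: %v", err)
    }
}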

Synchronous Error Handling:

  • LogSync() returns errors directly
  • Use for critical errors requiring guaranteed delivery confirmation

Trace Context Integration

The library automatically extracts distributed tracing context from HTTP requests:

Trace Extraction Flow:

  1. If Entry.HTTPRequest.Request is provided:
    • Priority 1: Check for OpenTelemetry span context (if instrumented with otelhttp)
    • Priority 2: Parse W3C Traceparent header
    • Priority 3: Parse X-Cloud-Trace-Context header (legacy)
  2. Populate Entry.Trace, Entry.SpanID, Entry.TraceSampled automatically
  3. Manual override: Explicitly set Trace/SpanID fields to skip auto-detection

Integration with Cloud Trace:

  • Trace field format: projects/PROJECT_ID/traces/TRACE_ID
  • Relative format also accepted: automatically prefixed with //tracing.googleapis.com
  • Links log entries to traces in Cloud Trace UI for distributed debugging
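
For example, an HTTP handler can attach the incoming *http.Request and let the library derive the trace fields; a sketch assuming a package-level logger variable:

func handler(w http.ResponseWriter, r *http.Request) {
    logger.Log(logging.Entry{
        Payload:  "handled request",
        Severity: logging.Info,
        HTTPRequest: &logging.HTTPRequest{
            Request: r, // Trace, SpanID, and TraceSampled are filled from r's span context or headers
        },
    })
    w.WriteHeader(http.StatusOK)
}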

Common Architectural Patterns

Pattern 1: High-Throughput Logging

For applications logging thousands of entries per second:

client, _ := logging.NewClient(ctx, "my-project")

logger := client.Logger("high-volume-log",
    logging.ConcurrentWriteLimit(10),       // 10 parallel write goroutines
    logging.EntryCountThreshold(500),        // Smaller batches
    logging.DelayThreshold(500*time.Millisecond), // Faster flush
)

// Non-blocking, buffered logging
for event := range eventStream {
    logger.Log(logging.Entry{
        Payload: event,
        Severity: logging.Info,
    })
}

Pattern 2: Critical Error Logging

For errors that must be delivered immediately:

// Asynchronous logging for normal events
logger.Log(logging.Entry{
    Payload: "processing request",
    Severity: logging.Info,
})

// Synchronous logging for critical errors
err := logger.LogSync(ctx, logging.Entry{
    Payload: "database connection lost",
    Severity: logging.Critical,
})
if err != nil {
    // Fallback: Log to local file, send alert, etc.
}

Pattern 3: Structured Logging with Common Context

For services that need consistent metadata across all logs:

logger := client.Logger("service-log",
    logging.CommonLabels(map[string]string{
        "service":     "api-server",
        "environment": "production",
        "version":     "1.2.3",
    }),
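    // mrpb is typically imported as "google.golang.org/genproto/googleapis/api/monitoredres"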
    logging.CommonResource(&mrpb.MonitoredResource{
        Type: "k8s_container",
        Labels: map[string]string{
            "project_id":     "my-project",
            "cluster_name":   "prod-cluster",
            "namespace_name": "default",
            "pod_name":       os.Getenv("POD_NAME"),
        },
    }),
)

Pattern 4: Log Analysis and Export

For analyzing logs and exporting to data warehouses:

// Use logadmin package for reading and management
adminClient, _ := logadmin.NewClient(ctx, "my-project")

// Create sink to export error logs to BigQuery
sink := &logadmin.Sink{
    ID:          "errors-to-bigquery",
    Destination: "bigquery.googleapis.com/projects/my-project/datasets/logs",
    Filter:      "severity >= ERROR",
}
adminClient.CreateSink(ctx, sink)

// Create metric to count critical errors
metric := &logadmin.Metric{
    ID:          "critical-error-count",
    Description: "Count of critical errors",
    Filter:      "severity >= CRITICAL",
}
adminClient.CreateMetric(ctx, metric)

// Query recent error logs
iter := adminClient.Entries(ctx,
    logadmin.Filter("severity >= ERROR"),
    logadmin.NewestFirst(),
)
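
// Iterate the results; iterator.Done (from google.golang.org/api/iterator) signals the end
for {
    entry, err := iter.Next()
    if err == iterator.Done {
        break
    }
    if err != nil {
        // handle error
        break
    }
    fmt.Println(entry.Timestamp, entry.Payload)
}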

Package Selection Guide

Use logging package when:

  • ✓ Writing logs from applications
  • ✓ Instrumenting services with structured logging
  • ✓ Collecting metrics and events
  • ✓ Building logging libraries or frameworks

Use logadmin package when:

  • ✓ Reading or querying logs
  • ✓ Building log analysis tools
  • ✓ Exporting logs to external systems
  • ✓ Managing logging infrastructure (sinks, metrics)
  • ✓ Compliance and audit requirements

Use apiv2 package when:

  • ✓ Need direct gRPC access
  • ✓ Custom retry or batching logic required
  • ✓ Working with protocol buffers directly
  • ✓ High-level packages don't support your use case

Recommendation: Start with logging and logadmin packages. Only use apiv2 if you have specific advanced requirements.

Performance Characteristics

Logging Package (Write Operations)

Asynchronous Logging (Log method):

  • Latency: < 1 microsecond (buffer insertion)
  • Throughput: Tens of thousands of entries/second per Logger
  • Memory: Bounded by BufferedByteLimit (default 1 GiB)
  • Network: Batched RPCs reduce API call overhead

Synchronous Logging (LogSync method):

  • Latency: 50-200ms (network round-trip + API processing)
  • Throughput: ~5-20 entries/second per Logger
  • Memory: Minimal (no buffering)
  • Network: One RPC per entry (high overhead)

Best Practice: Use Log() for 99% of logging, reserve LogSync() for critical errors.

Logadmin Package (Read Operations)

Entry Iteration:

  • Pagination: 1000 entries per page (configurable with PageSize)
  • Latency: 100-500ms per page load
  • Filtering: Server-side filtering reduces data transfer
  • Result ordering: Configurable (oldest-first or newest-first)

Resource Listing (Sinks, Metrics, Logs):

  • Pagination: Automatic via iterator pattern
  • Caching: None (always fetches fresh data)
  • Latency: 50-200ms per page

Related Documentation

  • Client and Logger Management
  • Writing Log Entries
  • Logger Configuration
  • Administrative Operations
  • Error Handling