Writing Log Entries

This document describes how to write log entries to Cloud Logging using both asynchronous buffered logging and synchronous logging.

Entry to Proto Conversion (Advanced)

func ToLogEntry(e Entry, parent string) (*logpb.LogEntry, error)

ToLogEntry converts an Entry to its LogEntry proto representation. It is called implicitly when you use Logger.Log or Logger.LogSync, so do not call it yourself in that case; it is exported to give additional flexibility for advanced use, namely building entries to pass to direct calls to the WriteLogEntries method.

Parameters:

  • e - The Entry to convert
  • parent - Parent resource in one of these forms:
    • projects/PROJECT_ID
    • folders/FOLDER_ID
    • billingAccounts/ACCOUNT_ID
    • organizations/ORG_ID
    • Project ID string (backwards compatibility)

Returns:

  • *logpb.LogEntry - The converted proto entry
  • error - Error if conversion fails

There's also a Logger method variant:

func (l *Logger) ToLogEntry(e Entry, parent string) (*logpb.LogEntry, error)

This is the Logger instance method for the same functionality.

Example:

import (
    "cloud.google.com/go/logging"
)

entry := logging.Entry{
    Payload:  "test message",
    Severity: logging.Info,
}

// Convert to proto
logEntry, err := logging.ToLogEntry(entry, "projects/my-project")
if err != nil {
    // Handle error
}

// Now you can use logEntry with direct API calls
// This is for advanced use cases only
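
For illustration, here is a minimal sketch of passing the converted entry to the lower-level apiv2 client's WriteLogEntries call. The alias vkit, the project and log names, and the "global" resource type are placeholders; the request shown is deliberately bare.

import (
    "context"

    vkit "cloud.google.com/go/logging/apiv2"
    logpb "cloud.google.com/go/logging/apiv2/loggingpb"
    mrpb "google.golang.org/genproto/googleapis/api/monitoredres"
)

ctx := context.Background()

// Create the low-level generated client.
apiClient, err := vkit.NewClient(ctx)
if err != nil {
    // Handle error
}
defer apiClient.Close()

// Send the converted entry directly. LogName and Resource are
// placeholder values; set them for your project and environment.
_, err = apiClient.WriteLogEntries(ctx, &logpb.WriteLogEntriesRequest{
    LogName:  "projects/my-project/logs/my-log",
    Resource: &mrpb.MonitoredResource{Type: "global"},
    Entries:  []*logpb.LogEntry{logEntry},
})
if err != nil {
    // Handle error
}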

Asynchronous Logging

func (l *Logger) Log(e Entry)

Buffers the Entry for output to the logging service. This method never blocks and is the recommended approach for most logging operations. Log entries are buffered in memory and periodically flushed automatically based on configured thresholds (time, count, size).

Parameters:

  • e - The Entry to log

Example:

logger := client.Logger("my-log")

// Simple string payload
logger.Log(logging.Entry{
    Payload: "something happened",
})

// With severity
logger.Log(logging.Entry{
    Payload:  "user logged in",
    Severity: logging.Info,
})

// With labels
logger.Log(logging.Entry{
    Payload: "payment processed",
    Severity: logging.Info,
    Labels: map[string]string{
        "user_id":    "12345",
        "amount":     "99.99",
        "currency":   "USD",
    },
})

// With timestamp
logger.Log(logging.Entry{
    Payload:   "scheduled task executed",
    Severity:  logging.Info,
    Timestamp: time.Now(),
})

Synchronous Logging

func (l *Logger) LogSync(ctx context.Context, e Entry) error

Logs the Entry synchronously without any buffering. This method blocks until the log entry has been sent to the logging service. It is slow and should be used primarily for debugging or critical errors that must be sent immediately.

Parameters:

  • ctx - Context for the operation
  • e - The Entry to log

Returns:

  • error - Error if logging fails

Example:

ctx := context.Background()
logger := client.Logger("my-log")

// Log critical error synchronously
err := logger.LogSync(ctx, logging.Entry{
    Payload:  "database connection failed",
    Severity: logging.Critical,
    Labels: map[string]string{
        "database": "postgres-primary",
        "error_code": "CONNECTION_REFUSED",
    },
})
if err != nil {
    log.Printf("failed to log critical error: %v", err)
}

// Log with context deadline
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

err = logger.LogSync(ctx, logging.Entry{
    Payload:  "emergency shutdown initiated",
    Severity: logging.Emergency,
})
if err != nil {
    log.Printf("failed to log emergency: %v", err)
}

Flushing Buffered Entries

func (l *Logger) Flush() error

Blocks until all currently buffered log entries are sent. If any errors occurred since the last call to Flush (or the creation of the client if this is the first call), then Flush returns a non-nil error with summary information about the errors. For more accurate error reporting, set Client.OnError.

Returns:

  • error - Error if flushing fails or if previous operations had errors

Example:

logger := client.Logger("my-log")

// Log several entries
for i := 0; i < 100; i++ {
    logger.Log(logging.Entry{
        Payload: fmt.Sprintf("processing item %d", i),
    })
}

// Flush all buffered entries
if err := logger.Flush(); err != nil {
    log.Printf("flush failed: %v", err)
}
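
Because Flush only returns summarized error information, you may want per-error reporting via the Client's OnError function mentioned above. A minimal sketch, typically set once right after creating the client; the handler below simply logs the error:

// Report errors from background writes and Flush as they occur,
// rather than only via the summary returned by the next Flush.
client.OnError = func(err error) {
    log.Printf("cloud logging error: %v", err)
}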

Entry Type

type Entry struct {
    Timestamp      time.Time
    Severity       Severity
    Payload        interface{}
    Labels         map[string]string
    InsertID       string
    HTTPRequest    *HTTPRequest
    Operation      *logpb.LogEntryOperation
    LogName        string
    Resource       *mrpb.MonitoredResource
    Trace          string
    SpanID         string
    TraceSampled   bool
    SourceLocation *logpb.LogEntrySourceLocation
}

Entry is a log entry. See https://cloud.google.com/logging/docs/view/logs_index for more about entries.

Fields:

  • Timestamp time.Time - Time of the entry. If zero, the current time is used.

  • Severity Severity - Entry's severity level. The zero value is Default.

  • Payload interface{} - Entry payload. Must be one of the following (a short sketch of structured payloads follows this list):

    • A string, which becomes the entry's text payload
    • A value that marshals via encoding/json to a JSON object, not any other kind of JSON value (for example a map[string]interface{} or a struct)
    • json.RawMessage for raw, pre-encoded JSON bytes
    • anypb.Any for protobuf messages
  • Labels map[string]string - Optional key/value labels for the log entry. The Logger.Log method takes ownership of this map.

  • InsertID string - Unique ID for the log entry. If provided, the logging service considers other log entries in the same log with the same ID as duplicates which can be removed. If omitted, the logging service will generate a unique ID. Note that because this client retries RPCs automatically, it is possible (though unlikely) that an Entry without an InsertID will be written more than once.

  • HTTPRequest *HTTPRequest - Optional metadata about the HTTP request associated with this log entry.

  • Operation *logpb.LogEntryOperation - Optional information about an operation associated with the log entry.

  • LogName string - Full log name, in the form "projects/{ProjectID}/logs/{LogID}". Set by the client when reading entries. It is an error to set it when writing entries.

  • Resource *mrpb.MonitoredResource - Monitored resource associated with the entry.

  • Trace string - Resource name of the trace associated with the log entry, if any. If it contains a relative resource name, the name is assumed to be relative to //tracing.googleapis.com.

  • SpanID string - ID of the span within the trace associated with the log entry. The ID is a 16-character hexadecimal encoding of an 8-byte array.

  • TraceSampled bool - If set, indicates that this request was sampled.

  • SourceLocation *logpb.LogEntrySourceLocation - Optional source code location information associated with the log entry.
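
A short sketch of structured payloads, as referenced in the Payload field above. The checkoutEvent type and its field names are illustrative, not part of the library:

import "encoding/json"

// Any value that marshals to a JSON object can be used as a payload;
// it is written as the entry's jsonPayload.
type checkoutEvent struct {
    UserID string  `json:"user_id"`
    Amount float64 `json:"amount"`
}

logger.Log(logging.Entry{
    Payload:  checkoutEvent{UserID: "12345", Amount: 99.99},
    Severity: logging.Info,
})

// Pre-encoded JSON can be passed as json.RawMessage.
logger.Log(logging.Entry{
    Payload:  json.RawMessage(`{"event":"checkout","status":"ok"}`),
    Severity: logging.Info,
})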

Using Insert IDs for Deduplication

Insert IDs can be used to prevent duplicate log entries when retrying operations:

// Generate a unique ID for this log entry.
// operationID and timestamp are placeholders for values from your own
// application; the literals below are only illustrative.
operationID := "op-12345"
timestamp := time.Now()
insertID := fmt.Sprintf("%s-%d", operationID, timestamp.Unix())

logger.Log(logging.Entry{
    Payload:  "operation completed",
    InsertID: insertID,
    Severity: logging.Info,
})

// If this log entry is sent multiple times (e.g., due to retries),
// Cloud Logging will deduplicate based on the InsertID

Logging with Operations

Associate log entries with long-running operations:

import (
    logpb "cloud.google.com/go/logging/apiv2/loggingpb"
)

operationID := "op-12345"

// Log start of operation
logger.Log(logging.Entry{
    Payload:  "operation started",
    Severity: logging.Info,
    Operation: &logpb.LogEntryOperation{
        Id:       operationID,
        Producer: "my-service",
        First:    true,
    },
})

// Log progress
logger.Log(logging.Entry{
    Payload:  "operation in progress",
    Severity: logging.Info,
    Operation: &logpb.LogEntryOperation{
        Id:       operationID,
        Producer: "my-service",
    },
})

// Log completion
logger.Log(logging.Entry{
    Payload:  "operation completed",
    Severity: logging.Info,
    Operation: &logpb.LogEntryOperation{
        Id:       operationID,
        Producer: "my-service",
        Last:     true,
    },
})

Specifying Monitored Resources

Override the default monitored resource for specific log entries:

import (
    mrpb "google.golang.org/genproto/googleapis/api/monitoredres"
)

logger.Log(logging.Entry{
    Payload:  "custom resource entry",
    Severity: logging.Info,
    Resource: &mrpb.MonitoredResource{
        Type: "gce_instance",
        Labels: map[string]string{
            "project_id":  "my-project",
            "instance_id": "1234567890",
            "zone":        "us-central1-a",
        },
    },
})

Complete Example

package main

import (
    "context"
    "fmt"
    "log"
    "time"

    "cloud.google.com/go/logging"
    logpb "cloud.google.com/go/logging/apiv2/loggingpb"
)

func main() {
    ctx := context.Background()

    client, err := logging.NewClient(ctx, "my-project")
    if err != nil {
        log.Fatalf("failed to create client: %v", err)
    }
    defer client.Close()

    logger := client.Logger("app-log")

    // Asynchronous logging (buffered)
    for i := 0; i < 10; i++ {
        logger.Log(logging.Entry{
            Payload:  fmt.Sprintf("processing item %d", i),
            Severity: logging.Info,
            Labels: map[string]string{
                "batch_id": "batch-001",
                "item_num": fmt.Sprintf("%d", i),
            },
        })
    }

    // Synchronous logging for critical event
    err = logger.LogSync(ctx, logging.Entry{
        Payload:  "critical system event",
        Severity: logging.Critical,
        Labels: map[string]string{
            "component": "payment-processor",
        },
    })
    if err != nil {
        log.Printf("failed to log critical event: %v", err)
    }

    // Log with operation tracking
    operationID := "op-" + time.Now().Format("20060102150405")
    logger.Log(logging.Entry{
        Payload:  "long running operation started",
        Severity: logging.Info,
        Operation: &logpb.LogEntryOperation{
            Id:       operationID,
            Producer: "my-service",
            First:    true,
        },
    })

    // Flush all buffered entries before exit
    if err := logger.Flush(); err != nil {
        log.Printf("failed to flush: %v", err)
    }
}