Logger Configuration

This document describes all available configuration options for loggers created with Client.Logger().

LoggerOption Type

type LoggerOption interface {
    // unexported methods
}

LoggerOption is a configuration option for a Logger. Options are passed to Client.Logger() when creating a logger.
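
As a minimal end-to-end sketch (the project ID "my-project", log ID "my-log", and label values below are placeholders, and CommonLabels is used only as an illustrative option):

import (
    "context"
    "log"

    "cloud.google.com/go/logging"
)

ctx := context.Background()

// "my-project" is a placeholder Google Cloud project ID.
client, err := logging.NewClient(ctx, "my-project")
if err != nil {
    log.Fatalf("failed to create logging client: %v", err)
}
defer client.Close()

// Any number of LoggerOption values may follow the log ID.
logger := client.Logger("my-log",
    logging.CommonLabels(map[string]string{"service": "example"}),
)

logger.Log(logging.Entry{Payload: "hello"})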

Common Labels

func CommonLabels(m map[string]string) LoggerOption

CommonLabels are labels that apply to all log entries written from a Logger, so you don't have to repeat them in each log entry's Labels field. If any of the log entries contains a (key, value) with the same key that is in CommonLabels, then the entry's (key, value) overrides the one in CommonLabels.

Parameters:

  • m - Map of label key-value pairs

Example:

logger := client.Logger("app-log",
    logging.CommonLabels(map[string]string{
        "environment": "production",
        "service":     "api-server",
        "version":     "1.2.3",
    }),
)

// This entry inherits all common labels
logger.Log(logging.Entry{
    Payload: "request processed",
})

// This entry overrides the "environment" label
logger.Log(logging.Entry{
    Payload: "test message",
    Labels: map[string]string{
        "environment": "staging", // Overrides common label
        "request_id":  "req-123",  // Additional label
    },
})

Common Resource

func CommonResource(r *mrpb.MonitoredResource) LoggerOption

CommonResource sets the monitored resource associated with all log entries written from a Logger. If not provided, the resource is automatically detected based on the running environment (on GCE, GCR, GCF, and GAE Standard only). This value can be overridden per-entry by setting an Entry's Resource field.

Parameters:

  • r - Monitored resource to associate with all entries

Example:

import (
    mrpb "google.golang.org/genproto/googleapis/api/monitoredres"
)

logger := client.Logger("app-log",
    logging.CommonResource(&mrpb.MonitoredResource{
        Type: "gce_instance",
        Labels: map[string]string{
            "project_id":  "my-project",
            "instance_id": "1234567890",
            "zone":        "us-central1-a",
        },
    }),
)

Concurrent Write Limit

func ConcurrentWriteLimit(n int) LoggerOption

ConcurrentWriteLimit determines how many goroutines will send log entries to the underlying service. The default is 1. Set ConcurrentWriteLimit to a higher value to increase throughput.

Parameters:

  • n - Number of concurrent goroutines (must be > 0)

Example:

// Use 5 concurrent goroutines for higher throughput
logger := client.Logger("high-volume-log",
    logging.ConcurrentWriteLimit(5),
)

Delay Threshold

func DelayThreshold(d time.Duration) LoggerOption

DelayThreshold is the maximum amount of time that an entry should remain buffered in memory before a call to the logging service is triggered. Larger values of DelayThreshold will generally result in fewer calls to the logging service, while increasing the risk that log entries will be lost if the process crashes. The default is DefaultDelayThreshold (1 second).

Parameters:

  • d - Maximum delay duration

Example:

import "time"

// Flush more frequently (every 500ms)
logger := client.Logger("frequent-log",
    logging.DelayThreshold(500 * time.Millisecond),
)

// Flush less frequently (every 5 seconds) for better batching
logger2 := client.Logger("batch-log",
    logging.DelayThreshold(5 * time.Second),
)
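
Whatever threshold you choose, entries still buffered when the process exits are lost. As a hedged sketch (the standard library log package is used only for error reporting), buffered entries can be flushed explicitly with Logger.Flush, or Client.Close can be called to flush every logger created from the client:

// Flush any entries still buffered by this logger before shutdown.
if err := logger.Flush(); err != nil {
    log.Printf("failed to flush log entries: %v", err)
}

// Client.Close flushes all loggers created from the client and then closes it.
if err := client.Close(); err != nil {
    log.Printf("failed to close logging client: %v", err)
}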

Entry Count Threshold

func EntryCountThreshold(n int) LoggerOption

EntryCountThreshold is the maximum number of entries that will be buffered in memory before a call to the logging service is triggered. Larger values will generally result in fewer calls to the logging service, while increasing both memory consumption and the risk that log entries will be lost if the process crashes. The default is DefaultEntryCountThreshold (1000).

Parameters:

  • n - Maximum number of entries to buffer

Example:

// Flush after 500 entries
logger := client.Logger("small-batch-log",
    logging.EntryCountThreshold(500),
)

// Flush after 5000 entries for larger batches
logger2 := client.Logger("large-batch-log",
    logging.EntryCountThreshold(5000),
)

Entry Byte Threshold

func EntryByteThreshold(n int) LoggerOption

EntryByteThreshold is the maximum number of bytes of entries that will be buffered in memory before a call to the logging service is triggered. See EntryCountThreshold for a discussion of the tradeoffs involved in setting this option. The default is DefaultEntryByteThreshold (8 MiB = 8388608 bytes).

Parameters:

  • n - Maximum number of bytes to buffer

Example:

// Flush after 4 MiB
logger := client.Logger("medium-batch-log",
    logging.EntryByteThreshold(4 * 1024 * 1024),
)

// Flush after 16 MiB
logger2 := client.Logger("large-batch-log",
    logging.EntryByteThreshold(16 * 1024 * 1024),
)

Entry Byte Limit

func EntryByteLimit(n int) LoggerOption

EntryByteLimit is the maximum number of bytes of entries that will be sent in a single call to the logging service. ErrOversizedEntry is returned if an entry exceeds EntryByteLimit. This option limits the size of a single RPC payload, to account for network or service issues with large RPCs. If EntryByteLimit is smaller than EntryByteThreshold, the latter has no effect. The default is zero, meaning there is no limit.

Parameters:

  • n - Maximum bytes per RPC call (0 = no limit)

Example:

// Limit each RPC to 5 MiB
logger := client.Logger("limited-log",
    logging.EntryByteLimit(5 * 1024 * 1024),
)

Buffered Byte Limit

func BufferedByteLimit(n int) LoggerOption

BufferedByteLimit is the maximum number of bytes that the Logger will keep in memory before returning ErrOverflow. This option limits the total memory consumption of the Logger (but note that each Logger has its own, separate limit). It is possible to reach BufferedByteLimit even if it is larger than EntryByteThreshold or EntryByteLimit, because calls triggered by the latter two options may be enqueued (and hence occupying memory) while new log entries are being added. The default is DefaultBufferedByteLimit (1 GiB = 1073741824 bytes).

Parameters:

  • n - Maximum bytes to buffer in memory

Example:

// Limit memory usage to 512 MiB
logger := client.Logger("memory-limited-log",
    logging.BufferedByteLimit(512 * 1024 * 1024),
)
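
Because Logger.Log buffers entries and writes them in the background, ErrOverflow (and ErrOversizedEntry, when EntryByteLimit is set) is reported through the client's OnError callback rather than returned from Log. A minimal sketch of observing these errors (the use of errors.Is and writing to os.Stderr are illustrative choices, not requirements):

import (
    "errors"
    "fmt"
    "os"
)

// OnError receives errors from background writes triggered by Logger.Log and Logger.Flush.
client.OnError = func(err error) {
    switch {
    case errors.Is(err, logging.ErrOverflow):
        fmt.Fprintln(os.Stderr, "dropped entries: buffer exceeded BufferedByteLimit:", err)
    case errors.Is(err, logging.ErrOversizedEntry):
        fmt.Fprintln(os.Stderr, "entry exceeded EntryByteLimit:", err)
    default:
        fmt.Fprintln(os.Stderr, "logging error:", err)
    }
}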

Context Function

func ContextFunc(f func() (ctx context.Context, afterCall func())) LoggerOption

ContextFunc is a function that will be called to obtain a context.Context for the WriteLogEntries RPC executed in the background for calls to Logger.Log. The default is a function that always returns context.Background. The second return value of the function is a function to call after the RPC completes.

The function is not used for calls to Logger.LogSync, since the caller can pass in the context directly.

This option is EXPERIMENTAL. It may be changed or removed.

Parameters:

  • f - Function that returns a context and an optional cleanup function

Example:

logger := client.Logger("traced-log",
    logging.ContextFunc(func() (context.Context, func()) {
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        return ctx, cancel
    }),
)
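
By contrast, Logger.LogSync takes its context directly from the caller, so ContextFunc has no effect on it. A brief sketch (the 10-second timeout and the standard library log call are illustrative):

ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()

// LogSync writes the entry immediately and returns any RPC error.
if err := logger.LogSync(ctx, logging.Entry{Payload: "synchronous entry"}); err != nil {
    log.Printf("LogSync failed: %v", err)
}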

Source Location Population

func SourceLocationPopulation(f int) LoggerOption

SourceLocationPopulation is the flag controlling population of the source location info in the ingested entries. This option lets you populate the SourceLocation field automatically for all ingested entries, populate it only for entries with DEBUG severity, or disable population entirely. Note that enabling this option can slow Logger.Log and Logger.LogSync by a factor of 2 or more. The default disables source location population.

This option is not used when an entry is created using ToLogEntry.

Parameters:

  • f - Source location population mode:
    • DoNotPopulateSourceLocation (0) - Default, disables source location
    • PopulateSourceLocationForDebugEntries (1) - Only for Debug severity
    • AlwaysPopulateSourceLocation (2) - For all entries

Example:

// Populate source location for debug entries only
logger := client.Logger("debug-log",
    logging.SourceLocationPopulation(logging.PopulateSourceLocationForDebugEntries),
)

// Always populate source location (warning: performance impact)
logger2 := client.Logger("traced-log",
    logging.SourceLocationPopulation(logging.AlwaysPopulateSourceLocation),
)

Constants:

const (
    DoNotPopulateSourceLocation           = 0
    PopulateSourceLocationForDebugEntries = 1
    AlwaysPopulateSourceLocation          = 2
)

Partial Success

func PartialSuccess() LoggerOption

PartialSuccess sets the partialSuccess flag to true when ingesting a bundle of log entries. See https://cloud.google.com/logging/docs/reference/v2/rest/v2/entries/write#body.request_body.FIELDS.partial_success

If not provided, the partialSuccess flag is set to false.

Example:

logger := client.Logger("partial-success-log",
    logging.PartialSuccess(),
)

Redirect as JSON

func RedirectAsJSON(w io.Writer) LoggerOption

RedirectAsJSON instructs Logger to redirect output of calls to Log and LogSync to the provided io.Writer instead of ingesting to Cloud Logging. Logger formats log entries following the logging agent's JSON format. See https://cloud.google.com/logging/docs/structured-logging#special-payload-fields for more info about the format. Use this option to delegate log ingestion to an out-of-process logging agent. If no writer is provided, the redirect is set to stdout.

Parameters:

  • w - io.Writer to redirect output to (nil defaults to os.Stdout)

Example:

import "os"

// Redirect to stdout for agent-based ingestion
logger := client.Logger("agent-log",
    logging.RedirectAsJSON(os.Stdout),
)

// Redirect to stderr
logger2 := client.Logger("error-log",
    logging.RedirectAsJSON(os.Stderr),
)

// Redirect to a file
file, err := os.Create("/var/log/app.json")
if err != nil {
    log.Fatalf("failed to create log file: %v", err) // a nil *os.File must not be used below
}
defer file.Close()

logger3 := client.Logger("file-log",
    logging.RedirectAsJSON(file),
)

Combining Multiple Options

Multiple logger options can be combined when creating a logger:

import (
    "os"
    "time"
    "cloud.google.com/go/logging"
    mrpb "google.golang.org/genproto/googleapis/api/monitoredres"
)

logger := client.Logger("production-log",
    // Common labels for all entries
    logging.CommonLabels(map[string]string{
        "environment": "production",
        "service":     "api-server",
        "version":     "2.1.0",
    }),

    // Custom monitored resource
    logging.CommonResource(&mrpb.MonitoredResource{
        Type: "gce_instance",
        Labels: map[string]string{
            "project_id":  "my-project",
            "instance_id": "instance-123",
            "zone":        "us-central1-a",
        },
    }),

    // Performance tuning
    logging.ConcurrentWriteLimit(3),
    logging.DelayThreshold(2 * time.Second),
    logging.EntryCountThreshold(2000),
    logging.EntryByteThreshold(10 * 1024 * 1024), // 10 MiB
    logging.BufferedByteLimit(100 * 1024 * 1024), // 100 MiB

    // Populate source location for debug entries
    logging.SourceLocationPopulation(logging.PopulateSourceLocationForDebugEntries),

    // Enable partial success
    logging.PartialSuccess(),
)

Configuration Best Practices

High-Throughput Scenarios

For applications with high logging volume:

logger := client.Logger("high-volume-log",
    logging.ConcurrentWriteLimit(5),
    logging.DelayThreshold(3 * time.Second),
    logging.EntryCountThreshold(5000),
    logging.EntryByteThreshold(16 * 1024 * 1024),
)

Low-Latency Scenarios

For applications that need log entries delivered quickly:

logger := client.Logger("low-latency-log",
    logging.DelayThreshold(500 * time.Millisecond),
    logging.EntryCountThreshold(100),
)

Memory-Constrained Scenarios

For applications with limited memory:

logger := client.Logger("memory-limited-log",
    logging.BufferedByteLimit(50 * 1024 * 1024), // 50 MiB
    logging.EntryByteThreshold(5 * 1024 * 1024),  // 5 MiB
    logging.EntryCountThreshold(500),
)

Agent-Based Ingestion

For environments with logging agents (GKE, Cloud Run, etc.):

import "os"

logger := client.Logger("agent-log",
    logging.RedirectAsJSON(os.Stdout),
    logging.CommonLabels(map[string]string{
        "service": "my-service",
    }),
)