
tessl/golang-github-com-jackc-pgx-v5

pgx is a pure Go driver and toolkit for PostgreSQL. It provides a native, high-performance interface with PostgreSQL-specific features, plus a database/sql compatibility adapter.


COPY Protocol & Large Objects

Import

import "github.com/jackc/pgx/v5"

COPY FROM (bulk insert)

Use CopyFrom to efficiently bulk-insert rows via the PostgreSQL binary COPY protocol. It can outperform INSERT with as few as five rows.

func (c *Conn) CopyFrom(ctx context.Context, tableName Identifier, columnNames []string, rowSrc CopyFromSource) (int64, error)

Returns the number of rows copied.

Requirement: every column type must support binary-format encoding (pgtype needs a binary codec for each type). Enum types must be registered with the connection's type map even though their values look like plain strings.

CopyFromSource Interface

type CopyFromSource interface {
    Next() bool
    Values() ([]any, error)
    Err() error
}

Built-in CopyFromSource Implementations

CopyFromRows

func CopyFromRows(rows [][]any) CopyFromSource

Wrap an existing [][]any slice:

rows := [][]any{
    {"John", "Smith", int32(36)},
    {"Jane", "Doe", int32(29)},
}
count, err := conn.CopyFrom(ctx,
    pgx.Identifier{"people"},
    []string{"first_name", "last_name", "age"},
    pgx.CopyFromRows(rows),
)

CopyFromSlice

func CopyFromSlice(length int, next func(int) ([]any, error)) CopyFromSource

Wrap a typed slice with an indexing function:

type User struct { FirstName, LastName string; Age int32 }
users := []User{{"John", "Smith", 36}, {"Jane", "Doe", 29}}

count, err := conn.CopyFrom(ctx,
    pgx.Identifier{"people"},
    []string{"first_name", "last_name", "age"},
    pgx.CopyFromSlice(len(users), func(i int) ([]any, error) {
        return []any{users[i].FirstName, users[i].LastName, users[i].Age}, nil
    }),
)

CopyFromFunc

func CopyFromFunc(nxtf func() (row []any, err error)) CopyFromSource

Wrap a function that returns rows on demand. Return row=nil, err=nil to signal end of data:

count, err := conn.CopyFrom(ctx,
    pgx.Identifier{"events"},
    []string{"name", "ts"},
    pgx.CopyFromFunc(func() ([]any, error) {
        row := fetchNextRow()
        if row == nil {
            return nil, nil // done
        }
        return []any{row.Name, row.Timestamp}, nil
    }),
)

Large Objects

Large objects store binary data on the PostgreSQL server and support random-access reads and writes.

Requirement: Large object operations must occur within a transaction.

type LargeObjects struct { /* unexported */ }

func (o *LargeObjects) Create(ctx context.Context, oid uint32) (uint32, error)
func (o *LargeObjects) Open(ctx context.Context, oid uint32, mode LargeObjectMode) (*LargeObject, error)
func (o *LargeObjects) Unlink(ctx context.Context, oid uint32) error
type LargeObjectMode int32

const (
    LargeObjectModeWrite LargeObjectMode = 0x20000
    LargeObjectModeRead  LargeObjectMode = 0x40000
)

Obtain LargeObjects from a Tx:

tx, err := conn.Begin(ctx)
defer tx.Rollback(ctx)

los := tx.LargeObjects()

LargeObject

type LargeObject struct { /* unexported */ }

func (o *LargeObject) Read(p []byte) (int, error)
func (o *LargeObject) Write(p []byte) (int, error)
func (o *LargeObject) Seek(offset int64, whence int) (int64, error)
func (o *LargeObject) Tell() (n int64, err error)
func (o *LargeObject) Truncate(size int64) error
func (o *LargeObject) Close() error

Implements io.Reader, io.Writer, io.Seeker, io.Closer.

Large Object Example

tx, err := conn.Begin(ctx)
defer tx.Rollback(ctx)

los := tx.LargeObjects()

// Create a new large object (oid=0 means server assigns OID)
oid, err := los.Create(ctx, 0)

// Open for writing
lo, err := los.Open(ctx, oid, pgx.LargeObjectModeWrite)
_, err = io.Copy(lo, someReader) // write data
lo.Close()

// Open for reading
lo, err = los.Open(ctx, oid, pgx.LargeObjectModeRead)
data, err := io.ReadAll(lo)
lo.Close()

// Delete
err = los.Unlink(ctx, oid)

tx.Commit(ctx)

Identifier

type Identifier []string

func (ident Identifier) Sanitize() string

Safely quote and escape PostgreSQL identifiers for use in SQL strings:

tableName := pgx.Identifier{"public", "my_table"}.Sanitize()
// => "public"."my_table"

Low-Level COPY via pgconn

For COPY TO/FROM with raw io.Reader/Writer, use the pgconn level:

// package: github.com/jackc/pgx/v5/pgconn

func (pgConn *PgConn) CopyFrom(ctx context.Context, r io.Reader, sql string) (CommandTag, error)
func (pgConn *PgConn) CopyTo(ctx context.Context, w io.Writer, sql string) (CommandTag, error)

Access via conn.PgConn():

pgConn := conn.PgConn()
ct, err := pgConn.CopyFrom(ctx, csvReader, "COPY users FROM STDIN WITH CSV HEADER")
ct, err = pgConn.CopyTo(ctx, w, "COPY users TO STDOUT WITH CSV HEADER")

Install with Tessl CLI

npx tessl i tessl/golang-github-com-jackc-pgx-v5
