twcache Package

The twcache package provides a thread-safe, generic LRU (Least Recently Used) cache implementation with zero external dependencies. It features eviction callbacks, hit/miss statistics, and low lock contention for concurrent access.

Package Information

  • Package Name: twcache
  • Full Path: github.com/olekukonko/tablewriter/pkg/twcache
  • Language: Go
  • Minimum Go Version: 1.18 (requires generics)
  • Installation: go get github.com/olekukonko/tablewriter/pkg/twcache

Core Imports

import "github.com/olekukonko/tablewriter/pkg/twcache"

Basic Usage

import "github.com/olekukonko/tablewriter/pkg/twcache"

// Create a cache with capacity of 100 entries
cache := twcache.NewLRU[string, int](100)

// Add entries
cache.Add("key1", 42)
cache.Add("key2", 100)

// Retrieve entries
value, found := cache.Get("key1")
if found {
    fmt.Println("Value:", value) // Output: Value: 42
}

// Get or compute pattern (ideal for expensive operations)
result := cache.GetOrCompute("key3", func() int {
    // This function is only called if key3 is not in cache
    return expensiveComputation()
})

// Check cache statistics
fmt.Printf("Cache size: %d/%d\n", cache.Len(), cache.Cap())
fmt.Printf("Hit rate: %.2f%%\n", cache.HitRate()*100)

// Clear the cache
cache.Purge()

Capabilities

Cache Interface

Generic cache interface defining the core operations for key-value storage.

type Cache[K comparable, V any] interface {
    // Add inserts a new key-value pair into the cache.
    // Returns true if an existing entry was evicted to make room.
    Add(key K, value V) (evicted bool)

    // Get retrieves the value associated with the given key.
    // Returns the value and true if found, otherwise zero value and false.
    Get(key K) (value V, ok bool)

    // Purge clears all entries from the cache.
    Purge()
}

Type Parameters:

  • K comparable - Key type must support comparison (==, !=)
  • V any - Value type can be any Go type

LRU Cache

Thread-safe LRU cache implementation with fixed capacity and full feature set.

type LRU[K comparable, V any] struct {
    // contains filtered or unexported fields
}

The LRU cache maintains entries in order of recency. When the cache reaches capacity, the least recently used entry is automatically evicted to make room for new entries. All operations are protected by a mutex, making the cache safe for concurrent use.

Thread-Safety: All LRU methods are thread-safe and can be called from multiple goroutines simultaneously.

Creating LRU Caches

Basic Constructor

func NewLRU[K comparable, V any](size int) *LRU[K, V]

Creates a new LRU cache with the specified capacity.

Parameters:

  • size int - Maximum number of entries the cache can hold

Returns:

  • *LRU[K, V] - Pointer to the initialized cache, or nil if size <= 0

Behavior:

  • Returns nil if size <= 0, effectively disabling caching (a safe no-op pattern)
  • Caps size at 100,000 entries to bound memory use
  • All methods safely handle a nil receiver, allowing optional caching patterns

Constructor with Eviction Callback

func NewLRUEvict[K comparable, V any](size int, onEvict EvictCallback[K, V]) *LRU[K, V]

Creates a new LRU cache with an optional eviction callback.

Parameters:

  • size int - Maximum number of entries the cache can hold
  • onEvict EvictCallback[K, V] - Function called when entries are evicted (can be nil)

Returns:

  • *LRU[K, V] - Pointer to the initialized cache, or nil if size <= 0

Callback Invocation: The eviction callback is called when:

  • Cache reaches capacity and needs to evict the least recently used entry
  • Purge() is called (callback invoked for all entries)
  • RemoveOldest() is called explicitly

Cache Operations

Get

func (c *LRU[K, V]) Get(key K) (value V, ok bool)

Retrieves a value by key and marks the entry as most recently used.

Parameters:

  • key K - The key to look up

Returns:

  • value V - The cached value if found, otherwise zero value
  • ok bool - true if key was found, false otherwise

Side Effects:

  • Updates the entry to most recently used position on cache hit
  • Increments hit counter on success, miss counter on failure

GetOrCompute

func (c *LRU[K, V]) GetOrCompute(key K, compute func() V) V

Retrieves a value or computes it if missing, ensuring no duplicate computation under concurrency.

Parameters:

  • key K - The key to look up
  • compute func() V - Function to compute the value if not in cache

Returns:

  • V - The cached or computed value

Behavior:

  • Returns cached value immediately if key exists (cache hit)
  • Calls compute() exactly once if key is missing, even under concurrent access
  • Stores the computed result in the cache before returning
  • Thread-safe: prevents thundering herd problem where multiple goroutines compute the same value

Use Case: Ideal for expensive computations like width calculations, parsing, or database queries where you want to guarantee single computation per key even with concurrent access.

Add

func (c *LRU[K, V]) Add(key K, value V) (evicted bool)

Inserts or updates a key-value pair in the cache.

Parameters:

  • key K - The key to insert or update
  • value V - The value to store

Returns:

  • bool - true if an entry was evicted to make room, false otherwise

Behavior:

  • If key exists, updates the value and moves to most recently used
  • If key is new and cache is full, evicts least recently used entry
  • New entries are always placed at most recently used position

Remove

func (c *LRU[K, V]) Remove(key K) bool

Manually removes a specific key from the cache.

Parameters:

  • key K - The key to remove

Returns:

  • bool - true if the key was found and removed, false if key didn't exist

Note: This does not trigger the eviction callback.

Purge

func (c *LRU[K, V]) Purge()

Clears all entries from the cache.

Side Effects:

  • Calls eviction callback for each entry if callback is set
  • Resets hit and miss counters to zero
  • Deallocates internal map and resets linked list

Cache Statistics

Len

func (c *LRU[K, V]) Len() int

Returns the current number of entries in the cache.

Returns:

  • int - Number of entries currently stored

Cap

func (c *LRU[K, V]) Cap() int

Returns the maximum capacity of the cache.

Returns:

  • int - Maximum number of entries the cache can hold

HitRate

func (c *LRU[K, V]) HitRate() float64

Returns the cache hit ratio based on all Get operations.

Returns:

  • float64 - Hit rate as a decimal between 0.0 and 1.0

Calculation:

hit_rate = hits / (hits + misses)

Note: Only Get() and GetOrCompute() operations affect the hit rate. Add() and other operations do not count toward statistics.

Advanced Operations

RemoveOldest

func (c *LRU[K, V]) RemoveOldest() (key K, value V, ok bool)

Manually removes and returns the least recently used entry.

Returns:

  • key K - The key of the removed entry
  • value V - The value of the removed entry
  • ok bool - true if an entry was removed, false if cache was empty

Side Effects:

  • Calls eviction callback if set
  • Removes entry from cache

Types

EvictCallback

type EvictCallback[K comparable, V any] func(key K, value V)

Function type for eviction callbacks. Called when an entry is removed from the cache due to capacity limits or explicit purge operations.

Parameters:

  • key K - The key being evicted
  • value V - The value being evicted

Use Cases:

  • Cleanup of resources associated with cached values
  • Logging eviction events
  • Tracking cache behavior for debugging
  • Triggering persistence operations

Usage Examples

Basic Cache Operations

// Create a simple string-to-int cache
cache := twcache.NewLRU[string, int](10)

// Add some entries
cache.Add("one", 1)
cache.Add("two", 2)
cache.Add("three", 3)

// Retrieve values
if val, ok := cache.Get("two"); ok {
    fmt.Println("Found:", val) // Output: Found: 2
}

// Check if key exists
if _, ok := cache.Get("four"); !ok {
    fmt.Println("Key 'four' not found")
}

// Remove specific entry
if removed := cache.Remove("one"); removed {
    fmt.Println("Removed 'one'")
}

Cache with Eviction Callback

// Track evictions for logging or cleanup
evictionLog := make([]string, 0)
callback := func(key string, value int) {
    evictionLog = append(evictionLog,
        fmt.Sprintf("Evicted: %s=%d", key, value))
}

cache := twcache.NewLRUEvict[string, int](3, callback)

cache.Add("a", 1)
cache.Add("b", 2)
cache.Add("c", 3)

// This will trigger eviction of "a" (least recently used)
cache.Add("d", 4)

fmt.Println(evictionLog) // Output: [Evicted: a=1]

Resource Management with Eviction Callback

// Automatic cleanup of file handles
cache := twcache.NewLRUEvict[string, *os.File](10,
    func(path string, file *os.File) {
        if file != nil {
            file.Close()
            log.Printf("Closed file: %s", path)
        }
    },
)

// Add files to cache
file, err := os.Open("data.txt")
if err != nil {
    log.Fatal(err)
}
cache.Add("data.txt", file)

// Files are automatically closed when evicted
cache.Purge() // Closes all open files

GetOrCompute Pattern

type User struct {
    ID   int
    Name string
    Data []byte
}

// Cache expensive database queries
userCache := twcache.NewLRU[int, *User](1000)

func GetUser(id int) *User {
    return userCache.GetOrCompute(id, func() *User {
        // This expensive query only runs on cache miss
        return queryDatabaseForUser(id)
    })
}

// Multiple concurrent calls only query database once
var wg sync.WaitGroup
for i := 0; i < 100; i++ {
    wg.Add(1)
    go func() {
        defer wg.Done()
        user := GetUser(42) // All goroutines get same result
        fmt.Println(user.Name)
    }()
}
wg.Wait()

Cache Statistics Monitoring

cache := twcache.NewLRU[string, []byte](500)

// Simulate cache usage
for i := 0; i < 1000; i++ {
    key := fmt.Sprintf("key-%d", i%750) // 750 distinct keys across 1000 lookups

    if val, ok := cache.Get(key); ok {
        // Cache hit
        _ = val
    } else {
        // Cache miss - compute and add
        data := []byte(fmt.Sprintf("data-%d", i))
        cache.Add(key, data)
    }
}

// Report statistics
fmt.Printf("Cache Statistics:\n")
fmt.Printf("  Size: %d/%d (%.1f%% full)\n",
    cache.Len(), cache.Cap(),
    float64(cache.Len())/float64(cache.Cap())*100)
fmt.Printf("  Hit Rate: %.2f%%\n", cache.HitRate()*100)

Disabled Cache Pattern

// Cache can be conditionally disabled
func NewCache(enabled bool) *twcache.LRU[string, int] {
    if enabled {
        return twcache.NewLRU[string, int](100)
    }
    return nil // Disabled cache
}

// All operations safely handle nil cache
cache := NewCache(false)

// These are all safe no-ops when cache is nil
cache.Add("key", 123)
val, ok := cache.Get("key")
fmt.Println(val, ok)         // Output: 0 false
cache.Purge()
fmt.Println(cache.Len())     // Output: 0
fmt.Println(cache.HitRate()) // Output: 0

Type-Safe Generic Cache

// Cache can work with any comparable key and any value type
type Product struct {
    ID    string
    Name  string
    Price float64
}

// String keys, struct values
productCache := twcache.NewLRU[string, Product](50)
productCache.Add("SKU-123", Product{
    ID:    "SKU-123",
    Name:  "Widget",
    Price: 29.99,
})

// Int keys, slice values
dataCache := twcache.NewLRU[int, []byte](100)
dataCache.Add(1, []byte("data"))

// Struct keys (must be comparable), pointer values
type CacheKey struct {
    UserID int
    Region string
}

type Session struct {
    Token     string
    ExpiresAt time.Time
}

sessionCache := twcache.NewLRU[CacheKey, *Session](1000)
sessionCache.Add(
    CacheKey{UserID: 42, Region: "us-west"},
    &Session{Token: "abc123", ExpiresAt: time.Now().Add(time.Hour)},
)

LRU Behavior Example

// Demonstrate LRU eviction order
cache := twcache.NewLRU[string, int](3)

cache.Add("a", 1) // Order: [a]
cache.Add("b", 2) // Order: [b, a]
cache.Add("c", 3) // Order: [c, b, a]

// Access "a" to make it most recently used
cache.Get("a")    // Order: [a, c, b]

// Adding new entry evicts "b" (least recently used)
cache.Add("d", 4) // Order: [d, a, c], "b" evicted

// Verify eviction
if _, ok := cache.Get("b"); !ok {
    fmt.Println("'b' was evicted (least recently used)")
}

// Verify remaining entries
for _, key := range []string{"a", "c", "d"} {
    if val, ok := cache.Get(key); ok {
        fmt.Printf("%s = %d\n", key, val)
    }
}

Thread-Safety Guarantees

The LRU cache implementation is fully thread-safe:

  • Mutual Exclusion: All operations that modify cache state are protected by a mutex
  • Atomic Counters: Hit and miss statistics use atomic operations for lock-free reads
  • Safe Concurrent Access: Multiple goroutines can safely call any combination of cache methods
  • GetOrCompute Safety: Prevents duplicate computation even when multiple goroutines request the same missing key simultaneously
  • Nil Safety: All methods safely handle nil receiver, making optional caching patterns simple

Performance Characteristics:

  • Get operations: O(1) average time complexity
  • Add operations: O(1) average time complexity
  • Eviction: O(1) time complexity
  • Lock contention: Minimized through atomic counters for statistics

Note: The cache uses a single mutex for all operations. For extremely high-contention scenarios, consider using multiple cache instances sharded by key hash.

Design Patterns

Optional Caching

The nil-safe design allows clean optional caching:

type Service struct {
    cache *twcache.LRU[string, []byte]
}

func NewService(cacheSize int) *Service {
    return &Service{
        cache: twcache.NewLRU[string, []byte](cacheSize),
    }
}

func (s *Service) Process(key string) []byte {
    // Works whether cache is enabled or nil
    return s.cache.GetOrCompute(key, func() []byte {
        return s.expensiveOperation(key)
    })
}

Lazy Initialization

Combine with GetOrCompute for lazy initialization:

var regexCache = twcache.NewLRU[string, *regexp.Regexp](100)

func GetRegex(pattern string) *regexp.Regexp {
    return regexCache.GetOrCompute(pattern, func() *regexp.Regexp {
        // MustCompile panics on an invalid pattern rather than
        // silently caching a nil *regexp.Regexp
        return regexp.MustCompile(pattern)
    })
}

Cache-Aside Pattern

Classic cache-aside implementation. Note that unlike GetOrCompute, concurrent misses on the same key may each fetch from the source:

func GetData(id string) (Data, error) {
    // Try cache first
    if data, ok := dataCache.Get(id); ok {
        return data, nil
    }

    // Cache miss - fetch from source
    data, err := fetchFromDatabase(id)
    if err != nil {
        return Data{}, err
    }

    // Store in cache
    dataCache.Add(id, data)
    return data, nil
}