The twcache package provides a thread-safe, generic LRU (Least Recently Used) cache implementation with zero external dependencies. It features eviction callbacks, cache statistics, and optimized concurrent access patterns.
Installation:

```
go get github.com/olekukonko/tablewriter/pkg/twcache
```

```go
import "github.com/olekukonko/tablewriter/pkg/twcache"
```
```go
// Create a cache with capacity of 100 entries
cache := twcache.NewLRU[string, int](100)

// Add entries
cache.Add("key1", 42)
cache.Add("key2", 100)

// Retrieve entries
value, found := cache.Get("key1")
if found {
    fmt.Println("Value:", value) // Output: Value: 42
}

// Get or compute pattern (ideal for expensive operations)
result := cache.GetOrCompute("key3", func() int {
    // This function is only called if key3 is not in cache
    return expensiveComputation()
})

// Check cache statistics
fmt.Printf("Cache size: %d/%d\n", cache.Len(), cache.Cap())
fmt.Printf("Hit rate: %.2f%%\n", cache.HitRate()*100)

// Clear the cache
cache.Purge()
```

Generic cache interface defining the core operations for key-value storage.
```go
type Cache[K comparable, V any] interface {
    // Add inserts a new key-value pair into the cache.
    // Returns true if an existing entry was evicted to make room.
    Add(key K, value V) (evicted bool)

    // Get retrieves the value associated with the given key.
    // Returns the value and true if found, otherwise zero value and false.
    Get(key K) (value V, ok bool)

    // Purge clears all entries from the cache.
    Purge()
}
```

Type Parameters:
- `K comparable` - Key type must support comparison (`==`, `!=`)
- `V any` - Value type can be any Go type
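Because `Cache` abstracts over the concrete LRU type, code can be written against the interface instead of `*LRU`. A small illustrative sketch; the `warm` helper is hypothetical, not part of the package:

```go
// warm pre-populates any Cache implementation with a set of entries.
// It relies only on the interface above.
func warm[K comparable, V any](c twcache.Cache[K, V], entries map[K]V) {
    for k, v := range entries {
        c.Add(k, v)
    }
}

// Usage: an *LRU satisfies Cache.
var c twcache.Cache[string, int] = twcache.NewLRU[string, int](100)
warm(c, map[string]int{"a": 1, "b": 2})
```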
Thread-safe LRU cache implementation with fixed capacity and full feature set.

```go
type LRU[K comparable, V any] struct {
    // contains filtered or unexported fields
}
```

The LRU cache maintains entries in order of recency. When the cache reaches capacity, the least recently used entry is automatically evicted to make room for new entries. All operations are protected by a mutex, making the cache safe for concurrent use.
Thread-Safety: All LRU methods are thread-safe and can be called from multiple goroutines simultaneously.
```go
func NewLRU[K comparable, V any](size int) *LRU[K, V]
```

Creates a new LRU cache with the specified capacity.
Parameters:
- `size int` - Maximum number of entries the cache can hold

Returns:
- `*LRU[K, V]` - Pointer to the initialized cache, or nil if size <= 0

Behavior:
- Returns nil if size <= 0, effectively disabling caching (safe no-op pattern)
- All methods can be called on a nil receiver, allowing optional caching patterns

```go
func NewLRUEvict[K comparable, V any](size int, onEvict EvictCallback[K, V]) *LRU[K, V]
```

Creates a new LRU cache with an optional eviction callback.
Parameters:
- `size int` - Maximum number of entries the cache can hold
- `onEvict EvictCallback[K, V]` - Function called when entries are evicted (can be nil)

Returns:
- `*LRU[K, V]` - Pointer to the initialized cache, or nil if size <= 0

Callback Invocation: The eviction callback is called when:
- An entry is evicted to make room for a new one (capacity limit reached)
- `Purge()` is called (callback invoked for all entries)
- `RemoveOldest()` is called explicitly

```go
func (c *LRU[K, V]) Get(key K) (value V, ok bool)
```

Retrieves a value by key and updates it to most recently used.
Parameters:
- `key K` - The key to look up

Returns:
- `value V` - The cached value if found, otherwise zero value
- `ok bool` - true if key was found, false otherwise

Side Effects:
- Marks the entry as most recently used on a hit
- Updates the hit/miss statistics used by `HitRate()`

```go
func (c *LRU[K, V]) GetOrCompute(key K, compute func() V) V
```

Retrieves a value or computes it if missing, ensuring no duplicate computation under concurrency.
Parameters:
- `key K` - The key to look up
- `compute func() V` - Function to compute the value if not in cache

Returns:
- `V` - The cached or computed value

Behavior:
- Calls `compute()` exactly once if the key is missing, even under concurrent access

Use Case: Ideal for expensive computations like width calculations, parsing, or database queries where you want to guarantee single computation per key even with concurrent access.
```go
func (c *LRU[K, V]) Add(key K, value V) (evicted bool)
```

Inserts or updates a key-value pair in the cache.

Parameters:
- `key K` - The key to insert or update
- `value V` - The value to store

Returns:
- `bool` - true if an entry was evicted to make room, false otherwise

Behavior:
- If the key already exists, its value is updated and it becomes the most recently used entry
- If the cache is at capacity, the least recently used entry is evicted (invoking the eviction callback, if set)
```go
func (c *LRU[K, V]) Remove(key K) bool
```

Manually removes a specific key from the cache.
Parameters:
- `key K` - The key to remove

Returns:
- `bool` - true if the key was found and removed, false if the key didn't exist

Note: This does not trigger the eviction callback.
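A short sketch of that distinction, based on the documented callback behavior: `Remove` is silent, while a capacity eviction invokes the callback (the capacity of 1 here is purely for demonstration):

```go
cache := twcache.NewLRUEvict[string, int](1, func(k string, v int) {
    fmt.Printf("evicted %s=%d\n", k, v)
})

cache.Add("a", 1)
cache.Remove("a") // No output: manual removal does not invoke the callback

cache.Add("b", 2)
cache.Add("c", 3) // Output: evicted b=2 (capacity eviction invokes the callback)
```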
```go
func (c *LRU[K, V]) Purge()
```

Clears all entries from the cache.

Side Effects:
- The eviction callback, if set, is invoked for every entry being removed

```go
func (c *LRU[K, V]) Len() int
```

Returns the current number of entries in the cache.
Returns:
- `int` - Number of entries currently stored

```go
func (c *LRU[K, V]) Cap() int
```

Returns the maximum capacity of the cache.

Returns:
- `int` - Maximum number of entries the cache can hold

```go
func (c *LRU[K, V]) HitRate() float64
```

Returns the cache hit ratio based on all Get operations.
Returns:
- `float64` - Hit rate as a decimal between 0.0 and 1.0

Calculation:

```
hit_rate = hits / (hits + misses)
```

Note: Only `Get()` and `GetOrCompute()` operations affect the hit rate. `Add()` and other operations do not count toward statistics.
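For example, with a fresh cache, two hits and one miss give a rate of 2 / (2 + 1):

```go
cache := twcache.NewLRU[string, int](10)
cache.Add("a", 1) // Add does not affect statistics

cache.Get("a") // hit
cache.Get("a") // hit
cache.Get("x") // miss

fmt.Printf("%.2f\n", cache.HitRate()) // Output: 0.67
```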
```go
func (c *LRU[K, V]) RemoveOldest() (key K, value V, ok bool)
```

Manually removes and returns the least recently used entry.

Returns:
- `key K` - The key of the removed entry
- `value V` - The value of the removed entry
- `ok bool` - true if an entry was removed, false if cache was empty

Side Effects:
- The eviction callback, if set, is invoked for the removed entry
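A brief sketch of draining a cache oldest-first; unlike `Remove`, each call here invokes the eviction callback:

```go
cache := twcache.NewLRUEvict[string, int](3, func(k string, v int) {
    log.Printf("evicted %s=%d", k, v)
})
cache.Add("a", 1)
cache.Add("b", 2)

// Drain entries from least to most recently used
for {
    key, value, ok := cache.RemoveOldest()
    if !ok {
        break // cache is empty
    }
    fmt.Println(key, value) // "a 1", then "b 2"
}
```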
```go
type EvictCallback[K comparable, V any] func(key K, value V)
```

Function type for eviction callbacks. Called when an entry is removed from the cache due to capacity limits or explicit purge operations.

Parameters:
- `key K` - The key being evicted
- `value V` - The value being evicted

Use Cases:
- Logging or recording evictions for debugging and metrics
- Releasing resources tied to evicted values, such as open file handles (see the examples below)

Examples:
```go
// Create a simple string-to-int cache
cache := twcache.NewLRU[string, int](10)

// Add some entries
cache.Add("one", 1)
cache.Add("two", 2)
cache.Add("three", 3)

// Retrieve values
if val, ok := cache.Get("two"); ok {
    fmt.Println("Found:", val) // Output: Found: 2
}

// Check if key exists
if _, ok := cache.Get("four"); !ok {
    fmt.Println("Key 'four' not found")
}

// Remove specific entry
if removed := cache.Remove("one"); removed {
    fmt.Println("Removed 'one'")
}
```

```go
// Track evictions for logging or cleanup
evictionLog := make([]string, 0)
callback := func(key string, value int) {
    evictionLog = append(evictionLog,
        fmt.Sprintf("Evicted: %s=%d", key, value))
}

cache := twcache.NewLRUEvict[string, int](3, callback)
cache.Add("a", 1)
cache.Add("b", 2)
cache.Add("c", 3)

// This will trigger eviction of "a" (least recently used)
cache.Add("d", 4)

fmt.Println(evictionLog) // Output: [Evicted: a=1]
```

```go
// Automatic cleanup of file handles
cache := twcache.NewLRUEvict[string, *os.File](10,
    func(path string, file *os.File) {
        if file != nil {
            file.Close()
            log.Printf("Closed file: %s", path)
        }
    },
)

// Add files to cache
file, _ := os.Open("data.txt")
cache.Add("data.txt", file)

// Files are automatically closed when evicted
cache.Purge() // Closes all open files
```

```go
type User struct {
    ID   int
    Name string
    Data []byte
}

// Cache expensive database queries
var userCache = twcache.NewLRU[int, *User](1000)

func GetUser(id int) *User {
    return userCache.GetOrCompute(id, func() *User {
        // This expensive query only runs on cache miss
        return queryDatabaseForUser(id)
    })
}

// Multiple concurrent calls only query the database once
var wg sync.WaitGroup
for i := 0; i < 100; i++ {
    wg.Add(1)
    go func() {
        defer wg.Done()
        user := GetUser(42) // All goroutines get same result
        fmt.Println(user.Name)
    }()
}
wg.Wait()
```

```go
cache := twcache.NewLRU[string, []byte](500)
// Simulate cache usage
for i := 0; i < 1000; i++ {
    key := fmt.Sprintf("key-%d", i%750) // 750 distinct keys over 1000 iterations
    if val, ok := cache.Get(key); ok {
        // Cache hit
        _ = val
    } else {
        // Cache miss - compute and add
        data := []byte(fmt.Sprintf("data-%d", i))
        cache.Add(key, data)
    }
}

// Report statistics
fmt.Printf("Cache Statistics:\n")
fmt.Printf("  Size: %d/%d (%.1f%% full)\n",
    cache.Len(), cache.Cap(),
    float64(cache.Len())/float64(cache.Cap())*100)
fmt.Printf("  Hit Rate: %.2f%%\n", cache.HitRate()*100)
```

```go
// Cache can be conditionally disabled
func NewCache(enabled bool) *twcache.LRU[string, int] {
    if enabled {
        return twcache.NewLRU[string, int](100)
    }
    return nil // Disabled cache
}

// All operations safely handle a nil cache
cache := NewCache(false)

// These are all safe no-ops when cache is nil
cache.Add("key", 123)
val, ok := cache.Get("key")
fmt.Println(val, ok)         // Output: 0 false
cache.Purge()

fmt.Println(cache.Len())     // Returns 0
fmt.Println(cache.HitRate()) // Returns 0.0
```

```go
// Cache can work with any comparable key and any value type
type Product struct {
    ID    string
    Name  string
    Price float64
}

// String keys, struct values
productCache := twcache.NewLRU[string, Product](50)
productCache.Add("SKU-123", Product{
    ID:    "SKU-123",
    Name:  "Widget",
    Price: 29.99,
})

// Int keys, slice values
dataCache := twcache.NewLRU[int, []byte](100)
dataCache.Add(1, []byte("data"))

// Struct keys (must be comparable), pointer values
type CacheKey struct {
    UserID int
    Region string
}

type Session struct {
    Token     string
    ExpiresAt time.Time
}

sessionCache := twcache.NewLRU[CacheKey, *Session](1000)
sessionCache.Add(
    CacheKey{UserID: 42, Region: "us-west"},
    &Session{Token: "abc123", ExpiresAt: time.Now().Add(time.Hour)},
)
```

```go
// Demonstrate LRU eviction order
cache := twcache.NewLRU[string, int](3)

cache.Add("a", 1) // Order: [a]
cache.Add("b", 2) // Order: [b, a]
cache.Add("c", 3) // Order: [c, b, a]

// Access "a" to make it most recently used
cache.Get("a") // Order: [a, c, b]

// Adding new entry evicts "b" (least recently used)
cache.Add("d", 4) // Order: [d, a, c], "b" evicted

// Verify eviction
if _, ok := cache.Get("b"); !ok {
    fmt.Println("'b' was evicted (least recently used)")
}

// Verify remaining entries
for _, key := range []string{"a", "c", "d"} {
    if val, ok := cache.Get(key); ok {
        fmt.Printf("%s = %d\n", key, val)
    }
}
```

The LRU cache implementation is fully thread-safe:
- All operations are protected by a single mutex, so any method can be called from multiple goroutines simultaneously
- All methods are safe to call on a nil receiver, making optional caching patterns simple

Performance Characteristics:
Note: The cache uses a single mutex for all operations. For extremely high-contention scenarios, consider using multiple cache instances sharded by key hash.
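For illustration, a minimal sketch of that sharding approach; the `shardedCache` type and its FNV-based hashing are assumptions for this example, not part of the package:

```go
import "hash/fnv"

// shardedCache is a hypothetical wrapper that spreads keys across several
// independent LRU instances so concurrent access contends on separate mutexes.
type shardedCache struct {
    shards []*twcache.LRU[string, int]
}

func newShardedCache(shardCount, sizePerShard int) *shardedCache {
    s := &shardedCache{shards: make([]*twcache.LRU[string, int], shardCount)}
    for i := range s.shards {
        s.shards[i] = twcache.NewLRU[string, int](sizePerShard)
    }
    return s
}

// shard selects the instance responsible for a key by hashing it.
func (s *shardedCache) shard(key string) *twcache.LRU[string, int] {
    h := fnv.New32a()
    h.Write([]byte(key))
    return s.shards[h.Sum32()%uint32(len(s.shards))]
}

func (s *shardedCache) Get(key string) (int, bool) { return s.shard(key).Get(key) }
func (s *shardedCache) Add(key string, value int) bool {
    return s.shard(key).Add(key, value)
}
```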
The nil-safe design allows clean optional caching:
```go
type Service struct {
    cache *twcache.LRU[string, []byte]
}

func NewService(cacheSize int) *Service {
    return &Service{
        cache: twcache.NewLRU[string, []byte](cacheSize),
    }
}

func (s *Service) Process(key string) []byte {
    // Works whether the cache is enabled or nil
    return s.cache.GetOrCompute(key, func() []byte {
        return s.expensiveOperation(key)
    })
}
```

Combine with GetOrCompute for lazy initialization:
```go
var compilerCache = twcache.NewLRU[string, *regexp.Regexp](100)

func GetRegex(pattern string) *regexp.Regexp {
    return compilerCache.GetOrCompute(pattern, func() *regexp.Regexp {
        re, _ := regexp.Compile(pattern) // nil is cached for invalid patterns
        return re
    })
}
```

Classic cache-aside implementation:
```go
func GetData(id string) (Data, error) {
    // Try cache first
    if data, ok := dataCache.Get(id); ok {
        return data, nil
    }

    // Cache miss - fetch from source
    data, err := fetchFromDatabase(id)
    if err != nil {
        return Data{}, err
    }

    // Store in cache
    dataCache.Add(id, data)
    return data, nil
}
```