The rate limiter implements a token bucket algorithm to control the frequency of events. It maintains a bucket of tokens that refills at a specified rate, allowing bursts while enforcing an average rate limit.
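To make that concrete, here is a small sketch (not taken from the package documentation; the numbers are illustrative and the rate and time packages are assumed to be imported) showing the burst-then-steady behavior using the NewLimiter constructor and Allow method described below:
lim := rate.NewLimiter(5, 3) // refill at 5 tokens/sec, bucket holds 3
// The bucket starts full, so an initial burst of 3 events is allowed.
lim.Allow() // true
lim.Allow() // true
lim.Allow() // true
// The bucket is now empty; an immediate 4th event is rejected.
lim.Allow() // false
// After 200ms one token has been refilled (at 5 tokens/sec).
time.Sleep(200 * time.Millisecond)
lim.Allow() // true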
import "golang.org/x/time/rate"type Limit float64Defines the maximum frequency of events as number of events per second. A zero Limit allows no events.
Constants:
const Inf = Limit(math.MaxFloat64)

Inf is the infinite rate limit; it allows all events, even if the burst size is zero.
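For example (a small sketch; assumes fmt and time are imported), a limiter constructed with Inf admits every event even though its burst is zero:
unlimited := rate.NewLimiter(rate.Inf, 0)
fmt.Println(unlimited.Allow())                  // true
fmt.Println(unlimited.AllowN(time.Now(), 1000)) // true; burst is ignored when the rate is Inf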
Associated Function:
func Every(interval time.Duration) Limit

Every converts a minimum time interval between events to a Limit. If interval <= 0, it returns Inf.
Example:
// Allow one event every 100 milliseconds (10 events/second)
limit := rate.Every(100 * time.Millisecond)
// Equivalent to:
// limit := rate.Limit(10)

type Limiter struct {
// Has unexported fields
}

A Limiter controls how frequently events are allowed to happen. It implements a "token bucket" of size b, initially full and refilled at rate r tokens per second.
Key Properties:
- The zero value allows no events; construct limiters with NewLimiter instead
- The main methods are Allow, Reserve, and Wait (and their N variants)
- If r == Inf, the burst size b is ignored

Constructor:
func NewLimiter(r Limit, b int) *Limiter

Returns a new Limiter that allows events up to rate r and permits bursts of at most b tokens.
Parameters:
- r: Rate limit (events per second)
- b: Maximum burst size (maximum number of tokens)

Example:
// Allow 100 requests per second with burst of 50
limiter := rate.NewLimiter(100, 50)
// Allow one request every 10 milliseconds with a burst of 1
limiter := rate.NewLimiter(rate.Every(10*time.Millisecond), 1)
// Allow unlimited rate
limiter := rate.NewLimiter(rate.Inf, 0)

func (lim *Limiter) Limit() Limit

Returns the maximum overall event rate.
func (lim *Limiter) Burst() int

Returns the maximum burst size. Burst is the maximum number of tokens that can be consumed in a single call to Allow, Reserve, or Wait. Higher burst values allow more events to happen at once. A zero burst allows no events, unless limit == Inf.
func (lim *Limiter) Tokens() float64

Returns the number of tokens currently available (shorthand for TokensAt(time.Now())).
func (lim *Limiter) TokensAt(t time.Time) float64

Returns the number of tokens available at time t.
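These accessors are useful for introspection. The sketch below is a minimal illustration (assumes fmt and time are imported; nothing here beyond the four methods shown is part of the package) that prints a limiter's configuration and remaining capacity:
lim := rate.NewLimiter(rate.Limit(100), 20)
fmt.Printf("limit: %v events/sec\n", lim.Limit()) // 100
fmt.Printf("burst: %d tokens\n", lim.Burst())     // 20
fmt.Printf("tokens available now: %.2f\n", lim.Tokens())
// Project availability at a future instant.
fmt.Printf("tokens available in 50ms: %.2f\n",
	lim.TokensAt(time.Now().Add(50*time.Millisecond)))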
When to use:
Use when you want to drop/skip events that exceed the rate limit.
func (lim *Limiter) Allow() bool

Reports whether an event may happen now. Shorthand for AllowN(time.Now(), 1).
Returns: true if the event is allowed, false otherwise.
Example:
limiter := rate.NewLimiter(10, 5)
if limiter.Allow() {
// Event is allowed, proceed
processRequest()
} else {
// Rate limit exceeded, drop request
http.Error(w, "Rate limit exceeded", http.StatusTooManyRequests)
}

func (lim *Limiter) AllowN(t time.Time, n int) bool

Reports whether n events may happen at time t. Use this method if you intend to drop/skip events that exceed the rate limit. Otherwise use Reserve or Wait.
Parameters:
- t: The time to check
- n: Number of tokens to consume

Returns: true if n tokens are available at time t, false otherwise.
Example:
// Check if we can send a batch of 10 items
if limiter.AllowN(time.Now(), 10) {
sendBatch(items)
} else {
// Split into smaller batches or drop
log.Println("Batch too large for current rate limit")
}

When to use:
Use when you want to wait but need to calculate the delay duration yourself or cancel the reservation.
func (lim *Limiter) Reserve() *Reservation

Shorthand for ReserveN(time.Now(), 1). Returns a Reservation that indicates how long the caller must wait.
func (lim *Limiter) ReserveN(t time.Time, n int) *Reservation

Returns a Reservation that indicates how long the caller must wait before n events happen. The Limiter takes this Reservation into account when allowing future events. The returned Reservation's OK() method returns false if n exceeds the Limiter's burst size.
Parameters:
- t: The time of the reservation
- n: Number of tokens to reserve

Returns: A Reservation object (never nil)
Usage Pattern:
r := limiter.ReserveN(time.Now(), 1)
if !r.OK() {
// Not allowed to act! Did you set burst > 0?
return
}
time.Sleep(r.Delay())
// Now perform the action
performAction()

When to use:
Use when you want to block until tokens are available. This is the most common pattern.
func (lim *Limiter) Wait(ctx context.Context) (err error)

Blocks until the limiter permits an event to happen. Shorthand for WaitN(ctx, 1).
Parameters:
- ctx: Context for cancellation and deadlines

Returns:
- nil if the event is allowed
- an error if the burst is zero (and the limit is not Inf), the context is canceled, or the expected wait time exceeds the context's deadline

Example:
limiter := rate.NewLimiter(10, 5)
ctx := context.Background()
if err := limiter.Wait(ctx); err != nil {
log.Printf("Rate limiter error: %v", err)
return
}
// Proceed with rate-limited operation
processRequest()

With Context Timeout:
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := limiter.Wait(ctx); err != nil {
if errors.Is(err, context.DeadlineExceeded) {
return errors.New("timeout waiting for rate limit")
}
return err
}

func (lim *Limiter) WaitN(ctx context.Context, n int) (err error)

Blocks until the limiter permits n events to happen.
Parameters:
- ctx: Context for cancellation and deadlines
- n: Number of tokens to wait for

Returns:
- nil if the events are allowed
- an error if n exceeds the burst size, the context is canceled, or the wait time exceeds the context's deadline

Note: The burst limit is ignored if the rate limit is Inf.
Example:
// Wait for permission to send a batch of items
if err := limiter.WaitN(ctx, len(batch)); err != nil {
return fmt.Errorf("batch too large or timeout: %w", err)
}
sendBatch(batch)

func (lim *Limiter) SetLimit(newLimit Limit)

Sets a new rate limit. Shorthand for SetLimitAt(time.Now(), newLimit).
Example:
// Dynamically adjust rate based on system load
if highLoad {
limiter.SetLimit(rate.Limit(50)) // Reduce to 50/sec
} else {
limiter.SetLimit(rate.Limit(100)) // Increase to 100/sec
}

func (lim *Limiter) SetLimitAt(t time.Time, newLimit Limit)

Sets a new Limit for the limiter at time t. The new Limit and burst may be violated or underutilized by callers that reserved tokens (using Reserve or Wait) but had not yet acted before SetLimitAt was called.
func (lim *Limiter) SetBurst(newBurst int)

Sets a new burst size. Shorthand for SetBurstAt(time.Now(), newBurst).
func (lim *Limiter) SetBurstAt(t time.Time, newBurst int)

Sets a new burst size for the limiter at time t.
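As a sketch of how the two setters might be combined to retune a live limiter (the applyConfig helper and its values are hypothetical, not part of the package):
// Retune an existing limiter in place. Reservations made before the
// change (via Reserve or Wait) are not retroactively adjusted.
func applyConfig(lim *rate.Limiter, eventsPerSec float64, burst int) {
	lim.SetLimit(rate.Limit(eventsPerSec))
	lim.SetBurst(burst)
}
// For example, loosen the limiter during an off-peak window:
applyConfig(limiter, 500, 100)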
type Reservation struct {
// Has unexported fields
}

A Reservation holds information about events that are permitted by a Limiter to happen after a delay. A Reservation may be canceled, which may enable the Limiter to permit additional events.
Constant:
const InfDuration = time.Duration(math.MaxInt64)

The duration returned by Delay when a Reservation is not OK.
func (r *Reservation) OK() bool

Returns whether the limiter can provide the requested number of tokens within the maximum wait time. If OK() is false, Delay() returns InfDuration, and Cancel() does nothing.
func (r *Reservation) Delay() time.Duration

Returns the duration for which the reservation holder must wait before taking the reserved action. Shorthand for DelayFrom(time.Now()).
Returns:
- 0 if tokens are immediately available
- InfDuration if the reservation is not OK

func (r *Reservation) DelayFrom(t time.Time) time.Duration

Returns the duration for which the reservation holder must wait before taking the reserved action, calculated from time t.
Returns:
- 0 if tokens are immediately available or t is after the reservation time
- InfDuration if the reservation is not OK

func (r *Reservation) Cancel()

Indicates that the reservation holder will not perform the reserved action. Shorthand for CancelAt(time.Now()).
Effect: Reverses the effects of the Reservation on the rate limit as much as possible, potentially allowing other events to proceed sooner.
func (r *Reservation) CancelAt(t time.Time)

Indicates that the reservation holder will not perform the reserved action and reverses the effects of this Reservation on the rate limit as much as possible, considering that other reservations may have already been made.
Parameters:
- t: The time of cancellation

Example:
r := limiter.Reserve()
if !r.OK() {
return errors.New("rate limit exceeded")
}
// Try to acquire a resource
resource, err := acquireResource()
if err != nil {
// Cancel the reservation since we won't use it
r.Cancel()
return err
}
// Wait for the rate limit
time.Sleep(r.Delay())
// Use the resource
useResource(resource)

import (
"context"
"golang.org/x/time/rate"
)
func handleRequest(limiter *rate.Limiter) error {
// Wait blocks until request is allowed
if err := limiter.Wait(context.Background()); err != nil {
return err
}
// Process request
return processRequest()
}

func tryHandle(limiter *rate.Limiter) error {
if !limiter.Allow() {
return errors.New("rate limit exceeded")
}
return processRequest()
}

func handleWithTimeout(limiter *rate.Limiter) error {
r := limiter.Reserve()
if !r.OK() {
return errors.New("rate limit exceeded")
}
delay := r.Delay()
if delay > 5*time.Second {
r.Cancel()
return errors.New("wait time too long")
}
time.Sleep(delay)
return processRequest()
}

type RateLimiter struct {
limiters sync.Map // map[string]*rate.Limiter
}
func (rl *RateLimiter) GetLimiter(userID string) *rate.Limiter {
// LoadOrStore returns the existing limiter for this user, or stores
// and returns a fresh one (10 events/sec, burst of 5) on first use.
limiter, _ := rl.limiters.LoadOrStore(userID, rate.NewLimiter(10, 5))
return limiter.(*rate.Limiter)
}
func (rl *RateLimiter) Allow(userID string) bool {
return rl.GetLimiter(userID).Allow()
}

type AdaptiveRateLimiter struct {
limiter *rate.Limiter
mu sync.Mutex
}
func (arl *AdaptiveRateLimiter) AdjustRate(newRate float64) {
arl.mu.Lock()
defer arl.mu.Unlock()
arl.limiter.SetLimit(rate.Limit(newRate))
}
func (arl *AdaptiveRateLimiter) MonitorLoad() {
ticker := time.NewTicker(1 * time.Minute)
for range ticker.C {
load := getSystemLoad()
if load > 0.8 {
arl.AdjustRate(50) // Reduce rate under high load
} else {
arl.AdjustRate(100) // Normal rate
}
}
}

// Allow bursts of up to 100 requests, but maintain 10/sec average
limiter := rate.NewLimiter(10, 100)
func handleBurst(requests []Request) {
for _, req := range requests {
if err := limiter.Wait(context.Background()); err != nil {
log.Printf("Rate limit error: %v", err)
continue
}
handleRequest(req)
}
}

All methods on Limiter and Reservation are safe for concurrent use by multiple goroutines.
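Because of that guarantee, a single Limiter can be shared across goroutines without additional locking. A minimal sketch (the worker count, jobs channel, and process function are illustrative assumptions; assumes context, sync, and rate are imported):
lim := rate.NewLimiter(rate.Limit(20), 5) // shared by every worker
jobs := make(chan int)
var wg sync.WaitGroup
for w := 0; w < 4; w++ {
	wg.Add(1)
	go func() {
		defer wg.Done()
		for job := range jobs {
			// All workers block on the same limiter; no extra mutex needed.
			if err := lim.Wait(context.Background()); err != nil {
				return
			}
			process(job) // hypothetical per-job work
		}
	}()
}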
Cause: Attempting to wait for more tokens than the burst size allows.
Solution: Either increase the burst size or split the operation into smaller chunks:
// Option 1: Increase burst
limiter := rate.NewLimiter(10, 100) // burst = 100
// Option 2: Split the operation into chunks no larger than the burst
burst := limiter.Burst()
for i := 0; i < n; i += burst {
count := min(burst, n-i)
if err := limiter.WaitN(ctx, count); err != nil {
return err
}
processChunk(i, count)
}Cause: The wait time required exceeds the context deadline.
Solution: Use a longer deadline or handle the error appropriately:
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
if err := limiter.Wait(ctx); err != nil {
if errors.Is(err, context.DeadlineExceeded) {
// Handle timeout
return errors.New("rate limit wait timeout")
}
return err
}