
tessl/pypi-django-prometheus

Django middlewares to monitor your application with Prometheus.io


Cache Monitoring

Monitor Django cache operations (get/hit/miss/fail) through instrumented cache backends for various cache systems. Provides visibility into cache performance and usage patterns.

Capabilities

Instrumented Cache Backends

Django-prometheus provides instrumented versions of Django's cache backends that automatically collect metrics on cache operations.

# File-based cache backend
django_prometheus.cache.backends.filebased.FileBasedCache

# Local memory cache backend
django_prometheus.cache.backends.locmem.LocMemCache

# Memcached backends
django_prometheus.cache.backends.memcached.PyMemcacheCache
django_prometheus.cache.backends.memcached.PyLibMCCache

# Redis cache backends
django_prometheus.cache.backends.redis.RedisCache
django_prometheus.cache.backends.redis.NativeRedisCache

# Django Memcached Consul backend
django_prometheus.cache.backends.django_memcached_consul.MemcachedCache

Cache Metrics Functions

The instrumented backends automatically track cache operations through method interception:

class CacheBackendMixin:
    """
    Mixin that adds Prometheus monitoring to Django cache backends.
    Automatically applied to all instrumented cache backends.
    """
    
    def get(self, key, default=None, version=None):
        """
        Instrumented cache get operation.
        Tracks get attempts, hits, and misses.
        
        Parameters:
        - key: cache key
        - default: default value if key not found
        - version: cache key version
        
        Returns:
        Cached value or default
        """
    
    def set(self, key, value, timeout=None, version=None):
        """
        Instrumented cache set operation.
        
        Parameters:
        - key: cache key
        - value: value to cache
        - timeout: cache timeout in seconds
        - version: cache key version
        """
    
    def delete(self, key, version=None):
        """
        Instrumented cache delete operation.
        
        Parameters:
        - key: cache key to delete
        - version: cache key version
        """
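
The interception pattern above can be sketched with plain counters. This is a simplified, hypothetical stand-in for the real mixin (which records the same events via prometheus_client counters), showing how a get is classified as a hit or a miss:

```python
class SimpleCache:
    """A toy dict-backed cache standing in for a Django cache backend."""

    def __init__(self):
        self._data = {}

    def get(self, key, default=None, version=None):
        return self._data.get(key, default)

    def set(self, key, value, timeout=None, version=None):
        self._data[key] = value


class MonitoredCache(SimpleCache):
    """Sketch of the mixin idea: count gets, hits, and misses around get()."""

    gets = hits = misses = 0

    def get(self, key, default=None, version=None):
        MonitoredCache.gets += 1
        sentinel = object()  # distinguish "missing" from a cached None
        result = super().get(key, sentinel, version=version)
        if result is sentinel:
            MonitoredCache.misses += 1
            return default
        MonitoredCache.hits += 1
        return result


cache = MonitoredCache()
cache.set("user:123", {"name": "alice"})
cache.get("user:123")   # counted as a hit
cache.get("user:999")   # counted as a miss
```

The sentinel object is the key detail: it lets the wrapper tell a genuine miss apart from a stored `None`, which is the same distinction the instrumented backends need to make.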

Monitored Metrics

Cache Operation Metrics

  • django_cache_get_total: Total get requests on cache by backend
  • django_cache_get_hits_total: Total cache hits by backend
  • django_cache_get_misses_total: Total cache misses by backend
  • django_cache_get_fail_total: Total get request failures by backend
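
On the /metrics endpoint these counters appear in the Prometheus text exposition format; the sample values below are illustrative:

```text
# TYPE django_cache_get_total counter
django_cache_get_total{backend="locmem"} 1027.0
django_cache_get_hits_total{backend="locmem"} 911.0
django_cache_get_misses_total{backend="locmem"} 116.0
```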

Configuration

Settings Configuration

Configure cache backends in Django settings:

# settings.py
CACHES = {
    'default': {
        'BACKEND': 'django_prometheus.cache.backends.redis.RedisCache',  # Instead of django-redis's 'django_redis.cache.RedisCache'; use NativeRedisCache for Django's built-in Redis backend
        'LOCATION': 'redis://127.0.0.1:6379/1',
    },
    'memcached': {
        'BACKEND': 'django_prometheus.cache.backends.memcached.PyMemcacheCache',  # Instead of 'django.core.cache.backends.memcached.PyMemcacheCache'
        'LOCATION': '127.0.0.1:11211',
    },
    'file_cache': {
        'BACKEND': 'django_prometheus.cache.backends.filebased.FileBasedCache',  # Instead of 'django.core.cache.backends.filebased.FileBasedCache'
        'LOCATION': '/var/tmp/django_cache',
    },
    'local_mem': {
        'BACKEND': 'django_prometheus.cache.backends.locmem.LocMemCache',  # Instead of 'django.core.cache.backends.locmem.LocMemCache'
        'LOCATION': 'unique-snowflake',
    }
}

Multiple Cache Configuration

CACHES = {
    'default': {
        'BACKEND': 'django_prometheus.cache.backends.redis.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379/0',
    },
    'sessions': {
        'BACKEND': 'django_prometheus.cache.backends.redis.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379/1',
    },
    'api_cache': {
        'BACKEND': 'django_prometheus.cache.backends.memcached.PyMemcacheCache',
        'LOCATION': '127.0.0.1:11211',
    }
}

# Metrics are labeled by backend type, so both Redis caches share backend="redis"
# while the memcached cache is tracked under backend="memcached"

Usage Examples

Basic Cache Monitoring

from django.core.cache import cache

# These operations will be automatically tracked in cache metrics:
cache.set('user:123', user_data, timeout=300)  # Cache set (not directly tracked)
user_data = cache.get('user:123')  # Cache get - tracked as hit if found, miss if not
cache.delete('user:123')  # Cache delete (not directly tracked)

# Get with default
user_data = cache.get('user:456', default={})  # Tracked as hit/miss

Multiple Cache Usage

from django.core.cache import caches

# Use specific cache backends
default_cache = caches['default']
session_cache = caches['sessions']
api_cache = caches['api_cache']

# Metrics are aggregated by backend type, not by cache alias
default_cache.get('key1')  # Tracked under 'redis' backend
session_cache.get('session:abc')  # Tracked under 'redis' backend  
api_cache.get('api:endpoint:123')  # Tracked under 'memcached' backend

Cache Decorators

from django.views.decorators.cache import cache_page
from django.core.cache import cache

@cache_page(60 * 15)  # Cache for 15 minutes
def my_view(request):
    # View caching will be tracked in cache metrics
    return HttpResponse("Cached content")

# Manual caching in views
def api_view(request):
    cache_key = f"api:data:{request.user.id}"
    data = cache.get(cache_key)  # Tracked as hit/miss
    
    if data is None:
        data = expensive_computation()
        cache.set(cache_key, data, timeout=300)
    
    return JsonResponse(data)

Cache Template Tags

<!-- Template fragment caching is also tracked -->
{% load cache %}
{% cache 500 my_cache_key user.id %}
    <!-- Expensive template rendering -->
    {% for item in expensive_queryset %}
        <div>{{ item.name }}</div>
    {% endfor %}
{% endcache %}

Metric Labels

All cache metrics include the following label:

  • backend: Cache backend type (e.g., 'redis', 'memcached', 'filebased', 'locmem')

Supported Cache Backends

Redis

  • RedisCache: instrumented wrapper for the django-redis cache backend
  • NativeRedisCache: instrumented wrapper for Django's built-in Redis cache backend (Django 4.0+)
  • Failed get operations are reported via django_cache_get_fail_total

Memcached

  • PyMemcacheCache: Pure Python memcached client
  • PyLibMCCache: libmemcached-based client
  • MemcachedCache: Memcached with Consul discovery

File-based

  • FileBasedCache: File system-based caching
  • Suitable for single-server deployments

Local Memory

  • LocMemCache: In-process memory caching
  • Per-process cache, not shared across workers

Performance Considerations

  • Cache monitoring adds minimal overhead to cache operations
  • Hit/miss detection is efficient and doesn't impact cache performance
  • Backend labeling allows for performance analysis across different cache systems
  • No impact on cache timeout or eviction policies

Cache Hit Rate Analysis

Use the metrics to calculate cache hit rates:

# Hit rate calculation:
# hit_rate = django_cache_get_hits_total / django_cache_get_total

# Miss rate calculation:  
# miss_rate = django_cache_get_misses_total / django_cache_get_total

# Failure rate calculation:
# failure_rate = django_cache_get_fail_total / django_cache_get_total
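
As PromQL, these ratios are usually computed over rate() windows rather than raw counter values. Query sketches, assuming the default metric names above:

```promql
# Hit rate per backend over the last 5 minutes
sum by (backend) (rate(django_cache_get_hits_total[5m]))
  /
sum by (backend) (rate(django_cache_get_total[5m]))

# Failure rate per backend over the last 5 minutes
sum by (backend) (rate(django_cache_get_fail_total[5m]))
  /
sum by (backend) (rate(django_cache_get_total[5m]))
```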

Common Cache Patterns Monitored

  • View caching: Automatic tracking of Django's cache framework
  • Template fragment caching: Tracked through cache backend calls
  • Session caching: When using cache-based session storage
  • Database query caching: Manual cache operations in ORM
  • API response caching: Custom cache operations in views
  • Static file caching: When using cache backends for static content
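
For example, cache-backed sessions route every session read through the instrumented backend once Django's cache session engine is enabled. This uses standard Django settings and assumes a 'sessions' cache alias like the one configured above:

```python
# settings.py -- store sessions in the 'sessions' cache alias
SESSION_ENGINE = "django.contrib.sessions.backends.cache"
SESSION_CACHE_ALIAS = "sessions"
```

Session lookups then show up as hits and misses on the corresponding backend's `django_cache_get_*` counters.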

Install with Tessl CLI

npx tessl i tessl/pypi-django-prometheus
