tessl/pypi-langchain-tests

Standard tests for LangChain implementations

Cache Testing

Testing suites for LangChain cache implementations with both synchronous and asynchronous support. Cache tests cover cache hits, misses, updates, clearing operations, and multi-generation caching patterns essential for LLM response caching.

Capabilities

Synchronous Cache Testing

Comprehensive testing suite for synchronous cache implementations.

from langchain_tests.integration_tests import SyncCacheTestSuite

class SyncCacheTestSuite(BaseStandardTests):
    """Synchronous cache testing suite."""
    
    # Required fixture
    @pytest.fixture
    def cache(self):
        """Empty cache instance for testing. Must be implemented by test class."""
    
    # Helper methods for generating test data
    def get_sample_prompt(self) -> str:
        """Generate a sample prompt for cache testing."""
    
    def get_sample_llm_string(self) -> str:
        """Generate a sample LLM configuration string."""
    
    def get_sample_generation(self):
        """Generate a sample LLM generation object."""
    
    # Cache state tests
    def test_cache_is_empty(self) -> None:
        """Verify that the cache starts empty."""
    
    def test_cache_still_empty(self) -> None:
        """Verify that the cache is properly cleaned up after tests."""
    
    # Basic cache operations
    def test_update_cache(self) -> None:
        """Test adding entries to the cache."""
    
    def test_clear_cache(self) -> None:
        """Test clearing all cache entries."""
    
    # Cache hit/miss behavior
    def test_cache_miss(self) -> None:
        """Test cache miss behavior when entries don't exist."""
    
    def test_cache_hit(self) -> None:
        """Test cache hit behavior when entries exist."""
    
    # Multi-generation caching
    def test_update_cache_with_multiple_generations(self) -> None:
        """Test caching multiple generations for the same prompt."""

Usage Example

import pytest
from langchain_tests.integration_tests import SyncCacheTestSuite
from my_integration import MyCache

class TestMyCache(SyncCacheTestSuite):
    @pytest.fixture
    def cache(self):
        # Create a fresh cache instance for each test
        cache_instance = MyCache(
            connection_url="redis://localhost:6379/0",
            namespace="test_cache"
        )
        yield cache_instance
        # Cleanup after test
        cache_instance.clear()

Asynchronous Cache Testing

Comprehensive testing suite for asynchronous cache implementations.

from langchain_tests.integration_tests import AsyncCacheTestSuite

class AsyncCacheTestSuite(BaseStandardTests):
    """Asynchronous cache testing suite."""
    
    # Required fixture
    @pytest.fixture
    async def cache(self):
        """Empty async cache instance for testing. Must be implemented by test class."""
    
    # Helper methods (same as sync version)
    def get_sample_prompt(self) -> str:
        """Generate a sample prompt for cache testing."""
    
    def get_sample_llm_string(self) -> str:
        """Generate a sample LLM configuration string."""
    
    def get_sample_generation(self):
        """Generate a sample LLM generation object."""
    
    # Async cache state tests
    async def test_cache_is_empty(self) -> None:
        """Verify that the async cache starts empty."""
    
    async def test_cache_still_empty(self) -> None:
        """Verify that the async cache is properly cleaned up after tests."""
    
    # Async cache operations
    async def test_update_cache(self) -> None:
        """Test adding entries to the async cache."""
    
    async def test_clear_cache(self) -> None:
        """Test clearing all async cache entries."""
    
    # Async cache hit/miss behavior
    async def test_cache_miss(self) -> None:
        """Test async cache miss behavior when entries don't exist."""
    
    async def test_cache_hit(self) -> None:
        """Test async cache hit behavior when entries exist."""
    
    # Async multi-generation caching
    async def test_update_cache_with_multiple_generations(self) -> None:
        """Test async caching of multiple generations for the same prompt."""

Usage Example

import pytest
from langchain_tests.integration_tests import AsyncCacheTestSuite
from my_integration import MyAsyncCache

class TestMyAsyncCache(AsyncCacheTestSuite):
    @pytest.fixture
    async def cache(self):
        # Create a fresh async cache instance for each test
        cache_instance = await MyAsyncCache.create(
            connection_url="redis://localhost:6379/0",
            namespace="test_async_cache"
        )
        yield cache_instance
        # Cleanup after test
        await cache_instance.clear()
        await cache_instance.close()

Test Data Generation

The cache testing framework provides helper methods for generating consistent test data:

Sample Prompt Generator

def get_sample_prompt(self) -> str:
    """
    Generate a sample prompt for cache testing.
    
    Returns:
        str: A consistent prompt string for testing cache operations
    """

Sample LLM String Generator

def get_sample_llm_string(self) -> str:
    """
    Generate a sample LLM configuration string.
    
    Returns:
        str: A serialized LLM configuration string that represents
             the model settings for cache key generation
    """

Sample Generation Object

def get_sample_generation(self):
    """
    Generate a sample LLM generation object.
    
    Returns:
        Generation: A sample generation object containing text output
                   and metadata that would be cached
    """
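The helpers above supply consistent inputs; the suites then exercise implementations through a `lookup`/`update`/`clear` interface. A minimal in-memory stand-in (illustrative only, not part of the package) shows the contract the tests assume:

```python
# Illustrative in-memory cache demonstrating the lookup/update/clear
# contract exercised by the test suites. A real implementation would
# persist entries to a backend such as Redis.
class InMemoryCache:
    def __init__(self):
        self._store = {}

    def lookup(self, prompt, llm_string):
        # Return cached generations, or None on a cache miss
        return self._store.get((prompt, llm_string))

    def update(self, prompt, llm_string, generations):
        self._store[(prompt, llm_string)] = generations

    def clear(self):
        self._store.clear()

cache = InMemoryCache()
prompt, llm_string = "Tell me a joke", "fake-llm:temp=0.0"
assert cache.lookup(prompt, llm_string) is None      # miss before update
cache.update(prompt, llm_string, ["Why did the chicken..."])
assert cache.lookup(prompt, llm_string) is not None  # hit after update
cache.clear()
assert cache.lookup(prompt, llm_string) is None      # empty after clear
```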

Cache Key Generation

Cache implementations must handle proper key generation based on:

  • Prompt Content: The input text or messages
  • LLM Configuration: Model settings, temperature, max tokens, etc.
  • Additional Parameters: Stop sequences, presence penalty, etc.

Key Generation Pattern

import hashlib

def _generate_cache_key(self, prompt: str, llm_string: str) -> str:
    """Generate a unique cache key from prompt and LLM configuration."""
    combined = f"{prompt}:{llm_string}"
    return hashlib.sha256(combined.encode()).hexdigest()
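As a standalone illustration of this pattern (the `llm_string` values here are hypothetical configuration strings):

```python
import hashlib

def generate_cache_key(prompt: str, llm_string: str) -> str:
    """Hash prompt + LLM configuration into a fixed-length key."""
    combined = f"{prompt}:{llm_string}"
    return hashlib.sha256(combined.encode()).hexdigest()

# The same inputs always yield the same key...
k1 = generate_cache_key("Hello", "gpt-4:temp=0.0")
k2 = generate_cache_key("Hello", "gpt-4:temp=0.0")
# ...while changing either the prompt or the configuration changes it.
k3 = generate_cache_key("Hello", "gpt-4:temp=0.7")
```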

Multi-Generation Support

LangChain caches can store multiple generations for the same prompt, which is essential for:

  • Temperature > 0: Different outputs for the same prompt
  • n > 1: Multiple completions requested in a single call
  • Batch Processing: Storing multiple variants

Multi-Generation Test Pattern

from langchain_core.outputs import Generation

def test_update_cache_with_multiple_generations(self, cache):
    """Test that the cache can store multiple generations for the same prompt."""
    prompt = self.get_sample_prompt()
    llm_string = self.get_sample_llm_string()
    
    # Create multiple generations for the same prompt
    generations = [
        Generation(text="First response"),
        Generation(text="Second response"),
        Generation(text="Third response"),
    ]
    
    # Cache all generations under a single key
    cache.update(prompt, llm_string, generations)
    
    # Verify all generations are retrievable
    cached_generations = cache.lookup(prompt, llm_string)
    assert cached_generations is not None
    assert len(cached_generations) == 3

Cache Invalidation

Cache tests verify proper invalidation behavior:

def test_cache_invalidation_on_llm_change(self) -> None:
    """Test that cache misses when LLM configuration changes."""

def test_cache_invalidation_on_prompt_change(self) -> None:
    """Test that cache misses when prompt changes."""

Error Handling

Cache implementations should handle various error conditions gracefully:

def test_cache_connection_error(self) -> None:
    """Test behavior when cache backend is unavailable."""

def test_cache_serialization_error(self) -> None:
    """Test handling of objects that cannot be serialized."""

def test_cache_memory_pressure(self) -> None:
    """Test behavior under memory pressure conditions."""
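One possible approach to graceful degradation (a sketch, not part of the test suite) is a wrapper that turns backend failures into cache misses so that an unavailable cache never fails the LLM call:

```python
# Illustrative graceful-degradation wrapper: if the backend raises
# (e.g. connection lost), lookups degrade to a cache miss and
# updates are silently dropped.
class SafeCache:
    def __init__(self, backend):
        self._backend = backend

    def lookup(self, prompt, llm_string):
        try:
            return self._backend.lookup(prompt, llm_string)
        except Exception:
            return None  # degrade to a miss instead of raising

    def update(self, prompt, llm_string, generations):
        try:
            self._backend.update(prompt, llm_string, generations)
        except Exception:
            pass  # drop the write rather than fail the caller

class BrokenBackend:
    """Simulates an unreachable cache backend."""
    def lookup(self, *args):
        raise ConnectionError("backend unavailable")
    def update(self, *args):
        raise ConnectionError("backend unavailable")

cache = SafeCache(BrokenBackend())
cache.update("p", "llm", ["gen"])        # does not raise
assert cache.lookup("p", "llm") is None  # miss, not an exception
```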

Performance Testing

Cache tests include performance benchmarks:

def test_cache_write_performance(self) -> None:
    """Benchmark cache write operations."""

def test_cache_read_performance(self) -> None:
    """Benchmark cache read operations."""

def test_cache_bulk_operations(self) -> None:
    """Test performance with bulk cache operations."""
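A rough sketch of how such a benchmark could be written with `time.perf_counter` against an in-memory cache (illustrative only; not a rigorous benchmark harness):

```python
import time

class DictCache:
    """In-memory cache used as the benchmark subject."""
    def __init__(self):
        self._store = {}

    def update(self, prompt, llm_string, generations):
        self._store[(prompt, llm_string)] = generations

    def lookup(self, prompt, llm_string):
        return self._store.get((prompt, llm_string))

def benchmark(cache, n=10_000):
    """Time n writes then n reads; return (writes/sec, reads/sec)."""
    start = time.perf_counter()
    for i in range(n):
        cache.update(f"prompt-{i}", "llm", [f"gen-{i}"])
    write_seconds = time.perf_counter() - start

    start = time.perf_counter()
    for i in range(n):
        cache.lookup(f"prompt-{i}", "llm")
    read_seconds = time.perf_counter() - start

    return n / write_seconds, n / read_seconds

writes_per_s, reads_per_s = benchmark(DictCache())
```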

Cache Configuration

Tests verify that cache implementations respect configuration parameters:

TTL (Time To Live) Testing

def test_cache_ttl_expiration(self):
    """Test that cache entries expire after TTL."""
    
def test_cache_ttl_refresh(self):
    """Test that cache TTL is refreshed on access."""
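A minimal sketch of a TTL cache with lazy expiration, useful for reasoning about what `test_cache_ttl_expiration` should observe (illustrative only; real backends such as Redis expire keys server-side):

```python
import time

class TTLCache:
    """Illustrative TTL cache: entries expire `ttl` seconds after insertion."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self._store = {}

    def update(self, prompt, llm_string, generations):
        self._store[(prompt, llm_string)] = (time.monotonic(), generations)

    def lookup(self, prompt, llm_string):
        entry = self._store.get((prompt, llm_string))
        if entry is None:
            return None
        inserted_at, generations = entry
        if time.monotonic() - inserted_at > self.ttl:
            # Lazy expiration: drop the stale entry on access
            del self._store[(prompt, llm_string)]
            return None
        return generations

cache = TTLCache(ttl=0.05)
cache.update("prompt", "llm", ["cached"])
assert cache.lookup("prompt", "llm") == ["cached"]  # within TTL: hit
time.sleep(0.1)
assert cache.lookup("prompt", "llm") is None        # past TTL: miss
```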

Size Limits

def test_cache_size_limit(self):
    """Test that cache respects maximum size limits."""
    
def test_cache_eviction_policy(self):
    """Test cache eviction when size limit is reached."""
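An LRU eviction policy, for example, can be sketched with `collections.OrderedDict` (illustrative; real backends typically delegate eviction to the store itself):

```python
from collections import OrderedDict

class LRUCache:
    """Size-bounded cache: once max_size is exceeded, the least
    recently used entry is evicted on the next write."""

    def __init__(self, max_size: int):
        self.max_size = max_size
        self._store = OrderedDict()

    def update(self, prompt, llm_string, generations):
        key = (prompt, llm_string)
        self._store[key] = generations
        self._store.move_to_end(key)
        if len(self._store) > self.max_size:
            self._store.popitem(last=False)  # evict least recently used

    def lookup(self, prompt, llm_string):
        key = (prompt, llm_string)
        if key not in self._store:
            return None
        self._store.move_to_end(key)  # mark as recently used
        return self._store[key]

cache = LRUCache(max_size=2)
cache.update("a", "llm", ["A"])
cache.update("b", "llm", ["B"])
cache.lookup("a", "llm")           # touch "a" so "b" becomes LRU
cache.update("c", "llm", ["C"])    # exceeds max_size -> evicts "b"
assert cache.lookup("b", "llm") is None
assert cache.lookup("a", "llm") == ["A"]
```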

Thread Safety

For implementations that support concurrent access:

def test_cache_thread_safety(self) -> None:
    """Test cache behavior under concurrent access."""

def test_cache_atomic_operations(self) -> None:
    """Test that cache operations are atomic."""
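A minimal illustration of the property under test: a lock-guarded cache whose concurrent writes all land intact (sketch only; real implementations may rely on backend-level atomicity instead of a process-local lock):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class ThreadSafeCache:
    """A lock guards the shared dict so concurrent updates
    cannot interleave mid-operation."""

    def __init__(self):
        self._store = {}
        self._lock = threading.Lock()

    def update(self, prompt, llm_string, generations):
        with self._lock:
            self._store[(prompt, llm_string)] = generations

    def lookup(self, prompt, llm_string):
        with self._lock:
            return self._store.get((prompt, llm_string))

cache = ThreadSafeCache()
with ThreadPoolExecutor(max_workers=8) as pool:
    for i in range(100):
        pool.submit(cache.update, f"prompt-{i}", "llm", [f"gen-{i}"])

# Every write must be visible once the pool has drained
assert all(
    cache.lookup(f"prompt-{i}", "llm") == [f"gen-{i}"] for i in range(100)
)
```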

Persistence Testing

For persistent cache implementations:

def test_cache_persistence(self) -> None:
    """Test that cache data survives restart."""

def test_cache_recovery(self) -> None:
    """Test cache recovery from corruption."""
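A toy persistent cache shows the property `test_cache_persistence` checks: a fresh instance constructed from the same path sees earlier writes. The JSON file format and NUL-joined keys here are purely illustrative:

```python
import json
import os
import tempfile

class FileCache:
    """Illustrative persistent cache: entries are flushed to a JSON
    file on every write and reloaded on construction, so data
    survives a simulated process restart."""

    def __init__(self, path):
        self.path = path
        self._store = {}
        if os.path.exists(path):
            with open(path) as f:
                raw = json.load(f)
            # Keys were joined with NUL for JSON; split them back apart
            self._store = {tuple(k.split("\x00", 1)): v for k, v in raw.items()}

    def update(self, prompt, llm_string, generations):
        self._store[(prompt, llm_string)] = generations
        with open(self.path, "w") as f:
            json.dump({"\x00".join(k): v for k, v in self._store.items()}, f)

    def lookup(self, prompt, llm_string):
        return self._store.get((prompt, llm_string))

path = os.path.join(tempfile.mkdtemp(), "cache.json")
FileCache(path).update("p", "llm", ["gen"])
# A brand-new instance simulates a restart and must see the write
assert FileCache(path).lookup("p", "llm") == ["gen"]
```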

Cleanup and Isolation

Proper test isolation is critical for cache testing:

import uuid

import pytest

@pytest.fixture
def cache(self):
    """Cache fixture with proper isolation."""
    # A unique namespace per test prevents cross-test contamination
    namespace = f"test_{uuid.uuid4()}"
    cache_instance = MyCache(namespace=namespace)
    yield cache_instance
    # Ensure complete cleanup
    cache_instance.clear()
    cache_instance.close()

Integration with LangChain

Cache tests verify integration with LangChain's caching system:

def test_langchain_cache_integration(self) -> None:
    """Test integration with LangChain's global cache."""

def test_per_request_caching(self) -> None:
    """Test per-request cache behavior."""

The cache testing framework ensures that cache implementations provide reliable, performant caching for LLM responses while handling edge cases, errors, and concurrent access patterns appropriately.

Install with Tessl CLI

npx tessl i tessl/pypi-langchain-tests
