
# Cache Testing

Testing suites for LangChain cache implementations, with both synchronous and asynchronous support. Cache tests cover cache hits, misses, updates, clearing operations, and the multi-generation caching patterns essential for LLM response caching.

## Capabilities

### Synchronous Cache Testing

A comprehensive test suite for synchronous cache implementations.

```python { .api }
from langchain_tests.integration_tests import SyncCacheTestSuite

class SyncCacheTestSuite(BaseStandardTests):
    """Synchronous cache testing suite."""

    # Required fixture
    @pytest.fixture
    def cache(self):
        """Empty cache instance for testing. Must be implemented by the test class."""

    # Helper methods for generating test data
    def get_sample_prompt(self) -> str:
        """Generate a sample prompt for cache testing."""

    def get_sample_llm_string(self) -> str:
        """Generate a sample LLM configuration string."""

    def get_sample_generation(self):
        """Generate a sample LLM generation object."""

    # Cache state tests
    def test_cache_is_empty(self) -> None:
        """Verify that the cache starts empty."""

    def test_cache_still_empty(self) -> None:
        """Verify that the cache is properly cleaned up after tests."""

    # Basic cache operations
    def test_update_cache(self) -> None:
        """Test adding entries to the cache."""

    def test_clear_cache(self) -> None:
        """Test clearing all cache entries."""

    # Cache hit/miss behavior
    def test_cache_miss(self) -> None:
        """Test cache miss behavior when entries don't exist."""

    def test_cache_hit(self) -> None:
        """Test cache hit behavior when entries exist."""

    # Multi-generation caching
    def test_update_cache_with_multiple_generations(self) -> None:
        """Test caching multiple generations for the same prompt."""
```

#### Usage Example

```python
import pytest
from langchain_tests.integration_tests import SyncCacheTestSuite
from my_integration import MyCache

class TestMyCache(SyncCacheTestSuite):
    @pytest.fixture
    def cache(self):
        # Create a fresh cache instance for each test
        cache_instance = MyCache(
            connection_url="redis://localhost:6379/0",
            namespace="test_cache"
        )
        yield cache_instance
        # Cleanup after test
        cache_instance.clear()
```
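
The suite exercises the fixture through the standard `update`/`lookup`/`clear` cache interface. As a rough sketch of what a passing implementation must provide, the following dict-backed `MinimalCache` (a hypothetical stand-in for a real backend like the `MyCache` above) shows the expected behavior:

```python
from typing import Any, Optional


class MinimalCache:
    """Dict-backed sketch of the update/lookup/clear interface the suite tests."""

    def __init__(self) -> None:
        # Key on both the prompt and the LLM configuration string
        self._store: dict[tuple[str, str], list[Any]] = {}

    def lookup(self, prompt: str, llm_string: str) -> Optional[list[Any]]:
        # A miss returns None, matching what test_cache_miss expects
        return self._store.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, return_val: list[Any]) -> None:
        self._store[(prompt, llm_string)] = return_val

    def clear(self) -> None:
        self._store.clear()
```

A real backend would persist `_store` externally, but the hit/miss/clear semantics the suite verifies are the same.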

### Asynchronous Cache Testing

A comprehensive test suite for asynchronous cache implementations.

```python { .api }
from langchain_tests.integration_tests import AsyncCacheTestSuite

class AsyncCacheTestSuite(BaseStandardTests):
    """Asynchronous cache testing suite."""

    # Required fixture
    @pytest.fixture
    async def cache(self):
        """Empty async cache instance for testing. Must be implemented by the test class."""

    # Helper methods (same as sync version)
    def get_sample_prompt(self) -> str:
        """Generate a sample prompt for cache testing."""

    def get_sample_llm_string(self) -> str:
        """Generate a sample LLM configuration string."""

    def get_sample_generation(self):
        """Generate a sample LLM generation object."""

    # Async cache state tests
    async def test_cache_is_empty(self) -> None:
        """Verify that the async cache starts empty."""

    async def test_cache_still_empty(self) -> None:
        """Verify that the async cache is properly cleaned up after tests."""

    # Async cache operations
    async def test_update_cache(self) -> None:
        """Test adding entries to the async cache."""

    async def test_clear_cache(self) -> None:
        """Test clearing all async cache entries."""

    # Async cache hit/miss behavior
    async def test_cache_miss(self) -> None:
        """Test async cache miss behavior when entries don't exist."""

    async def test_cache_hit(self) -> None:
        """Test async cache hit behavior when entries exist."""

    # Async multi-generation caching
    async def test_update_cache_with_multiple_generations(self) -> None:
        """Test async caching of multiple generations for the same prompt."""
```

#### Usage Example

```python
import pytest
from langchain_tests.integration_tests import AsyncCacheTestSuite
from my_integration import MyAsyncCache

class TestMyAsyncCache(AsyncCacheTestSuite):
    @pytest.fixture
    async def cache(self):
        # Create a fresh async cache instance for each test
        cache_instance = await MyAsyncCache.create(
            connection_url="redis://localhost:6379/0",
            namespace="test_async_cache"
        )
        yield cache_instance
        # Cleanup after test
        await cache_instance.clear()
        await cache_instance.close()
```

## Test Data Generation

The cache testing framework provides helper methods for generating consistent test data:

### Sample Prompt Generator

```python { .api }
def get_sample_prompt(self) -> str:
    """
    Generate a sample prompt for cache testing.

    Returns:
        str: A consistent prompt string for testing cache operations
    """
```

### Sample LLM String Generator

```python { .api }
def get_sample_llm_string(self) -> str:
    """
    Generate a sample LLM configuration string.

    Returns:
        str: A serialized LLM configuration string that represents
            the model settings for cache key generation
    """
```

### Sample Generation Object

```python { .api }
def get_sample_generation(self):
    """
    Generate a sample LLM generation object.

    Returns:
        Generation: A sample generation object containing text output
            and metadata that would be cached
    """
```

## Cache Key Generation

Cache implementations must handle proper key generation based on:

- **Prompt Content**: The input text or messages
- **LLM Configuration**: Model settings, temperature, max tokens, etc.
- **Additional Parameters**: Stop sequences, presence penalty, etc.

### Key Generation Pattern

```python
import hashlib

def _generate_cache_key(self, prompt: str, llm_string: str) -> str:
    """Generate a unique cache key from the prompt and LLM configuration."""
    combined = f"{prompt}:{llm_string}"
    return hashlib.sha256(combined.encode()).hexdigest()
```
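
Because the key covers both inputs, it is stable for identical inputs and changes as soon as the configuration changes. A standalone version of the pattern (written as a module-level function purely for demonstration) makes this concrete:

```python
import hashlib


def generate_cache_key(prompt: str, llm_string: str) -> str:
    """Derive a stable SHA-256 cache key from the prompt and LLM configuration."""
    combined = f"{prompt}:{llm_string}"
    return hashlib.sha256(combined.encode()).hexdigest()


# Same prompt, different temperature -> different key, so a changed
# configuration can never return a stale cached response.
k_cold = generate_cache_key("Tell me a joke", 'model="gpt-4", temperature=0.0')
k_warm = generate_cache_key("Tell me a joke", 'model="gpt-4", temperature=0.7')
```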

## Multi-Generation Support

LangChain caches can store multiple generations for the same prompt, which is essential for:

- **Temperature > 0**: Different outputs for the same prompt
- **n > 1**: Multiple completions requested in a single call
- **Batch Processing**: Storing multiple variants

### Multi-Generation Test Pattern

```python
from langchain_core.outputs import Generation

def test_update_cache_with_multiple_generations(self):
    """Test that the cache can store multiple generations for the same prompt."""
    prompt = self.get_sample_prompt()
    llm_string = self.get_sample_llm_string()

    # Create multiple distinct generations for the same prompt
    generations = [
        Generation(text="First response"),
        Generation(text="Second response"),
        Generation(text="Third response"),
    ]

    # Cache all generations under one key
    self.cache.update(prompt, llm_string, generations)

    # Verify all generations are retrievable
    cached_generations = self.cache.lookup(prompt, llm_string)
    assert cached_generations is not None
    assert len(cached_generations) == 3
```

## Cache Invalidation

Cache tests verify proper invalidation behavior:

```python { .api }
def test_cache_invalidation_on_llm_change(self) -> None:
    """Test that the cache misses when the LLM configuration changes."""

def test_cache_invalidation_on_prompt_change(self) -> None:
    """Test that the cache misses when the prompt changes."""
```
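
Both invalidation tests follow from keying on the `(prompt, llm_string)` pair: change either component and the lookup misses. A self-contained sketch, with a plain dict standing in for a real backend:

```python
# Plain dict standing in for a cache backend, keyed on (prompt, llm_string)
store: dict[tuple[str, str], list[str]] = {}


def update(prompt: str, llm_string: str, generations: list[str]) -> None:
    store[(prompt, llm_string)] = generations


def lookup(prompt: str, llm_string: str):
    # Returns None (a miss) for any key not previously written
    return store.get((prompt, llm_string))


update("Tell me a joke", "model=a,temperature=0", ["a joke"])
```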

## Error Handling

Cache implementations should handle various error conditions gracefully:

```python { .api }
def test_cache_connection_error(self) -> None:
    """Test behavior when the cache backend is unavailable."""

def test_cache_serialization_error(self) -> None:
    """Test handling of objects that cannot be serialized."""

def test_cache_memory_pressure(self) -> None:
    """Test behavior under memory pressure conditions."""
```
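
One common policy, assumed here rather than mandated by the suite, is to treat a failing backend as a cache miss instead of propagating the error into the LLM call path. A sketch with a hypothetical always-failing backend:

```python
class FlakyBackend:
    """Stand-in backend that always fails, simulating an unreachable cache server."""

    def get(self, key: str):
        raise ConnectionError("backend unavailable")

    def set(self, key: str, value) -> None:
        raise ConnectionError("backend unavailable")


class ResilientCache:
    """Sketch: degrade to a cache miss when the backend errors out."""

    def __init__(self, backend) -> None:
        self._backend = backend

    def lookup(self, prompt: str, llm_string: str):
        try:
            return self._backend.get(f"{prompt}:{llm_string}")
        except ConnectionError:
            return None  # report a miss so the caller falls back to the LLM

    def update(self, prompt: str, llm_string: str, generations) -> None:
        try:
            self._backend.set(f"{prompt}:{llm_string}", generations)
        except ConnectionError:
            pass  # best-effort write; losing one cache entry is acceptable
```

Under this policy a broken cache costs latency, not correctness.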

## Performance Testing

Cache tests include performance benchmarks:

```python { .api }
def test_cache_write_performance(self) -> None:
    """Benchmark cache write operations."""

def test_cache_read_performance(self) -> None:
    """Benchmark cache read operations."""

def test_cache_bulk_operations(self) -> None:
    """Test performance with bulk cache operations."""
```
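
Such benchmarks can be sketched with `time.perf_counter` over a dict-backed cache; acceptable thresholds are backend-specific, so this sketch only measures and does not assert a limit:

```python
import time


def benchmark_cache(n: int = 10_000) -> tuple[float, float]:
    """Time n writes and n reads against a dict-backed cache; returns (write_s, read_s)."""
    store: dict[str, list[str]] = {}

    start = time.perf_counter()
    for i in range(n):
        store[f"prompt-{i}:llm"] = [f"generation-{i}"]
    write_s = time.perf_counter() - start

    start = time.perf_counter()
    for i in range(n):
        _ = store[f"prompt-{i}:llm"]
    read_s = time.perf_counter() - start

    return write_s, read_s
```

A real benchmark would compare these timings against a budget for the target backend.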

## Cache Configuration

Tests verify that cache implementations respect configuration parameters:

### TTL (Time To Live) Testing

```python
def test_cache_ttl_expiration(self):
    """Test that cache entries expire after the TTL."""

def test_cache_ttl_refresh(self):
    """Test that the cache TTL is refreshed on access."""
```
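
TTL expiry is easiest to test deterministically with an injected clock instead of `time.sleep`. A sketch with a hypothetical `TTLCache` (the `ttl` and `clock` parameters are assumptions, not part of the suite's API):

```python
import time


class TTLCache:
    """Sketch of a TTL cache with an injectable clock for deterministic tests."""

    def __init__(self, ttl: float, clock=None) -> None:
        self._ttl = ttl
        self._clock = clock or time.monotonic
        # key -> (written_at, value)
        self._store: dict[str, tuple[float, list[str]]] = {}

    def update(self, key: str, value: list[str]) -> None:
        self._store[key] = (self._clock(), value)

    def lookup(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        written_at, value = entry
        if self._clock() - written_at > self._ttl:
            del self._store[key]  # lazily drop the expired entry
            return None
        return value
```

In a test, the clock is a controllable callable, so "waiting" past the TTL is a single assignment rather than a real delay.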

### Size Limits

```python
def test_cache_size_limit(self):
    """Test that the cache respects maximum size limits."""

def test_cache_eviction_policy(self):
    """Test cache eviction when the size limit is reached."""
```
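
An eviction policy such tests might target can be sketched with `collections.OrderedDict`. Here, least-recently-used (LRU) eviction is assumed; the suite itself does not prescribe a policy:

```python
from collections import OrderedDict


class LRUCache:
    """Sketch of a size-limited cache with least-recently-used eviction."""

    def __init__(self, max_size: int) -> None:
        self._max_size = max_size
        self._store: "OrderedDict[str, list[str]]" = OrderedDict()

    def lookup(self, key: str):
        if key not in self._store:
            return None
        self._store.move_to_end(key)  # mark as recently used
        return self._store[key]

    def update(self, key: str, value: list[str]) -> None:
        self._store[key] = value
        self._store.move_to_end(key)
        if len(self._store) > self._max_size:
            self._store.popitem(last=False)  # evict the least recently used entry
```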

## Thread Safety

For implementations that support concurrent access:

```python { .api }
def test_cache_thread_safety(self) -> None:
    """Test cache behavior under concurrent access."""

def test_cache_atomic_operations(self) -> None:
    """Test that cache operations are atomic."""
```
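
A concurrency test typically hammers the cache from several threads and checks that no updates are lost. A sketch, assuming a lock-per-cache design (one of several workable approaches):

```python
import threading


class LockingCache:
    """Sketch: serialize mutations with a lock so concurrent updates don't race."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._store: dict[str, list[str]] = {}

    def update(self, key: str, value: list[str]) -> None:
        # read-modify-write under the lock, making the append atomic
        with self._lock:
            self._store.setdefault(key, []).extend(value)

    def lookup(self, key: str):
        with self._lock:
            value = self._store.get(key)
            return list(value) if value else None


def hammer(cache: LockingCache, n: int) -> None:
    """Worker: perform n updates against a shared key."""
    for i in range(n):
        cache.update("shared", [f"gen-{i}"])
```

Without the lock, the read-modify-write in `update` could interleave between threads and drop entries.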

## Persistence Testing

For persistent cache implementations:

```python { .api }
def test_cache_persistence(self) -> None:
    """Test that cache data survives a restart."""

def test_cache_recovery(self) -> None:
    """Test cache recovery from corruption."""
```
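
A persistence test can simulate a restart by constructing a fresh cache instance over the same storage and asserting the data is still there. A minimal sketch with a hypothetical JSON-file-backed cache:

```python
import json
import os
import tempfile


class FileCache:
    """Sketch: JSON-file-backed cache whose contents survive a restart (re-open)."""

    def __init__(self, path: str) -> None:
        self._path = path
        if os.path.exists(path):
            with open(path) as f:
                self._store = json.load(f)  # recover previously persisted state
        else:
            self._store = {}

    def update(self, key: str, value: list[str]) -> None:
        self._store[key] = value
        with open(self._path, "w") as f:
            json.dump(self._store, f)  # write-through for durability

    def lookup(self, key: str):
        return self._store.get(key)
```

A corruption-recovery test would additionally truncate or garble the file and assert the cache degrades to empty rather than crashing.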

## Cleanup and Isolation

Proper test isolation is critical for cache testing:

```python
import uuid

import pytest

@pytest.fixture
def cache(self):
    """Cache fixture with proper isolation."""
    # Use a unique namespace per test so runs cannot interfere
    namespace = f"test_{uuid.uuid4()}"
    cache_instance = MyCache(namespace=namespace)
    yield cache_instance
    # Ensure complete cleanup
    cache_instance.clear()
    cache_instance.close()
```

## Integration with LangChain

Cache tests verify integration with LangChain's caching system:

```python { .api }
def test_langchain_cache_integration(self) -> None:
    """Test integration with LangChain's global cache."""

def test_per_request_caching(self) -> None:
    """Test per-request cache behavior."""
```

The cache testing framework ensures that cache implementations provide reliable, performant caching for LLM responses while handling edge cases, errors, and concurrent access patterns appropriately.