
# Configuration and Utilities

VCR (Video Cassette Recorder) integration for recording and replaying HTTP calls, pytest fixtures for test isolation, Pydantic compatibility utilities, and custom serialization components together provide the testing infrastructure for the langchain-tests framework.

## Capabilities

### VCR Configuration

VCR integration enables recording and replaying HTTP interactions for consistent, offline testing of API-dependent components.

```python { .api }
from langchain_tests.conftest import CustomSerializer, CustomPersister

class CustomSerializer:
    """Custom VCR cassette serializer using YAML and gzip compression."""

    def serialize(self, cassette_dict: dict) -> bytes:
        """
        Convert cassette dictionary to compressed YAML format.

        Args:
            cassette_dict: VCR cassette data structure

        Returns:
            bytes: Compressed YAML representation
        """

    def deserialize(self, data: bytes) -> dict:
        """
        Decompress and convert YAML data back to cassette dictionary.

        Args:
            data: Compressed YAML bytes

        Returns:
            dict: Reconstructed cassette data structure
        """

class CustomPersister:
    """Custom VCR persister using CustomSerializer for efficient storage."""

    def load_cassette(self, cassette_path: str, serializer) -> dict:
        """
        Load cassette from file using the custom serializer.

        Args:
            cassette_path: Path to cassette file
            serializer: Serializer instance for data conversion

        Returns:
            dict: Loaded cassette data
        """

    def save_cassette(self, cassette_path: str, cassette_dict: dict, serializer) -> None:
        """
        Save cassette to file using the custom serializer.

        Args:
            cassette_path: Path where cassette should be saved
            cassette_dict: Cassette data to save
            serializer: Serializer instance for data conversion
        """
```

#### Usage Example

```python
import vcr
from langchain_tests.conftest import CustomSerializer, CustomPersister

# Configure VCR with custom serialization
my_vcr = vcr.VCR(
    serializer='custom',
    cassette_library_dir='tests/cassettes'
)

# Register the custom serializer and persister
my_vcr.register_serializer('custom', CustomSerializer())
my_vcr.register_persister(CustomPersister())

# Use in tests
@my_vcr.use_cassette('my_api_test.yaml')
def test_api_call():
    # API calls will be recorded/replayed
    response = my_api_client.get_data()
    assert response.status_code == 200
```
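For intuition, the serializer's compression round-trip can be sketched with stdlib pieces alone. This is an illustrative stand-in, not the langchain-tests implementation: `json` replaces YAML to avoid dependencies, and `SketchSerializer` is a hypothetical name.

```python
import gzip
import json

class SketchSerializer:
    """Toy stand-in for CustomSerializer: compress on write, decompress on read."""

    def serialize(self, cassette_dict: dict) -> bytes:
        # The real serializer emits YAML; json keeps this demo stdlib-only.
        return gzip.compress(json.dumps(cassette_dict).encode("utf-8"))

    def deserialize(self, data: bytes) -> dict:
        return json.loads(gzip.decompress(data).decode("utf-8"))

serializer = SketchSerializer()
cassette = {"interactions": [{"request": {"uri": "https://example.com/data"}}]}
blob = serializer.serialize(cassette)
assert serializer.deserialize(blob) == cassette  # lossless round-trip
assert blob[:2] == b"\x1f\x8b"                   # gzip magic bytes
```

Compressing cassettes trades a little CPU for much smaller fixture files, which matters when many recorded HTTP interactions are checked into a repository.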

### Pytest Fixtures

Global pytest fixtures for VCR configuration and test setup.

```python { .api }
import pytest

@pytest.fixture
def _base_vcr_config() -> dict:
    """
    Base VCR configuration with default settings.

    Returns:
        dict: Base VCR configuration parameters
    """

@pytest.fixture
def vcr_config(_base_vcr_config: dict) -> dict:
    """
    VCR configuration fixture that can be customized by test classes.

    Args:
        _base_vcr_config: Base configuration from the _base_vcr_config fixture

    Returns:
        dict: Complete VCR configuration for test execution
    """
```

#### VCR Configuration Options

```python
# Default VCR configuration
_BASE_VCR_CONFIG = {
    'serializer': 'custom',
    'persister': CustomPersister(),
    'decode_compressed_response': True,
    'record_mode': 'once',
    'match_on': ['method', 'scheme', 'host', 'port', 'path', 'query'],
    'filter_headers': _BASE_FILTER_HEADERS,
    'filter_query_parameters': ['api_key', 'access_token'],
    'filter_post_data_parameters': ['password', 'secret']
}
```

### Header Filtering

Configuration for filtering sensitive headers from VCR cassettes.

```python { .api }
_BASE_FILTER_HEADERS = [
    'authorization',
    'x-api-key',
    'x-auth-token',
    'cookie',
    'set-cookie',
    'x-session-id',
    'x-request-id'
]
```

#### Custom Header Filtering

```python
# In your test class
@pytest.fixture
def vcr_config(self, _base_vcr_config):
    config = _base_vcr_config.copy()
    config['filter_headers'] = [
        *_BASE_FILTER_HEADERS,
        'x-custom-auth',
        'x-tenant-id'
    ]
    return config
```

### Pydantic Utilities

Utilities for handling different Pydantic versions and compatibility.

```python { .api }
from langchain_tests.utils.pydantic import get_pydantic_major_version, PYDANTIC_MAJOR_VERSION

def get_pydantic_major_version() -> int:
    """
    Detect the major version of Pydantic installed.

    Returns:
        int: Major version number (1 or 2)
    """

PYDANTIC_MAJOR_VERSION: int
"""Global constant containing the detected Pydantic major version."""
```

#### Pydantic Version Compatibility

```python
from langchain_tests.utils.pydantic import PYDANTIC_MAJOR_VERSION

if PYDANTIC_MAJOR_VERSION == 1:
    # Use the Pydantic v1 API
    from pydantic import BaseModel

    class MyModel(BaseModel):
        name: str

        class Config:
            extra = "forbid"

else:
    # Use the Pydantic v2 API
    from pydantic import BaseModel, ConfigDict

    class MyModel(BaseModel):
        model_config = ConfigDict(extra="forbid")
        name: str
```
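A plausible sketch of how such detection can work (the actual implementation in langchain_tests may differ): import pydantic, read its `VERSION` string, and take the leading component. `detect_pydantic_major_version` is an illustrative name.

```python
import importlib

def detect_pydantic_major_version() -> int:
    """Illustrative version probe; returns 0 when pydantic is absent (assumed behavior)."""
    try:
        pydantic = importlib.import_module("pydantic")
    except ImportError:
        return 0
    # VERSION is "major.minor.patch", e.g. "2.7.1" -> 2
    return int(str(pydantic.VERSION).split(".")[0])

assert int("2.7.1".split(".")[0]) == 2          # parsing works on any version string
assert detect_pydantic_major_version() in (0, 1, 2)
```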

### Test Model Generation

Utilities for generating test Pydantic models for structured output testing.

```python { .api }
from langchain_tests.unit_tests.chat_models import (
    generate_schema_pydantic,
    generate_schema_pydantic_v1_from_2,
    TEST_PYDANTIC_MODELS
)

def generate_schema_pydantic():
    """
    Generate a Pydantic model for testing structured output.

    Returns:
        BaseModel: A test Pydantic model with various field types
    """

def generate_schema_pydantic_v1_from_2():
    """
    Generate a Pydantic V1 model from a V2 model for compatibility testing.

    Returns:
        BaseModel: A Pydantic V1 compatible model
    """

TEST_PYDANTIC_MODELS: List
"""List of pre-defined Pydantic models for comprehensive testing."""
```

#### Test Model Examples

```python
# Generated test models include various field types
class GeneratedTestModel(BaseModel):
    # Basic types
    name: str
    age: int
    score: float
    active: bool

    # Optional fields
    description: Optional[str] = None

    # Collections
    tags: List[str]
    metadata: Dict[str, Any]

    # Nested models
    address: Address

    # Enums
    status: StatusEnum

    # Date/time fields
    created_at: datetime
    updated_at: Optional[datetime] = None
```

### Test Constants

Pre-defined constants used throughout the testing framework.

```python { .api }
EMBEDDING_SIZE = 6
"""Standard embedding dimension for vector store tests."""

_BASE_FILTER_HEADERS = [
    'authorization',
    'x-api-key',
    'x-auth-token',
    'cookie',
    'set-cookie'
]
"""Default headers to filter from VCR cassettes."""
```

### Environment Variable Management

Utilities for managing test environment variables and configuration.

```python { .api }
def get_test_env_var(var_name: str, default: str = None) -> str:
    """
    Get environment variable with test-specific prefix.

    Args:
        var_name: Base variable name
        default: Default value if not found

    Returns:
        str: Environment variable value
    """

def set_test_env_vars(env_vars: Dict[str, str]) -> None:
    """
    Set multiple test environment variables.

    Args:
        env_vars: Dictionary of variable names and values
    """
```

#### Environment Variable Patterns

```python
# Test environment variable naming
TEST_API_KEY = get_test_env_var('API_KEY')
TEST_MODEL_NAME = get_test_env_var('MODEL_NAME', 'test-model')
TEST_BASE_URL = get_test_env_var('BASE_URL', 'https://api.test.com')

# Setting the test environment
set_test_env_vars({
    'LANGCHAIN_TEST_API_KEY': 'test-key-123',
    'LANGCHAIN_TEST_MODEL': 'gpt-3.5-turbo',
    'LANGCHAIN_TEST_TIMEOUT': '30'
})
```
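A hypothetical implementation matching those signatures could be a thin wrapper over `os.environ`; the `LANGCHAIN_TEST_` prefix is inferred from the naming pattern above and is an assumption, not a documented constant.

```python
import os

def get_test_env_var(var_name: str, default: str = None) -> str:
    # Prefix is assumed; adjust to whatever the framework actually uses.
    return os.environ.get(f"LANGCHAIN_TEST_{var_name}", default)

def set_test_env_vars(env_vars: dict) -> None:
    # Bulk-update the process environment for the duration of the test run.
    os.environ.update(env_vars)

set_test_env_vars({"LANGCHAIN_TEST_MODEL_NAME": "test-model"})
assert get_test_env_var("MODEL_NAME") == "test-model"
assert get_test_env_var("MISSING_VAR", "fallback") == "fallback"
```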

### Fixture Utilities

Helper utilities for creating and managing pytest fixtures.

```python { .api }
def create_model_fixture(model_class, params: dict):
    """
    Create a model fixture factory.

    Args:
        model_class: The model class to instantiate
        params: Parameters for model initialization

    Returns:
        callable: Pytest fixture function
    """

def create_temp_directory_fixture(prefix: str = 'langchain_test_'):
    """
    Create a temporary directory fixture.

    Args:
        prefix: Prefix for temporary directory name

    Returns:
        callable: Pytest fixture function that yields temp directory path
    """
```
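One way `create_temp_directory_fixture` could be built (an illustrative sketch, not the actual source): wrap `tempfile.mkdtemp` in a generator, which matches pytest's yield-fixture protocol of setup, suspension during the test, then teardown.

```python
import os
import shutil
import tempfile

def create_temp_directory_fixture(prefix: str = "langchain_test_"):
    def _fixture():
        path = tempfile.mkdtemp(prefix=prefix)
        try:
            yield path  # the test body runs while the generator is suspended
        finally:
            shutil.rmtree(path, ignore_errors=True)  # cleanup after the test
    return _fixture

gen = create_temp_directory_fixture()()
tmp_path = next(gen)              # setup: directory now exists
assert os.path.isdir(tmp_path)
for _ in gen:                     # teardown: exhaust the generator
    pass
assert not os.path.exists(tmp_path)
```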

355

356

### Test Data Factories

357

358

Factories for generating consistent test data across different test suites.

359

360

```python { .api }

361

class TestDataFactory:

362

"""Factory for generating consistent test data."""

363

364

@staticmethod

365

def create_sample_documents(count: int = 3) -> List[Document]:

366

"""Create sample documents for testing."""

367

368

@staticmethod

369

def create_sample_messages(count: int = 2) -> List[BaseMessage]:

370

"""Create sample messages for chat testing."""

371

372

@staticmethod

373

def create_sample_tools(count: int = 2) -> List[BaseTool]:

374

"""Create sample tools for tool testing."""

375

376

@staticmethod

377

def create_sample_embeddings(dimension: int = 6) -> List[List[float]]:

378

"""Create deterministic sample embeddings."""

379

```
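One way an embedding factory can stay deterministic, sketched here as a hypothetical stand-in for `create_sample_embeddings`: derive each vector from a hash of its index, so repeated calls always produce identical output.

```python
import hashlib

def create_sample_embeddings(count: int = 3, dimension: int = 6):
    # Hash the index so the same inputs always yield the same vectors.
    vectors = []
    for i in range(count):
        digest = hashlib.sha256(str(i).encode()).digest()
        vectors.append([byte / 255.0 for byte in digest[:dimension]])
    return vectors

first = create_sample_embeddings(count=2, dimension=6)
assert len(first) == 2 and all(len(v) == 6 for v in first)
assert first == create_sample_embeddings(count=2, dimension=6)  # reproducible
```

Deterministic embeddings let vector store tests assert exact nearest-neighbor results instead of fuzzy thresholds.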

### Test Data Generation

Utilities for generating consistent test data and managing test state.

```python { .api }
def get_test_documents(count: int = 3) -> List[Document]:
    """
    Generate standard test documents for testing.

    Args:
        count: Number of test documents to generate

    Returns:
        List[Document]: Generated test documents with metadata
    """

def get_test_messages(count: int = 2) -> List[BaseMessage]:
    """
    Generate standard test messages for chat testing.

    Args:
        count: Number of test messages to generate

    Returns:
        List[BaseMessage]: Generated test messages
    """
```
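A self-contained sketch of what `get_test_documents` might produce; a dataclass stands in for langchain_core's `Document` here, and the content/metadata values are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """Minimal stand-in for langchain_core's Document."""
    page_content: str
    metadata: dict = field(default_factory=dict)

def get_test_documents(count: int = 3):
    # Each document carries metadata so filtering/retrieval can be asserted on.
    return [
        Document(
            page_content=f"Test document {i}",
            metadata={"id": i, "source": "test"},
        )
        for i in range(count)
    ]

docs = get_test_documents()
assert len(docs) == 3
assert docs[1].metadata["id"] == 1
```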

### Async Test Utilities

Utilities for async test execution and management.

```python { .api }
import asyncio

async def run_async_test_with_timeout(coro, timeout: float = 30.0):
    """
    Run async test with timeout.

    Args:
        coro: Coroutine to execute
        timeout: Timeout in seconds

    Returns:
        Any: Result of coroutine execution
    """

def async_test_fixture(async_func):
    """
    Decorator to convert async function to sync fixture.

    Args:
        async_func: Async function to wrap

    Returns:
        callable: Sync fixture function
    """
```
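`run_async_test_with_timeout` is most naturally a thin wrapper over `asyncio.wait_for`, which cancels the coroutine and raises `TimeoutError` when the deadline passes. A minimal sketch under that assumption:

```python
import asyncio

async def run_async_test_with_timeout(coro, timeout: float = 30.0):
    # wait_for cancels coro and raises TimeoutError past the deadline.
    return await asyncio.wait_for(coro, timeout=timeout)

async def _sample():
    await asyncio.sleep(0.01)
    return "ok"

result = asyncio.run(run_async_test_with_timeout(_sample(), timeout=1.0))
assert result == "ok"
```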

### Performance Monitoring

Utilities for monitoring test performance and resource usage.

```python { .api }
class PerformanceMonitor:
    """Monitor performance metrics during test execution."""

    def __init__(self):
        self.metrics = {}

    def start_timing(self, operation: str) -> None:
        """Start timing an operation."""

    def end_timing(self, operation: str) -> float:
        """End timing and return duration."""

    def record_memory_usage(self, label: str) -> None:
        """Record current memory usage."""

    def get_metrics(self) -> Dict[str, Any]:
        """Get all recorded metrics."""
```
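The timing half of such a monitor can be sketched with `time.perf_counter` (memory tracking omitted; `SketchMonitor` is an illustrative stand-in, not the actual class):

```python
import time

class SketchMonitor:
    def __init__(self):
        self.metrics = {}
        self._starts = {}

    def start_timing(self, operation: str) -> None:
        # perf_counter is monotonic, so it is safe for measuring intervals.
        self._starts[operation] = time.perf_counter()

    def end_timing(self, operation: str) -> float:
        duration = time.perf_counter() - self._starts.pop(operation)
        self.metrics[operation] = duration
        return duration

monitor = SketchMonitor()
monitor.start_timing("load")
time.sleep(0.01)
elapsed = monitor.end_timing("load")
assert elapsed >= 0.01
assert "load" in monitor.metrics
```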

Together, the configuration and utilities module provides the infrastructure for reliable, consistent testing across LangChain integration test suites: HTTP recording and replay, environment management, test data generation, and performance monitoring.