# Settings and Configuration

The `Settings` object is the global configuration system for managing LLMs, embedding models, and processing parameters across the LlamaIndex.TS framework.

## Import
```typescript
import { Settings } from "llamaindex";
```
## Overview

The `Settings` object provides centralized configuration management for all LlamaIndex.TS components. It uses a singleton pattern to keep settings consistent across your application, and its `with*` methods provide temporary context switching for individual operations.

## Settings Object
```typescript { .api }
interface GlobalSettings {
  llm: LLM;
  embedModel: BaseEmbedding;
  nodeParser: NodeParser;
  promptHelper: PromptHelper;
  callbackManager: CallbackManager;
  chunkSize: number | undefined;
  chunkOverlap: number | undefined;
  prompt: PromptConfig;
  debug: boolean;

  withLLM<Result>(llm: LLM, fn: () => Result): Result;
  withEmbedModel<Result>(embedModel: BaseEmbedding, fn: () => Result): Result;
  withNodeParser<Result>(nodeParser: NodeParser, fn: () => Result): Result;
  withPromptHelper<Result>(promptHelper: PromptHelper, fn: () => Result): Result;
  withCallbackManager<Result>(callbackManager: CallbackManager, fn: () => Result): Result;
  withChunkSize<Result>(chunkSize: number, fn: () => Result): Result;
  withChunkOverlap<Result>(chunkOverlap: number, fn: () => Result): Result;
  withPrompt<Result>(prompt: PromptConfig, fn: () => Result): Result;
}
```
## Configuration Types

```typescript { .api }
interface Config {
  prompt: PromptConfig;
  promptHelper: PromptHelper | null;
  embedModel: BaseEmbedding | null;
  nodeParser: NodeParser | null;
  callbackManager: CallbackManager | null;
  chunkSize: number | undefined;
  chunkOverlap: number | undefined;
}

interface PromptConfig {
  llm?: string;
  lang?: string;
}
```
## Basic Usage

### Global Configuration
```typescript
import { Settings, OpenAI, OpenAIEmbedding } from "llamaindex";

// Set the global LLM
Settings.llm = new OpenAI({
  model: "gpt-4",
  temperature: 0.1,
});

// Set the global embedding model
Settings.embedModel = new OpenAIEmbedding({
  model: "text-embedding-ada-002",
});

// Set chunking parameters for text splitting
Settings.chunkSize = 1024;
Settings.chunkOverlap = 20;

// Enable debug mode
Settings.debug = true;
```
### Temporary Context Switching

Use the `with*` methods to temporarily override settings for specific operations:
```typescript
import { Settings, OpenAI, Document, VectorStoreIndex } from "llamaindex";

const documents = [new Document({ text: "Example content" })];

// Use a different LLM for this specific operation
const result = Settings.withLLM(
  new OpenAI({ model: "gpt-3.5-turbo" }),
  () => {
    return VectorStoreIndex.fromDocuments(documents);
  },
);

// Settings.llm reverts to its original value after the function completes
```
### Multiple Settings Override
```typescript
// Override multiple settings temporarily by nesting the with* calls;
// each callback must return its result for it to propagate outward
const index = Settings.withChunkSize(512, () => {
  return Settings.withChunkOverlap(50, () => {
    // Process with different chunking parameters
    return VectorStoreIndex.fromDocuments(documents);
  });
});
```
## Properties

### Core Components
- **`llm`**: The language model used for text generation and completion
- **`embedModel`**: The embedding model used for generating vector representations
- **`nodeParser`**: The text splitter used for chunking documents
- **`promptHelper`**: Helper for managing prompt templates and formatting
- **`callbackManager`**: Manager for handling events and callbacks
### Processing Parameters
- **`chunkSize`**: Default size for text chunks (`undefined` uses the model default)
- **`chunkOverlap`**: Overlap between consecutive text chunks, for better continuity across chunk boundaries
- **`prompt`**: Prompt configuration, including language and LLM preferences
- **`debug`**: Enables debug logging and verbose output
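To see how `chunkSize` and `chunkOverlap` interact, here is a minimal character-based sketch of overlapping splitting. This is a simplified illustration, not the actual `NodeParser` implementation, which splits on sentence and token boundaries:

```typescript
// Simplified character-based chunking: each chunk starts
// (chunkSize - chunkOverlap) characters after the previous one,
// so consecutive chunks share chunkOverlap characters.
function chunkText(text: string, chunkSize: number, chunkOverlap: number): string[] {
  const step = chunkSize - chunkOverlap;
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}

const chunks = chunkText("abcdefghij", 4, 2);
// chunks: ["abcd", "cdef", "efgh", "ghij"]
```

Larger overlap keeps more shared context between adjacent chunks at the cost of producing more chunks overall.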
## Context Methods

All `with*` methods follow the same pattern: they override a setting for the duration of the provided function, then restore the previous value.
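The underlying pattern can be sketched as a save/override/restore wrapper. This is a simplified illustration using a plain object as a stand-in for the singleton; the real implementation may use async-context tracking so that asynchronous callbacks also see the override:

```typescript
// Minimal stand-in for a mutable settings singleton.
const settings = { chunkSize: 1024 };

// Generic with*-style helper: override a field, run fn, then
// restore the previous value even if fn throws.
function withChunkSize<Result>(chunkSize: number, fn: () => Result): Result {
  const previous = settings.chunkSize;
  settings.chunkSize = chunkSize;
  try {
    return fn();
  } finally {
    settings.chunkSize = previous;
  }
}

const inner = withChunkSize(512, () => settings.chunkSize); // 512 inside fn
// settings.chunkSize is back to 1024 here
```

The `try`/`finally` is what makes the override safe: the previous value is restored even when the wrapped function throws.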
### withLLM(llm, fn)

Temporarily use a different language model.

**Parameters:**

- `llm: LLM` - The LLM to use temporarily
- `fn: () => Result` - Function to execute with the temporary LLM

**Returns:** The result of executing `fn`
### withEmbedModel(embedModel, fn)

Temporarily use a different embedding model.

**Parameters:**

- `embedModel: BaseEmbedding` - The embedding model to use temporarily
- `fn: () => Result` - Function to execute with the temporary embedding model

**Returns:** The result of executing `fn`
### withChunkSize(chunkSize, fn)

Temporarily use a different chunk size.

**Parameters:**

- `chunkSize: number` - The chunk size to use temporarily
- `fn: () => Result` - Function to execute with the temporary chunk size

**Returns:** The result of executing `fn`
## Best Practices

### Initialize Early

Set up your global settings at the start of your application:
```typescript
// At app initialization
Settings.llm = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
Settings.embedModel = new OpenAIEmbedding();
Settings.chunkSize = 1024;
Settings.chunkOverlap = 20;
```
### Use Context Methods for Variations

Use temporary context switching when you need different settings for specific operations:
```typescript
// Use a larger chunk size when processing large documents
const processLargeDoc = (doc: Document) => {
  return Settings.withChunkSize(2048, () => {
    return VectorStoreIndex.fromDocuments([doc]);
  });
};

// Use a cheaper LLM for specific queries
const getCheapAnswer = (query: string, index: VectorStoreIndex) => {
  return Settings.withLLM(new OpenAI({ model: "gpt-3.5-turbo" }), () => {
    const queryEngine = index.asQueryEngine();
    return queryEngine.query(query);
  });
};
```
### Environment-Specific Configuration

Configure settings based on your environment:
```typescript
if (process.env.NODE_ENV === "development") {
  Settings.debug = true;
  Settings.llm = new OpenAI({ model: "gpt-3.5-turbo" }); // Cheaper for development
} else {
  Settings.debug = false;
  Settings.llm = new OpenAI({ model: "gpt-4" }); // Higher quality for production
}
```