
tessl/npm-llamaindex

Data framework for your LLM application

Workspace: tessl
Visibility: Public
Describes: npmpkg:npm/llamaindex@0.11.x

To install, run

npx @tessl/cli install tessl/npm-llamaindex@0.11.0

# LlamaIndex.TS

LlamaIndex.TS is a comprehensive TypeScript/JavaScript data framework that enables developers to integrate large language models (LLMs) with their own data. It provides a lightweight, easy-to-use set of tools for building LLM applications that can process and query custom data sources, supporting multiple JavaScript runtimes including Node.js, Deno, Bun, and edge environments.

## Package Information

- **Package Name**: llamaindex
- **Package Type**: npm
- **Language**: TypeScript
- **Installation**: `npm install llamaindex`
- **Minimum Node.js**: 18.0.0
- **Runtime Support**: Node.js, Deno, Bun, Vercel Edge Runtime, Cloudflare Workers

## Core Imports

```typescript
import { Settings, VectorStoreIndex, Document } from "llamaindex";
```

For submodule imports:
```typescript
import { ReActAgent } from "llamaindex/agent";
import { RetrieverQueryEngine } from "llamaindex/engines";
import { SimpleVectorStore } from "llamaindex/vector-store";
```

For CommonJS:
```javascript
const { Settings, VectorStoreIndex, Document } = require("llamaindex");
```

## Basic Usage

```typescript
import { Settings, VectorStoreIndex, Document } from "llamaindex";

// Configure global settings
// (the strings below are placeholders — assign real LLM and embedding instances in practice)
Settings.llm = "your-llm-instance";
Settings.embedModel = "your-embedding-model";

// Create documents and build index
const documents = [
  new Document({ text: "This is a document about AI.", id_: "doc1" }),
  new Document({ text: "LlamaIndex helps build LLM apps.", id_: "doc2" }),
];

// Build vector index
const index = await VectorStoreIndex.fromDocuments(documents);

// Query the index
const queryEngine = index.asQueryEngine();
const response = await queryEngine.query("Tell me about AI");
console.log(response.toString());
```

## Architecture

LlamaIndex.TS is built around several key architectural components:

- **Settings System**: Global configuration for LLMs, embeddings, and processing parameters
- **Document Processing**: Node parsers and text splitters for data ingestion
- **Indexing System**: Vector stores, keyword indices, and summary indices for data organization
- **Query Engines**: Retrieval and synthesis systems for answering questions
- **Agent Framework**: ReAct agents for complex reasoning and tool usage
- **Memory System**: Chat memory, vector memory, and context management
- **Storage Backend**: Document stores, index stores, and vector stores for persistence

## Capabilities

### Settings and Configuration

Global configuration system for managing LLMs, embedding models, and processing parameters across the framework.

```typescript { .api }
interface GlobalSettings extends Config {
  llm: LLM;
  embedModel: BaseEmbedding;
  nodeParser: NodeParser;
  promptHelper: PromptHelper;
  callbackManager: CallbackManager;
  chunkSize: number | undefined;
  chunkOverlap: number | undefined;
  prompt: PromptConfig;
  debug: boolean;

  withLLM<Result>(llm: LLM, fn: () => Result): Result;
  withEmbedModel<Result>(embedModel: BaseEmbedding, fn: () => Result): Result;
  withNodeParser<Result>(nodeParser: NodeParser, fn: () => Result): Result;
  withPromptHelper<Result>(promptHelper: PromptHelper, fn: () => Result): Result;
  withCallbackManager<Result>(callbackManager: CallbackManager, fn: () => Result): Result;
  withChunkSize<Result>(chunkSize: number, fn: () => Result): Result;
  withChunkOverlap<Result>(chunkOverlap: number, fn: () => Result): Result;
  withPrompt<Result>(prompt: PromptConfig, fn: () => Result): Result;
}

const Settings: GlobalSettings;
```

[Settings and Configuration](./settings.md)
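
A minimal usage sketch based on the interface above; the `with*` helpers are taken to scope a setting to the wrapped callback:

```typescript
import { Settings } from "llamaindex";

// Global defaults picked up by later index and query construction
Settings.chunkSize = 512;
Settings.chunkOverlap = 50;

// Scoped override using the with* helpers declared above:
// the value applies only within the wrapped callback
const sizeInside = Settings.withChunkSize(256, () => Settings.chunkSize);
console.log(sizeInside); // 256
```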

### Document Processing and Node Parsing

Text processing and document chunking functionality for preparing data for indexing and retrieval.

```typescript { .api }
class SentenceSplitter implements NodeParser {
  constructor(options?: { chunkSize?: number; chunkOverlap?: number });
  getNodesFromDocuments(documents: Document[]): TextNode[];
}

class Document {
  constructor(init: { text: string; id_?: string; metadata?: Record<string, any> });
  text: string;
  id_: string;
  metadata: Record<string, any>;
}
```

[Document Processing](./document-processing.md)
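
A short sketch of chunking documents into nodes with the splitter declared above:

```typescript
import { Document, SentenceSplitter } from "llamaindex";

const splitter = new SentenceSplitter({ chunkSize: 512, chunkOverlap: 20 });

// Each resulting TextNode carries a chunk of the original document text
const nodes = splitter.getNodesFromDocuments([
  new Document({ text: "A long article about retrieval-augmented generation...", id_: "doc1" }),
]);
console.log(nodes.length);
```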

### Vector Indexing and Storage

Core indexing functionality for creating searchable representations of documents using vector embeddings.

```typescript { .api }
class VectorStoreIndex {
  static fromDocuments(documents: Document[]): Promise<VectorStoreIndex>;
  asQueryEngine(): QueryEngine;
  asRetriever(): BaseRetriever;
  insert(document: Document): Promise<void>;
}

interface BaseVectorStore {
  add(nodes: BaseNode[]): Promise<string[]>;
  query(query: VectorStoreQuery): Promise<VectorStoreQueryResult>;
}
```

[Vector Indexing](./vector-indexing.md)
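
A brief sketch using the class above: build an index, insert an additional document, and obtain a retriever for lower-level access:

```typescript
import { Document, VectorStoreIndex } from "llamaindex";

const index = await VectorStoreIndex.fromDocuments([
  new Document({ text: "Vector stores hold embedded representations of text.", id_: "a" }),
]);

// Incrementally add another document to the existing index
await index.insert(new Document({ text: "Retrievers fetch the most similar nodes.", id_: "b" }));

// Use a retriever directly when you want scored nodes rather than a synthesized answer
const retriever = index.asRetriever();
```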

### Query Engines

Query processing and response synthesis for retrieving and generating answers from indexed data.

```typescript { .api }
interface BaseQueryEngine {
  query(query: string): Promise<EngineResponse>;
}

class RetrieverQueryEngine implements BaseQueryEngine {
  constructor(retriever: BaseRetriever, responseSynthesizer?: ResponseSynthesizer);
  query(query: string): Promise<EngineResponse>;
}
```

[Query Engines](./query-engines.md)
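
A sketch wiring a retriever into a query engine directly, per the constructor above (index construction as in Basic Usage):

```typescript
import { Document, VectorStoreIndex } from "llamaindex";
import { RetrieverQueryEngine } from "llamaindex/engines";

const index = await VectorStoreIndex.fromDocuments([
  new Document({ text: "Query engines combine retrieval with response synthesis." }),
]);

// Construct the engine directly instead of calling index.asQueryEngine()
const engine = new RetrieverQueryEngine(index.asRetriever());
const response = await engine.query("How are answers generated?");
console.log(response.toString());
```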

### Chat Engines

Conversational interfaces that maintain context and enable back-and-forth interactions with your data.

```typescript { .api }
interface BaseChatEngine {
  chat(message: string): Promise<EngineResponse>;
  reset(): void;
}

class ContextChatEngine implements BaseChatEngine {
  constructor(options: { retriever: BaseRetriever; memory?: BaseMemory });
  chat(message: string): Promise<EngineResponse>;
}
```

[Chat Engines](./chat-engines.md)
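
A sketch of a multi-turn conversation against an index, following the constructor and methods above:

```typescript
import { ContextChatEngine, Document, VectorStoreIndex } from "llamaindex";

const index = await VectorStoreIndex.fromDocuments([
  new Document({ text: "Chat engines keep conversational context between turns." }),
]);

const chatEngine = new ContextChatEngine({ retriever: index.asRetriever() });

const first = await chatEngine.chat("What do chat engines do?");
const followUp = await chatEngine.chat("How is that different from a query engine?");

// Clear accumulated history before starting a new conversation
chatEngine.reset();
```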

### Agent Framework

ReAct agents and task execution system for complex reasoning and multi-step operations with tool usage.

```typescript { .api }
class ReActAgent {
  constructor(params: ReACTAgentParams);
  chat(message: string): Promise<AgentChatResponse>;
  createTask(input: string): Task;
}

interface AgentParamsBase {
  tools: BaseTool[];
  llm: LLM;
  memory?: BaseMemory;
}
```

[Agent Framework](./agents.md)
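
A sketch of an agent answering through a query-engine tool; the tool wrapper follows the `QueryEngineTool` declared under Tools and Utilities below, and `llm` is a declared placeholder for a provider-backed instance:

```typescript
import { Document, QueryEngineTool, VectorStoreIndex } from "llamaindex";
import { ReActAgent } from "llamaindex/agent";
import type { LLM } from "llamaindex";

declare const llm: LLM; // placeholder: supply a concrete LLM from a provider integration

const index = await VectorStoreIndex.fromDocuments([
  new Document({ text: "The ReAct loop interleaves reasoning steps with tool calls." }),
]);

// Wrap the query engine so the agent can invoke it as a tool
const searchTool = new QueryEngineTool(index.asQueryEngine(), {
  name: "docs_search",
  description: "Answers questions about the indexed documents",
});

const agent = new ReActAgent({ tools: [searchTool], llm });
const answer = await agent.chat("What does the ReAct loop interleave?");
```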

### LLM Integration

Comprehensive integration with large language models including OpenAI, Anthropic, and other providers.

```typescript { .api }
interface LLM {
  chat(messages: ChatMessage[]): Promise<ChatResponse>;
  complete(prompt: string): Promise<CompletionResponse>;
  metadata: LLMMetadata;
}

interface ChatMessage {
  role: MessageType;
  content: MessageContent;
}

type MessageContent = string | MessageContentDetail[];
```

[LLM Integration](./llm-integration.md)
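
A sketch of calling an LLM directly through the interface above; `Settings.llm` is assumed to have been set to a provider instance, and the roles use the conventional "system"/"user" values:

```typescript
import { Settings } from "llamaindex";

// Any configured provider satisfies the LLM interface above
const llm = Settings.llm;

const chatResponse = await llm.chat([
  { role: "system", content: "You are a concise assistant." },
  { role: "user", content: "Summarize what a data framework does." },
]);

const completion = await llm.complete("LlamaIndex.TS is a ");
```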

### Embeddings

Text embedding generation and similarity operations for semantic search and retrieval.

```typescript { .api }
interface BaseEmbedding {
  getTextEmbedding(text: string): Promise<number[]>;
  getQueryEmbedding(query: string): Promise<number[]>;
}

function similarity(embedding1: number[], embedding2: number[]): number;
function getTopKEmbeddings(
  queryEmbedding: number[],
  embeddings: number[][],
  k: number
): number[];
```

[Embeddings](./embeddings.md)
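
A sketch comparing two texts with the functions above; the embedding model is taken from global settings:

```typescript
import { Settings, similarity } from "llamaindex";

const embedModel = Settings.embedModel; // a configured BaseEmbedding instance

const a = await embedModel.getTextEmbedding("vector databases");
const b = await embedModel.getQueryEmbedding("how do vector stores work?");

// Higher scores indicate closer semantic meaning
const score = similarity(a, b);
```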

### Storage System

Persistent storage backends for documents, indices, and vector data across different storage systems.

```typescript { .api }
interface BaseDocumentStore {
  addDocuments(documents: Document[]): Promise<void>;
  getDocument(docId: string): Promise<Document | undefined>;
  getAllDocuments(): Promise<Document[]>;
}

class StorageContext {
  static fromDefaults(options?: {
    docStore?: BaseDocumentStore;
    indexStore?: BaseIndexStore;
    vectorStore?: BaseVectorStore;
  }): StorageContext;
}
```

[Storage System](./storage.md)
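
A sketch based on the `StorageContext` declaration above, pairing it with the `SimpleVectorStore` from the submodule imports; how the resulting context is attached to an index is covered in the linked page rather than here:

```typescript
import { StorageContext } from "llamaindex";
import { SimpleVectorStore } from "llamaindex/vector-store";

// Bundle storage backends together; stores that are not supplied are filled in with defaults
const storageContext = StorageContext.fromDefaults({
  vectorStore: new SimpleVectorStore(),
});
```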

### Tools and Utilities

Tool integration system for extending agent capabilities with external functions and APIs.

```typescript { .api }
interface BaseTool {
  metadata: ToolMetadata;
  call(input: string): Promise<ToolOutput>;
}

class QueryEngineTool implements BaseTool {
  constructor(queryEngine: BaseQueryEngine, metadata: ToolMetadata);
  call(input: string): Promise<ToolOutput>;
}
```

[Tools and Utilities](./tools.md)
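
A sketch of a hand-rolled tool shaped after the `BaseTool` interface above; the exact `ToolMetadata` and `ToolOutput` field names used here are illustrative assumptions:

```typescript
// A custom tool an agent can call; the metadata/output field names are
// illustrative assumptions rather than the library's exact shapes
const echoTool = {
  metadata: {
    name: "echo",
    description: "Returns its input unchanged, useful for testing agent tool calls",
  },
  async call(input: string) {
    return { content: input, isError: false };
  },
};
```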

### Response Synthesis

Response generation and synthesis strategies for combining retrieved information into coherent answers.

```typescript { .api }
interface BaseSynthesizer {
  synthesize(query: string, nodes: BaseNode[]): Promise<EngineResponse>;
}

type ResponseMode = "refine" | "compact" | "tree_summarize" | "simple_summarize";

function createResponseSynthesizer(mode: ResponseMode): BaseSynthesizer;
```

[Response Synthesis](./response-synthesis.md)
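
A sketch pairing the factory above with a retriever query engine; the import location of `createResponseSynthesizer` and its compatibility with the engine's `responseSynthesizer` parameter are assumptions based only on the declarations in this document:

```typescript
import { Document, VectorStoreIndex, createResponseSynthesizer } from "llamaindex";
import { RetrieverQueryEngine } from "llamaindex/engines";

// "tree_summarize" combines partial answers over the retrieved nodes bottom-up
const synthesizer = createResponseSynthesizer("tree_summarize");

const index = await VectorStoreIndex.fromDocuments([
  new Document({ text: "Response synthesis turns retrieved nodes into a final answer." }),
]);

const engine = new RetrieverQueryEngine(index.asRetriever(), synthesizer);
const response = await engine.query("How is a final answer produced?");
```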

### Ingestion Pipelines

Advanced document ingestion system with support for transformation pipelines, duplicate detection, and batch processing.

```typescript { .api }
class IngestionPipeline {
  constructor(options: {
    transformations?: TransformComponent[];
    readers?: Record<string, BaseReader>;
    vectorStore?: BaseVectorStore;
    cache?: IngestionCache;
    docStore?: BaseDocumentStore;
    duplicatesStrategy?: DuplicatesStrategy;
  });

  async run(
    documents?: Document[],
    inputDir?: string,
    cacheCollection?: string
  ): Promise<BaseNode[]>;

  loadDataFromDirectory(inputDir: string, inputFiles?: string[]): Promise<Document[]>;

  // Properties
  transformations: TransformComponent[];
  readers: Record<string, BaseReader>;
  vectorStore?: BaseVectorStore;
  cache?: IngestionCache;
  docStore?: BaseDocumentStore;
  duplicatesStrategy: DuplicatesStrategy;
}

enum DuplicatesStrategy {
  DUPLICATES_ONLY = "duplicates_only",
  UPSERTS = "upserts",
  UPSERTS_AND_DELETE = "upserts_and_delete",
}

interface TransformComponent {
  transform(nodes: BaseNode[], options?: Record<string, any>): Promise<BaseNode[]>;
}
```

[Ingestion Pipelines](./ingestion.md)
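
A sketch of a pipeline run following the constructor and `run` signature above; treating `SentenceSplitter` as a `TransformComponent` is an assumption not stated in this section:

```typescript
import { Document, IngestionPipeline, SentenceSplitter } from "llamaindex";

// Assumption: SentenceSplitter can serve as a transformation step in the pipeline
const pipeline = new IngestionPipeline({
  transformations: [new SentenceSplitter({ chunkSize: 256 })],
});

// run() returns the transformed nodes, ready for indexing or storage
const nodes = await pipeline.run([
  new Document({ text: "Ingestion pipelines chain transformations over documents.", id_: "ing-1" }),
]);
```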

### Evaluation Framework

Comprehensive evaluation system for assessing LLM application performance with built-in metrics and custom evaluators.

```typescript { .api }
interface BaseEvaluator {
  evaluate(query: string, response: string, contexts?: string[]): Promise<EvaluationResult>;
}

interface EvaluationResult {
  query: string;
  response: string;
  score: number;
  feedback: string;
  passingGrade?: boolean;
  metadata?: Record<string, any>;
}

class FaithfulnessEvaluator implements BaseEvaluator {
  constructor(options?: { llm?: LLM });
  evaluate(query: string, response: string, contexts: string[]): Promise<EvaluationResult>;
}

class RelevancyEvaluator implements BaseEvaluator {
  constructor(options?: { llm?: LLM });
  evaluate(query: string, response: string, contexts?: string[]): Promise<EvaluationResult>;
}

class CorrectnessEvaluator implements BaseEvaluator {
  constructor(options?: { llm?: LLM });
  evaluate(query: string, response: string, contexts?: string[], referenceAnswer?: string): Promise<EvaluationResult>;
}
```

[Evaluation Framework](./evaluation.md)
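
A sketch scoring a generated answer against its retrieved contexts using the evaluator signatures above:

```typescript
import { FaithfulnessEvaluator } from "llamaindex";

// llm is optional in the constructor above; the globally configured LLM is presumably used otherwise
const evaluator = new FaithfulnessEvaluator();

const result = await evaluator.evaluate(
  "What runtimes does LlamaIndex.TS support?",
  "It supports Node.js, Deno, Bun, and several edge runtimes.",
  ["LlamaIndex.TS supports Node.js, Deno, Bun, Vercel Edge Runtime, and Cloudflare Workers."],
);

if (result.passingGrade === false) {
  console.warn(result.feedback);
}
```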

## Types

### Core Schema Types

```typescript { .api }
abstract class BaseNode<T extends Metadata = Metadata> {
  id_: string;
  embedding?: number[];
  metadata: T;
  excludedEmbedMetadataKeys: string[];
  excludedLlmMetadataKeys: string[];
  relationships: Partial<Record<NodeRelationship, RelatedNodeType<T>>>;
  hash: string;

  abstract get type(): ObjectType;
  abstract getContent(metadataMode?: MetadataMode): string;
  abstract getMetadataStr(metadataMode?: MetadataMode): string;
  abstract setContent(value: string): void;
}

class TextNode extends BaseNode {
  constructor(init?: {
    text?: string;
    id_?: string;
    metadata?: Record<string, any>;
    embedding?: number[];
    excludedEmbedMetadataKeys?: string[];
    excludedLlmMetadataKeys?: string[];
    relationships?: Partial<Record<NodeRelationship, RelatedNodeType>>;
  });

  text: string;
  startCharIdx?: number;
  endCharIdx?: number;
  textTemplate: string;
  metadataTemplate: string;
  metadataSeparator: string;
}

class ImageNode extends BaseNode {
  constructor(init?: {
    image?: string;
    text?: string;
    mimetype?: string;
    imageUrl?: string;
    imagePath?: string;
    id_?: string;
    metadata?: Record<string, any>;
  });

  image?: string;
  imageUrl?: string;
  imagePath?: string;
  imageMimetype?: string;
}

class Document extends TextNode {
  constructor(init: {
    text: string;
    id_?: string;
    metadata?: Record<string, any>;
    excludedLlmMetadataKeys?: string[];
    excludedEmbedMetadataKeys?: string[];
    relationships?: Partial<Record<NodeRelationship, RelatedNodeType>>;
    mimetype?: string;
    textTemplate?: string;
  });

  docId?: string;
  mimetype?: string;
}

class EngineResponse {
  response: string;
  sourceNodes?: NodeWithScore[];
  metadata?: Record<string, any>;

  toString(): string;
  print(): void;
}

interface NodeWithScore {
  node: BaseNode;
  score?: number;
}

enum NodeRelationship {
  SOURCE = "SOURCE",
  PREVIOUS = "PREVIOUS",
  NEXT = "NEXT",
  PARENT = "PARENT",
  CHILD = "CHILD",
}

enum ObjectType {
  TEXT = "TEXT",
  IMAGE = "IMAGE",
  INDEX = "INDEX",
  DOCUMENT = "DOCUMENT",
  IMAGE_DOCUMENT = "IMAGE_DOCUMENT",
}

enum MetadataMode {
  ALL = "ALL",
  EMBED = "EMBED",
  LLM = "LLM",
  NONE = "NONE",
}

type Metadata = Record<string, any>;

interface RelatedNodeInfo<T extends Metadata = Metadata> {
  nodeId: string;
  nodeType?: ObjectType;
  metadata: T;
  hash?: string;
}

type RelatedNodeType<T extends Metadata = Metadata> =
  | RelatedNodeInfo<T>
  | RelatedNodeInfo<T>[];
```

### Configuration Types

```typescript { .api }
interface Config {
  prompt: PromptConfig;
  promptHelper: PromptHelper | null;
  embedModel: BaseEmbedding | null;
  nodeParser: NodeParser | null;
  callbackManager: CallbackManager | null;
  chunkSize: number | undefined;
  chunkOverlap: number | undefined;
}

interface PromptConfig {
  llm?: string;
  lang?: string;
}
```

### Structured Output Types

```typescript { .api }
interface StructuredOutput<T> {
  rawOutput: string;
  parsedOutput: T;
}

type UUID = `${string}-${string}-${string}-${string}-${string}`;

type ToolMetadataOnlyDescription = Pick<ToolMetadata, "description">;
```

## Constants

```typescript { .api }
const DEFAULT_CHUNK_SIZE: 1024;
const DEFAULT_CHUNK_OVERLAP: 200;
const DEFAULT_CHUNK_OVERLAP_RATIO: 0.1;
const DEFAULT_CONTEXT_WINDOW: 3900;
const DEFAULT_NUM_OUTPUTS: 256;
const DEFAULT_SIMILARITY_TOP_K: 2;
const DEFAULT_PADDING: 5;
const DEFAULT_BASE_URL: string;
const DEFAULT_COLLECTION: string;
const DEFAULT_DOC_STORE_PERSIST_FILENAME: "docstore.json";
const DEFAULT_GRAPH_STORE_PERSIST_FILENAME: "graph_store.json";
const DEFAULT_INDEX_STORE_PERSIST_FILENAME: "index_store.json";
const DEFAULT_NAMESPACE: "default";
const DEFAULT_PERSIST_DIR: "./storage";
const DEFAULT_PROJECT_NAME: string;
const DEFAULT_VECTOR_STORE_PERSIST_FILENAME: "vector_store.json";
```

## Event System

```typescript { .api }
interface LlamaIndexEventMaps {
  "llm-start": LLMStartEvent;
  "llm-end": LLMEndEvent;
  "llm-stream": LLMStreamEvent;
  "llm-tool-call": LLMToolCallEvent;
  "llm-tool-result": LLMToolResultEvent;
}

interface LLMStartEvent {
  id: string;
  timestamp: Date;
  payload: {
    messages: ChatMessage[];
    additionalKwargs?: Record<string, any>;
  };
}

interface LLMEndEvent {
  id: string;
  timestamp: Date;
  payload: {
    response: ChatResponse | CompletionResponse;
  };
}

interface LLMStreamEvent {
  id: string;
  timestamp: Date;
  payload: {
    chunk: string;
    snapshot: string;
  };
}

interface LLMToolCallEvent {
  id: string;
  timestamp: Date;
  payload: {
    toolCall: ToolCall;
  };
}

interface LLMToolResultEvent {
  id: string;
  timestamp: Date;
  payload: {
    toolResult: ToolOutput;
  };
}

type JSONValue = string | number | boolean | null | { [key: string]: JSONValue } | JSONValue[];
type JSONObject = { [key: string]: JSONValue };
type JSONArray = JSONValue[];
```
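
A sketch of listening for these events; the `on` method on the callback manager and the shape of the handler argument are assumptions, since only the event payload types are specified above:

```typescript
import { Settings } from "llamaindex";

// Assumption: the globally configured CallbackManager exposes an on(event, handler) API
Settings.callbackManager.on("llm-stream", (event) => {
  // Stream chunks arrive incrementally; `snapshot` holds the text generated so far
  process.stdout.write(event.payload.chunk);
});
```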