
# Prompt Management

The Prompt Management system lets you create, fetch, update, and compile prompts, with built-in caching, variable substitution, and placeholder support. It handles both text-based and chat-based prompts and integrates with LangChain.

## Capabilities

### Get Prompt

Retrieve a prompt by name with intelligent caching and fallback support.

```typescript { .api }
/**
 * Retrieves a prompt by name with intelligent caching
 *
 * Caching behavior:
 * - Fresh prompts are returned immediately from cache
 * - Expired prompts are returned from cache while being refreshed in background
 * - Cache misses trigger immediate fetch with optional fallback support
 *
 * @param name - Name of the prompt to retrieve
 * @param options - Optional retrieval configuration
 * @returns Promise that resolves to TextPromptClient or ChatPromptClient
 */
get(
  name: string,
  options?: {
    /** Specific version to retrieve (defaults to latest) */
    version?: number;
    /** Label to filter by (defaults to "production") */
    label?: string;
    /** Cache TTL in seconds (default: 60, set to 0 to disable caching) */
    cacheTtlSeconds?: number;
    /** Fallback content if prompt fetch fails */
    fallback?: string | ChatMessage[];
    /** Maximum retry attempts for failed requests */
    maxRetries?: number;
    /** Prompt type (auto-detected if not specified) */
    type?: "chat" | "text";
    /** Request timeout in milliseconds */
    fetchTimeoutMs?: number;
  }
): Promise<TextPromptClient | ChatPromptClient>;
```

**Usage Examples:**

```typescript
import { LangfuseClient } from '@langfuse/client';

const langfuse = new LangfuseClient();

// Get latest version with default caching (60 seconds)
const prompt = await langfuse.prompt.get("my-prompt");

// Get specific version
const v2Prompt = await langfuse.prompt.get("my-prompt", {
  version: 2
});

// Get with label filter
const prodPrompt = await langfuse.prompt.get("my-prompt", {
  label: "production"
});

// Get with staging label
const stagingPrompt = await langfuse.prompt.get("my-prompt", {
  label: "staging"
});

// Disable caching (always fetch fresh)
const freshPrompt = await langfuse.prompt.get("my-prompt", {
  cacheTtlSeconds: 0
});

// Custom cache TTL (5 minutes)
const cachedPrompt = await langfuse.prompt.get("my-prompt", {
  cacheTtlSeconds: 300
});

// With text fallback
const textPromptWithFallback = await langfuse.prompt.get("my-prompt", {
  type: "text",
  fallback: "Hello {{name}}! This is a fallback prompt."
});

// With chat fallback
const chatPromptWithFallback = await langfuse.prompt.get("conversation", {
  type: "chat",
  fallback: [
    { role: "system", content: "You are a helpful assistant" },
    { role: "user", content: "Hello {{name}}" }
  ]
});

// With retry configuration and timeout
const robustPrompt = await langfuse.prompt.get("my-prompt", {
  maxRetries: 3,
  fetchTimeoutMs: 5000
});

// Type-specific retrieval (enforces type at compile time)
const textPrompt = await langfuse.prompt.get("greeting", { type: "text" });
// textPrompt is TextPromptClient

const chatPrompt = await langfuse.prompt.get("conversation", { type: "chat" });
// chatPrompt is ChatPromptClient
```

**Caching Behavior:**

The prompt cache implements a stale-while-revalidate pattern:

1. **Cache Hit (Fresh)**: Returns the cached prompt immediately
2. **Cache Hit (Expired)**: Returns the stale cached prompt immediately and refreshes it in the background
3. **Cache Miss**: Fetches from the API immediately (with fallback support if the fetch fails)
4. **Concurrent Requests**: Multiple concurrent requests for the same expired prompt trigger only one refresh
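The stale-while-revalidate flow can be sketched as a small standalone cache. This is illustrative only; `SwrCache` and its internals are assumptions for the sketch, not the SDK's actual implementation:

```typescript
// Minimal stale-while-revalidate sketch (not the SDK's implementation).
type Entry<T> = { value: T; expiresAt: number };

class SwrCache<T> {
  private entries = new Map<string, Entry<T>>();
  private refreshing = new Set<string>();

  constructor(
    private fetcher: (key: string) => Promise<T>,
    private ttlMs: number
  ) {}

  async get(key: string): Promise<T> {
    const entry = this.entries.get(key);
    if (!entry) {
      // Cache miss: fetch immediately and store the result.
      const value = await this.fetcher(key);
      this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
      return value;
    }
    if (Date.now() > entry.expiresAt && !this.refreshing.has(key)) {
      // Expired: return the stale value now, refresh once in the background.
      this.refreshing.add(key);
      this.fetcher(key)
        .then((value) =>
          this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs })
        )
        .finally(() => this.refreshing.delete(key));
    }
    return entry.value;
  }
}
```

The `refreshing` set is what gives point 4 above: a second concurrent `get()` on an expired key sees the in-flight refresh and skips starting another.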

**Cache Keys:**

Cache keys are generated based on:

- Prompt name
- Version (if specified) or label (defaults to "production")

Examples:

- `my-prompt-label:production` (default)
- `my-prompt-version:2` (specific version)
- `my-prompt-label:staging` (specific label)
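A hypothetical helper that mirrors the key formats above (the SDK's actual key derivation is internal and may differ; `buildCacheKey` is not part of its public API):

```typescript
// Hypothetical helper mirroring the documented cache-key formats.
function buildCacheKey(
  name: string,
  options?: { version?: number; label?: string }
): string {
  // A specific version takes precedence over a label.
  if (options?.version !== undefined) {
    return `${name}-version:${options.version}`;
  }
  return `${name}-label:${options?.label ?? "production"}`;
}

console.log(buildCacheKey("my-prompt")); // "my-prompt-label:production"
console.log(buildCacheKey("my-prompt", { version: 2 })); // "my-prompt-version:2"
console.log(buildCacheKey("my-prompt", { label: "staging" })); // "my-prompt-label:staging"
```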


### Create Prompt

Create a new prompt or a new version of an existing prompt.

```typescript { .api }
/**
 * Creates a new prompt in Langfuse
 *
 * Supports both text and chat prompts. Chat prompts can include placeholders
 * for dynamic content insertion.
 *
 * @param body - The prompt data to create
 * @returns Promise that resolves to TextPromptClient or ChatPromptClient
 */
create(body: CreatePromptRequest.Text): Promise<TextPromptClient>;
create(body: CreatePromptRequest.Chat): Promise<ChatPromptClient>;
create(body: CreateChatPromptBodyWithPlaceholders): Promise<ChatPromptClient>;

namespace CreatePromptRequest {
  interface Text {
    /** Unique name for the prompt */
    name: string;
    /** Text content with optional {{variable}} placeholders */
    prompt: string;
    /** Optional type specification (defaults to "text") */
    type?: "text";
    /** Configuration object (e.g., model settings) */
    config?: unknown;
    /** List of deployment labels for this prompt version */
    labels?: string[];
    /** List of tags to apply to all versions of this prompt */
    tags?: string[];
    /** Commit message for this prompt version */
    commitMessage?: string;
  }

  interface Chat {
    /** Unique name for the prompt */
    name: string;
    /** Chat prompt type */
    type: "chat";
    /** Array of chat messages and/or placeholders */
    prompt: ChatMessageWithPlaceholders[];
    /** Configuration object (e.g., model settings) */
    config?: unknown;
    /** List of deployment labels for this prompt version */
    labels?: string[];
    /** List of tags to apply to all versions of this prompt */
    tags?: string[];
    /** Commit message for this prompt version */
    commitMessage?: string;
  }
}

interface CreateChatPromptBodyWithPlaceholders {
  type: "chat";
  /** Array mixing regular chat messages and placeholder messages */
  prompt: (ChatMessage | ChatMessageWithPlaceholders)[];
  // ... other properties same as CreatePromptRequest.Chat
}
```

**Usage Examples:**

```typescript
import { LangfuseClient, ChatMessageType } from '@langfuse/client';

const langfuse = new LangfuseClient();

// Create a simple text prompt
const textPrompt = await langfuse.prompt.create({
  name: "greeting",
  prompt: "Hello {{name}}! Welcome to {{location}}.",
  labels: ["production"],
  config: {
    temperature: 0.7,
    model: "gpt-4"
  }
});

// Create text prompt with tags
const taggedPrompt = await langfuse.prompt.create({
  name: "sql-generator",
  prompt: "Generate SQL for: {{task}}",
  tags: ["database", "sql"],
  labels: ["production"],
  commitMessage: "Initial version of SQL generator"
});

// Create a chat prompt
const chatPrompt = await langfuse.prompt.create({
  name: "assistant",
  type: "chat",
  prompt: [
    { role: "system", content: "You are a {{role}} assistant." },
    { role: "user", content: "{{user_message}}" }
  ],
  labels: ["production"],
  config: {
    temperature: 0.8,
    max_tokens: 1000
  }
});

// Create chat prompt with placeholders
const chatWithPlaceholders = await langfuse.prompt.create({
  name: "conversation-with-history",
  type: "chat",
  prompt: [
    { role: "system", content: "You are a helpful assistant." },
    { type: ChatMessageType.Placeholder, name: "conversation_history" },
    { role: "user", content: "{{current_question}}" }
  ],
  labels: ["production"],
  tags: ["conversational", "memory"],
  commitMessage: "Added conversation history placeholder"
});

// Create multi-placeholder chat prompt
const complexChat = await langfuse.prompt.create({
  name: "advanced-assistant",
  type: "chat",
  prompt: [
    { role: "system", content: "You are {{assistant_type}}. Context: {{context}}" },
    { type: ChatMessageType.Placeholder, name: "few_shot_examples" },
    { type: ChatMessageType.Placeholder, name: "conversation_history" },
    { role: "user", content: "{{user_query}}" }
  ],
  labels: ["staging"],
  config: {
    model: "gpt-4-turbo",
    temperature: 0.9
  }
});

// Create versioned prompt
const v2Prompt = await langfuse.prompt.create({
  name: "greeting", // Same name creates new version
  prompt: "Hi {{name}}! Great to see you in {{location}}.",
  labels: ["staging"],
  commitMessage: "v2: Updated greeting style"
});

// Access created prompt properties
console.log(textPrompt.name); // "greeting"
console.log(textPrompt.version); // 1
console.log(textPrompt.type); // "text"
console.log(textPrompt.prompt); // "Hello {{name}}! ..."
console.log(textPrompt.config); // { temperature: 0.7, model: "gpt-4" }
console.log(textPrompt.labels); // ["production"]
console.log(textPrompt.tags); // []
console.log(textPrompt.isFallback); // false
```

### Update Prompt

Update the labels of an existing prompt version.

```typescript { .api }
/**
 * Updates the labels of an existing prompt version
 *
 * After updating, the prompt cache is automatically invalidated
 * to ensure fresh data on next fetch.
 *
 * @param params - Update parameters
 * @returns Promise that resolves to the updated Prompt
 */
update(params: {
  /** Name of the prompt to update */
  name: string;
  /** Version number of the prompt to update */
  version: number;
  /** New labels to apply to the prompt version */
  newLabels: string[];
}): Promise<Prompt>;
```

**Usage Examples:**

```typescript
import { LangfuseClient } from '@langfuse/client';

const langfuse = new LangfuseClient();

// Create a prompt first
const prompt = await langfuse.prompt.create({
  name: "my-prompt",
  prompt: "Hello {{name}}",
  labels: ["staging"]
});

// Promote to production
const updatedPrompt = await langfuse.prompt.update({
  name: "my-prompt",
  version: prompt.version,
  newLabels: ["production"]
});

// Add multiple labels
const multiLabelPrompt = await langfuse.prompt.update({
  name: "my-prompt",
  version: prompt.version,
  newLabels: ["production", "stable", "v1.0"]
});

// Move from production to staging
const downgraded = await langfuse.prompt.update({
  name: "my-prompt",
  version: 2,
  newLabels: ["staging"]
});

// Cache invalidation happens automatically
// Next get() call will fetch fresh data
const freshPrompt = await langfuse.prompt.get("my-prompt");
```

## Prompt Clients

### TextPromptClient

Client for working with text-based prompts, providing compilation and LangChain conversion.

```typescript { .api }
class TextPromptClient {
  /** The name of the prompt */
  readonly name: string;

  /** The version number of the prompt */
  readonly version: number;

  /** The text content of the prompt with {{variable}} placeholders */
  readonly prompt: string;

  /** Configuration object associated with the prompt */
  readonly config: unknown;

  /** Labels associated with the prompt */
  readonly labels: string[];

  /** Tags associated with the prompt */
  readonly tags: string[];

  /** Whether this prompt client is using fallback content */
  readonly isFallback: boolean;

  /** The type of prompt (always "text") */
  readonly type: "text";

  /** Optional commit message for the prompt version */
  readonly commitMessage: string | null | undefined;

  /** The original prompt response from the API */
  readonly promptResponse: Prompt.Text;

  /** The dependency resolution graph for the current prompt (null if prompt has no dependencies) */
  readonly resolutionGraph?: Record<string, unknown>;

  /**
   * Compiles the text prompt by substituting variables
   *
   * Uses Mustache templating to replace {{variable}} placeholders with provided values.
   *
   * @param variables - Key-value pairs for variable substitution
   * @returns The compiled text with variables substituted
   */
  compile(variables?: Record<string, string>): string;

  /**
   * Converts the prompt to LangChain PromptTemplate format
   *
   * Transforms Mustache-style {{variable}} syntax to LangChain's {variable} format.
   * JSON braces are automatically escaped to avoid conflicts with variables.
   *
   * @returns The prompt string compatible with LangChain PromptTemplate
   */
  getLangchainPrompt(): string;

  /**
   * Serializes the prompt client to JSON
   *
   * @returns JSON string representation of the prompt
   */
  toJSON(): string;
}
```

**Usage Examples:**

```typescript
import { LangfuseClient } from '@langfuse/client';
import { PromptTemplate } from '@langchain/core/prompts';

const langfuse = new LangfuseClient();

// Get a text prompt
const prompt = await langfuse.prompt.get("greeting", { type: "text" });

// Access prompt properties
console.log(prompt.name); // "greeting"
console.log(prompt.version); // 1
console.log(prompt.type); // "text"
console.log(prompt.prompt); // "Hello {{name}}! ..."
console.log(prompt.config); // { temperature: 0.7 }
console.log(prompt.labels); // ["production"]
console.log(prompt.tags); // ["greeting", "onboarding"]
console.log(prompt.isFallback); // false
console.log(prompt.commitMessage); // "Initial version"

// Compile with variable substitution
const compiled = prompt.compile({
  name: "Alice",
  location: "New York"
});
console.log(compiled);
// "Hello Alice! Welcome to New York."

// Compile with partial variables (unmatched remain as {{variable}})
const partial = prompt.compile({ name: "Bob" });
// "Hello Bob! Welcome to {{location}}."

// Convert to LangChain format
const langchainFormat = prompt.getLangchainPrompt();
console.log(langchainFormat);
// "Hello {name}! Welcome to {location}." ({{}} -> {})

// Use with LangChain
const langchainPrompt = PromptTemplate.fromTemplate(
  prompt.getLangchainPrompt()
);
const result = await langchainPrompt.format({
  name: "Alice",
  location: "Paris"
});

// Serialize to JSON
const json = prompt.toJSON();
const parsed = JSON.parse(json);
console.log(parsed);
// {
//   name: "greeting",
//   prompt: "Hello {{name}}! ...",
//   version: 1,
//   type: "text",
//   config: { temperature: 0.7 },
//   labels: ["production"],
//   tags: ["greeting"],
//   isFallback: false
// }

// Handle JSON in prompt content
const jsonPrompt = await langfuse.prompt.create({
  name: "json-template",
  prompt: `Generate JSON for {{task}}:
{
  "user": "{{username}}",
  "task": "{{task}}",
  "metadata": {
    "timestamp": "{{timestamp}}"
  }
}`
});

// LangChain conversion automatically escapes JSON braces
const langchainJson = jsonPrompt.getLangchainPrompt();
// Literal JSON braces are doubled, variable braces become single:
// { } for JSON becomes {{ }}
// {{variable}} becomes {variable}

const langchainTemplate = PromptTemplate.fromTemplate(langchainJson);
const formatted = await langchainTemplate.format({
  task: "analysis",
  username: "alice",
  timestamp: "2024-01-01"
});
```
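The brace conversion can be approximated with a small standalone function. This is a sketch of the conversion rules described above, assuming simple `{{word}}` variables; it is not the SDK's actual implementation:

```typescript
// Sketch of Mustache-to-LangChain brace conversion (illustrative only).
function toLangchainTemplate(prompt: string): string {
  const vars: string[] = [];
  // Protect {{variable}} spans before escaping literal braces.
  const shielded = prompt.replace(/\{\{(\w+)\}\}/g, (_, name: string) => {
    vars.push(name);
    return `\u0000${vars.length - 1}\u0000`;
  });
  // Escape literal JSON braces for LangChain ({ -> {{, } -> }}).
  const escaped = shielded.replace(/\{/g, "{{").replace(/\}/g, "}}");
  // Restore variables as single-brace LangChain placeholders.
  return escaped.replace(/\u0000(\d+)\u0000/g, (_, i: string) => `{${vars[Number(i)]}}`);
}

console.log(toLangchainTemplate('Hello {{name}}! JSON: {"id": 1}'));
// 'Hello {name}! JSON: {{"id": 1}}'
```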

### ChatPromptClient

Client for working with chat-based prompts, providing compilation, placeholder resolution, and LangChain conversion.

```typescript { .api }
class ChatPromptClient {
  /** The name of the prompt */
  readonly name: string;

  /** The version number of the prompt */
  readonly version: number;

  /** The chat messages that make up the prompt */
  readonly prompt: ChatMessageWithPlaceholders[];

  /** Configuration object associated with the prompt */
  readonly config: unknown;

  /** Labels associated with the prompt */
  readonly labels: string[];

  /** Tags associated with the prompt */
  readonly tags: string[];

  /** Whether this prompt client is using fallback content */
  readonly isFallback: boolean;

  /** The type of prompt (always "chat") */
  readonly type: "chat";

  /** Optional commit message for the prompt version */
  readonly commitMessage: string | null | undefined;

  /** The original prompt response from the API */
  readonly promptResponse: Prompt.Chat;

  /** The dependency resolution graph for the current prompt (null if prompt has no dependencies) */
  readonly resolutionGraph?: Record<string, unknown>;

  /**
   * Compiles the chat prompt by replacing placeholders and variables
   *
   * First resolves placeholders with provided values, then applies variable substitution
   * to message content using Mustache templating. Unresolved placeholders remain
   * as placeholder objects in the output.
   *
   * @param variables - Key-value pairs for Mustache variable substitution in message content
   * @param placeholders - Key-value pairs where keys are placeholder names and values are ChatMessage arrays
   * @returns Array of ChatMessage objects and unresolved placeholder objects
   */
  compile(
    variables?: Record<string, string>,
    placeholders?: Record<string, any>
  ): (ChatMessageOrPlaceholder | any)[];

  /**
   * Converts the prompt to LangChain ChatPromptTemplate format
   *
   * Resolves placeholders with provided values and converts unresolved ones
   * to LangChain MessagesPlaceholder objects. Transforms variables from
   * {{var}} to {var} format without rendering them.
   *
   * @param options - Configuration object
   * @param options.placeholders - Key-value pairs for placeholder resolution
   * @returns Array of ChatMessage objects and LangChain MessagesPlaceholder objects
   */
  getLangchainPrompt(options?: {
    placeholders?: Record<string, any>;
  }): (ChatMessage | LangchainMessagesPlaceholder | any)[];

  /**
   * Serializes the prompt client to JSON
   *
   * @returns JSON string representation of the prompt
   */
  toJSON(): string;
}
```

**Usage Examples:**

```typescript
import { LangfuseClient, ChatMessageType } from '@langfuse/client';
import { ChatPromptTemplate } from '@langchain/core/prompts';

const langfuse = new LangfuseClient();

// Get a chat prompt
const prompt = await langfuse.prompt.get("conversation", { type: "chat" });

// Access prompt properties
console.log(prompt.name); // "conversation"
console.log(prompt.version); // 1
console.log(prompt.type); // "chat"
console.log(prompt.prompt); // Array of ChatMessageWithPlaceholders
console.log(prompt.config); // { temperature: 0.8, model: "gpt-4" }
console.log(prompt.labels); // ["production"]
console.log(prompt.tags); // ["conversational"]
console.log(prompt.isFallback); // false

// Compile with variable substitution only
const compiledMessages = prompt.compile({
  user_name: "Alice",
  assistant_type: "helpful"
});
console.log(compiledMessages);
// [
//   { role: "system", content: "You are a helpful assistant." },
//   { type: "placeholder", name: "history" }, // Unresolved placeholder
//   { role: "user", content: "Hello Alice!" }
// ]

// Compile with variables and placeholders
const fullyCompiled = prompt.compile(
  { user_name: "Alice", assistant_type: "helpful" },
  {
    history: [
      { role: "user", content: "Previous question" },
      { role: "assistant", content: "Previous answer" }
    ]
  }
);
console.log(fullyCompiled);
// [
//   { role: "system", content: "You are a helpful assistant." },
//   { role: "user", content: "Previous question" },
//   { role: "assistant", content: "Previous answer" },
//   { role: "user", content: "Hello Alice!" }
// ]

// Empty placeholder array removes placeholder
const noHistory = prompt.compile(
  { user_name: "Bob" },
  { history: [] } // Empty array - placeholder omitted
);
// Placeholder is removed from output

// Convert to LangChain format (unresolved placeholders)
const langchainMessages = prompt.getLangchainPrompt();
console.log(langchainMessages);
// [
//   { role: "system", content: "You are a {assistant_type} assistant." },
//   ["placeholder", "{history}"], // LangChain MessagesPlaceholder format
//   { role: "user", content: "Hello {user_name}!" }
// ]

// Convert to LangChain format (with placeholder resolution)
const resolvedLangchain = prompt.getLangchainPrompt({
  placeholders: {
    history: [
      { role: "user", content: "Hi" },
      { role: "assistant", content: "Hello!" }
    ]
  }
});
console.log(resolvedLangchain);
// [
//   { role: "system", content: "You are a {assistant_type} assistant." },
//   { role: "user", content: "Hi" },
//   { role: "assistant", content: "Hello!" },
//   { role: "user", content: "Hello {user_name}!" }
// ]

// Use with LangChain
const langchainPrompt = ChatPromptTemplate.fromMessages(
  prompt.getLangchainPrompt()
);
const formatted = await langchainPrompt.formatMessages({
  assistant_type: "knowledgeable",
  user_name: "Alice",
  history: [
    { role: "user", content: "What is AI?" },
    { role: "assistant", content: "AI stands for Artificial Intelligence." }
  ]
});

// Multi-placeholder example
const complexPrompt = await langfuse.prompt.create({
  name: "multi-placeholder",
  type: "chat",
  prompt: [
    { role: "system", content: "You are {{role}}." },
    { type: ChatMessageType.Placeholder, name: "examples" },
    { type: ChatMessageType.Placeholder, name: "history" },
    { role: "user", content: "{{query}}" }
  ]
});

const compiled = complexPrompt.compile(
  { role: "expert", query: "Help me" },
  {
    examples: [
      { role: "user", content: "Example Q" },
      { role: "assistant", content: "Example A" }
    ],
    history: [
      { role: "user", content: "Previous Q" },
      { role: "assistant", content: "Previous A" }
    ]
  }
);
// All placeholders resolved, variables substituted

// Serialize to JSON
const json = prompt.toJSON();
const parsed = JSON.parse(json);
console.log(parsed);
// {
//   name: "conversation",
//   prompt: [
//     { role: "system", content: "You are {{assistant_type}} assistant." },
//     { type: "placeholder", name: "history" },
//     { role: "user", content: "Hello {{user_name}}!" }
//   ],
//   version: 1,
//   type: "chat",
//   config: { temperature: 0.8 },
//   labels: ["production"],
//   tags: ["conversational"],
//   isFallback: false
// }
```

## Type Definitions

### ChatMessageType

Enumeration of chat message types in prompts.

```typescript { .api }
enum ChatMessageType {
  /** Regular chat message with role and content */
  ChatMessage = "chatmessage",

  /** Placeholder for dynamic content insertion */
  Placeholder = "placeholder"
}
```

**Usage Examples:**

```typescript
import { ChatMessageType, LangfuseClient } from '@langfuse/client';

const langfuse = new LangfuseClient();

// Use in prompt creation
const prompt = await langfuse.prompt.create({
  name: "with-placeholder",
  type: "chat",
  prompt: [
    { role: "system", content: "System message" },
    { type: ChatMessageType.Placeholder, name: "dynamic_content" },
    { role: "user", content: "User message" }
  ]
});

// Check message type
for (const message of prompt.prompt) {
  if ('type' in message && message.type === ChatMessageType.Placeholder) {
    console.log(`Found placeholder: ${message.name}`);
  } else if ('type' in message && message.type === ChatMessageType.ChatMessage) {
    console.log(`Found message: ${message.role} - ${message.content}`);
  }
}
```

768

769

### ChatMessage

770

771

Represents a standard chat message with role and content.

772

773

```typescript { .api }
interface ChatMessage {
  /** The role of the message sender (e.g., "system", "user", "assistant") */
  role: string;

  /** The content of the message */
  content: string;
}
```

**Usage Examples:**

```typescript
import type { ChatMessage } from '@langfuse/client';

// Create chat messages
const messages: ChatMessage[] = [
  { role: "system", content: "You are helpful" },
  { role: "user", content: "Hello" },
  { role: "assistant", content: "Hi there!" }
];

// Use as placeholder values
const compiled = chatPrompt.compile(
  { name: "Alice" },
  { history: messages }
);

// Use as fallback
const prompt = await langfuse.prompt.get("chat", {
  type: "chat",
  fallback: messages
});
```

807

808

### ChatMessageWithPlaceholders

809

810

Union type for chat messages that can include placeholders.

811

812

```typescript { .api }
type ChatMessageWithPlaceholders =
  | { type: "chatmessage"; role: string; content: string }
  | { type: "placeholder"; name: string };
```

817

818

### ChatMessageOrPlaceholder

819

820

Union type representing either a chat message or a placeholder.

821

822

```typescript { .api }
type ChatMessageOrPlaceholder =
  | ChatMessage
  | ({ type: ChatMessageType.Placeholder } & PlaceholderMessage);
```

827

828

**Usage Examples:**

829

830

```typescript
import { ChatMessageType } from '@langfuse/client';
import type { ChatMessage, ChatMessageOrPlaceholder } from '@langfuse/client';

// Return type of compile() method
const compiled: ChatMessageOrPlaceholder[] = chatPrompt.compile(
  { user: "Alice" },
  { history: [] }
);

// Filter messages and placeholders
const actualMessages = compiled.filter(
  (item): item is ChatMessage =>
    'role' in item && 'content' in item
);

const placeholders = compiled.filter(
  (item): item is { type: ChatMessageType.Placeholder; name: string } =>
    'type' in item && item.type === ChatMessageType.Placeholder
);
```

850

851

### PlaceholderMessage

852

853

Represents a placeholder for dynamic content insertion.

854

855

```typescript { .api }
interface PlaceholderMessage {
  /** Name of the placeholder variable */
  name: string;
}
```

861

862

**Usage Examples:**

863

864

```typescript
import { ChatMessageType } from '@langfuse/client';
import type { PlaceholderMessage } from '@langfuse/core';

// Create placeholder in prompt
const placeholder: PlaceholderMessage & { type: ChatMessageType.Placeholder } = {
  type: ChatMessageType.Placeholder,
  name: "conversation_history"
};

// Use in prompt creation
const prompt = await langfuse.prompt.create({
  name: "with-history",
  type: "chat",
  prompt: [
    { role: "system", content: "System" },
    placeholder,
    { role: "user", content: "Query" }
  ]
});
```

885

886

### LangchainMessagesPlaceholder

887

888

Represents a LangChain MessagesPlaceholder object for unresolved placeholders.

889

890

```typescript { .api }
type LangchainMessagesPlaceholder = {
  /** Name of the variable that will provide the messages */
  variableName: string;

  /** Whether the placeholder is optional (defaults to false) */
  optional?: boolean;
};
```

899

900

**Usage Examples:**

901

902

```typescript
import type { LangchainMessagesPlaceholder } from '@langfuse/client';
import { ChatPromptTemplate, MessagesPlaceholder } from '@langchain/core/prompts';

// getLangchainPrompt() returns this format for unresolved placeholders
const langchainMessages = chatPrompt.getLangchainPrompt();

// LangChain MessagesPlaceholder is represented as a tuple
// ["placeholder", "{variableName}"]
const placeholderTuple = langchainMessages.find(
  item => Array.isArray(item) && item[0] === "placeholder"
);
// ["placeholder", "{history}"]

// Use directly with LangChain
const langchainPrompt = ChatPromptTemplate.fromMessages(
  chatPrompt.getLangchainPrompt()
);

// MessagesPlaceholder automatically created for unresolved placeholders
await langchainPrompt.formatMessages({
  history: [
    { role: "user", content: "Question" },
    { role: "assistant", content: "Answer" }
  ]
});
```

930

931

### CreateChatPromptBodyWithPlaceholders

932

933

Type for creating chat prompts that support both regular messages and placeholders.

934

935

```typescript { .api }
type CreateChatPromptBodyWithPlaceholders = {
  /** Specifies this is a chat prompt */
  type: "chat";

  /** Unique name for the prompt */
  name: string;

  /** Array of chat messages and/or placeholders */
  prompt: (ChatMessage | ChatMessageWithPlaceholders)[];

  /** Configuration object (e.g., model settings) */
  config?: unknown;

  /** List of deployment labels for this prompt version */
  labels?: string[];

  /** List of tags to apply to all versions of this prompt */
  tags?: string[];

  /** Commit message for this prompt version */
  commitMessage?: string;
};
```

959

960

**Usage Examples:**

961

962

```typescript
import type { CreateChatPromptBodyWithPlaceholders } from '@langfuse/client';
import { ChatMessageType } from '@langfuse/client';

// Create prompt with mixed message types
const promptBody: CreateChatPromptBodyWithPlaceholders = {
  name: "flexible-chat",
  type: "chat",
  prompt: [
    // Regular chat message (no type field needed)
    { role: "system", content: "You are {{role}}" },
    // Explicit placeholder
    { type: ChatMessageType.Placeholder, name: "examples" },
    // Another regular message
    { role: "user", content: "{{query}}" }
  ],
  labels: ["production"],
  config: { temperature: 0.7 }
};

const created = await langfuse.prompt.create(promptBody);

// Backwards compatible: ChatMessage objects automatically get type field
const simpleBody = {
  name: "simple-chat",
  type: "chat" as const,
  prompt: [
    { role: "user", content: "Hello" }
    // Automatically converted to { type: "chatmessage", role: "user", content: "Hello" }
  ]
};
```

994

995

### LangfusePromptClient

996

997

Union type representing either a text or chat prompt client.

998

999

```typescript { .api }

1000

type LangfusePromptClient = TextPromptClient | ChatPromptClient;

1001

```

1002

1003

**Usage Examples:**

1004

1005

```typescript

1006

import type { LangfusePromptClient } from '@langfuse/client';

1007

1008

// Return type of get() without type specification

1009

const prompt: LangfusePromptClient = await langfuse.prompt.get("unknown-type");

1010

1011

// Type narrowing

1012

if (prompt.type === "text") {

1013

// prompt is TextPromptClient

1014

const compiled = prompt.compile({ name: "Alice" });

1015

} else {

1016

// prompt is ChatPromptClient

1017

const compiled = prompt.compile(

1018

{ name: "Alice" },

1019

{ history: [] }

1020

);

1021

}

1022

1023

// Type guard function

1024

function isTextPrompt(prompt: LangfusePromptClient): prompt is TextPromptClient {

1025

return prompt.type === "text";

1026

}

1027

1028

function isChatPrompt(prompt: LangfusePromptClient): prompt is ChatPromptClient {

1029

return prompt.type === "chat";

1030

}

1031

1032

// Use type guards

1033

if (isTextPrompt(prompt)) {

1034

console.log("Text prompt:", prompt.prompt);

1035

} else if (isChatPrompt(prompt)) {

1036

console.log("Chat prompt with", prompt.prompt.length, "messages");

1037

}

1038

```

1039

1040

## Advanced Usage

1041

1042

### Variable Substitution

1043

1044

Prompts support Mustache-style variable substitution with `{{variable}}` syntax.

1045

1046

**Text Prompts:**

1047

1048

```typescript

1049

const prompt = await langfuse.prompt.create({

1050

name: "template",

1051

prompt: "Hello {{name}}! You have {{count}} new messages."

1052

});

1053

1054

// Compile with all variables

1055

const full = prompt.compile({

1056

name: "Alice",

1057

count: "5"

1058

});

1059

// "Hello Alice! You have 5 new messages."

1060

1061

// Partial compilation

1062

const partial = prompt.compile({ name: "Bob" });

1063

// "Hello Bob! You have {{count}} new messages."

1064

1065

// No escaping - JSON safe

1066

const jsonPrompt = prompt.compile({

1067

data: JSON.stringify({ key: "value" })

1068

});

1069

// Special characters are not HTML-escaped

1070

```

1071

1072
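The substitution rules above (known variables replaced, unknown ones preserved for partial compilation, no HTML escaping) can be sketched as a small standalone helper. This is an illustrative reconstruction, not the SDK's actual `compile` implementation, and `compileTemplate` is a hypothetical name:

```typescript
// Illustrative sketch of mustache-style substitution as described above.
// Known variables are replaced; unknown ones are left intact so the
// result can be compiled again later (partial compilation).
function compileTemplate(
  template: string,
  vars: Record<string, string>
): string {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (match, key) =>
    key in vars ? vars[key] : match
  );
}

compileTemplate("Hello {{name}}! You have {{count}} new messages.", {
  name: "Alice",
  count: "5"
});
// "Hello Alice! You have 5 new messages."

compileTemplate("Hello {{name}}! You have {{count}} new messages.", {
  name: "Bob"
});
// "Hello Bob! You have {{count}} new messages." ({{count}} survives)
```

Because no escaping is applied, values containing JSON or special characters pass through verbatim, matching the behavior shown above.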

**Chat Prompts:**

```typescript
const chatPrompt = await langfuse.prompt.create({
  name: "chat-template",
  type: "chat",
  prompt: [
    { role: "system", content: "You are {{role}}" },
    { role: "user", content: "Help with {{task}}" }
  ]
});

const compiled = chatPrompt.compile({
  role: "expert",
  task: "coding"
});
// [
//   { role: "system", content: "You are expert" },
//   { role: "user", content: "Help with coding" }
// ]
```

### Placeholder Resolution

Chat prompts support placeholders for dynamic message arrays.

**Basic Placeholder Usage:**

```typescript
const prompt = await langfuse.prompt.create({
  name: "with-history",
  type: "chat",
  prompt: [
    { role: "system", content: "You are helpful" },
    { type: ChatMessageType.Placeholder, name: "history" },
    { role: "user", content: "{{query}}" }
  ]
});

// Resolve placeholder
const compiled = prompt.compile(
  { query: "What is AI?" },
  {
    history: [
      { role: "user", content: "Previous question" },
      { role: "assistant", content: "Previous answer" }
    ]
  }
);
// Placeholder replaced with provided messages

// Leave placeholder unresolved
const withPlaceholder = prompt.compile({ query: "What is AI?" });
// Placeholder remains in output as { type: "placeholder", name: "history" }

// Remove placeholder with empty array
const noHistory = prompt.compile(
  { query: "What is AI?" },
  { history: [] }
);
// Placeholder is omitted from output
```
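The three resolution rules above can be sketched as a standalone helper. This is an illustrative reconstruction under assumed minimal local types, not the SDK's own implementation or type definitions:

```typescript
// Minimal local types for illustration (not the SDK's definitions).
type Msg = { role: string; content: string };
type Placeholder = { type: "placeholder"; name: string };
type PromptItem = Msg | Placeholder;

// Sketch of the resolution rules described above:
// - placeholder with provided messages -> replaced in place
// - placeholder with no entry          -> kept unresolved
// - placeholder with an empty array    -> omitted from the output
function resolvePlaceholders(
  prompt: PromptItem[],
  placeholders: Record<string, Msg[]> = {}
): PromptItem[] {
  return prompt.flatMap((item): PromptItem[] => {
    if ("type" in item && item.type === "placeholder") {
      const value = placeholders[item.name];
      return value === undefined ? [item] : value;
    }
    return [item];
  });
}
```

Using `flatMap` keeps the pass O(n) in the number of messages: each placeholder is expanded (or dropped) in a single traversal.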

**Multiple Placeholders:**

```typescript
const multiPlaceholder = await langfuse.prompt.create({
  name: "multi",
  type: "chat",
  prompt: [
    { role: "system", content: "System" },
    { type: ChatMessageType.Placeholder, name: "examples" },
    { type: ChatMessageType.Placeholder, name: "history" },
    { role: "user", content: "{{query}}" }
  ]
});

const compiled = multiPlaceholder.compile(
  { query: "Help me" },
  {
    examples: [
      { role: "user", content: "Example Q" },
      { role: "assistant", content: "Example A" }
    ],
    history: [
      { role: "user", content: "Previous Q" },
      { role: "assistant", content: "Previous A" }
    ]
  }
);
// Both placeholders resolved in order
```

**Invalid Placeholder Values:**

```typescript
// Non-array placeholder values are stringified
const invalid = prompt.compile(
  { query: "Test" },
  { history: "not an array" } // Invalid type
);
// Invalid value is JSON.stringified: '"not an array"'
```

### LangChain Integration

Seamless integration with LangChain prompt templates.

**Text Prompts with LangChain:**

```typescript
import { PromptTemplate } from '@langchain/core/prompts';

const textPrompt = await langfuse.prompt.get("greeting", { type: "text" });

// Convert to LangChain format ({{var}} -> {var})
const langchainFormat = textPrompt.getLangchainPrompt();

// Create LangChain template
const template = PromptTemplate.fromTemplate(langchainFormat);

// Format with LangChain
const result = await template.format({
  name: "Alice",
  location: "Paris"
});

// JSON handling - braces are automatically escaped
const jsonPrompt = await langfuse.prompt.create({
  name: "json-template",
  prompt: `{
  "user": "{{username}}",
  "metadata": {
    "timestamp": "{{timestamp}}"
  }
}`
});

const langchainJson = jsonPrompt.getLangchainPrompt();
// JSON braces doubled {{}}, variable braces single {variable}
const jsonTemplate = PromptTemplate.fromTemplate(langchainJson);
const formatted = await jsonTemplate.format({
  username: "alice",
  timestamp: "2024-01-01"
});
// Valid JSON with variables substituted
```
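The brace handling described above can be sketched in two steps. This is an illustrative reconstruction, not the SDK's actual `getLangchainPrompt` implementation, and `toLangchainTemplate` is a hypothetical name:

```typescript
// Sketch of the {{var}} -> {var} conversion with JSON-brace escaping.
// Step 1: double every literal brace so LangChain treats it as literal.
// Step 2: a mustache variable {{var}} became {{{{var}}}} in step 1;
//         collapse it to LangChain's single-brace {var} form.
function toLangchainTemplate(prompt: string): string {
  const escaped = prompt.replace(/\{/g, "{{").replace(/\}/g, "}}");
  return escaped.replace(/\{\{\{\{\s*([\w.]+)\s*\}\}\}\}/g, "{$1}");
}

toLangchainTemplate("Hello {{name}}");
// "Hello {name}"

toLangchainTemplate('{"user": "{{username}}"}');
// '{{"user": "{username}"}}' - literal JSON braces doubled, variable single
```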

**Chat Prompts with LangChain:**

```typescript
import { ChatPromptTemplate, MessagesPlaceholder } from '@langchain/core/prompts';

const chatPrompt = await langfuse.prompt.get("conversation", { type: "chat" });

// Convert to LangChain format
const langchainMessages = chatPrompt.getLangchainPrompt();

// Create LangChain chat template
const template = ChatPromptTemplate.fromMessages(langchainMessages);

// Format with LangChain
const formatted = await template.formatMessages({
  role: "helpful",
  query: "What is AI?",
  history: [
    { role: "user", content: "Previous Q" },
    { role: "assistant", content: "Previous A" }
  ]
});

// Unresolved placeholders become MessagesPlaceholder
const withPlaceholder = await langfuse.prompt.create({
  name: "with-placeholder",
  type: "chat",
  prompt: [
    { role: "system", content: "System" },
    { type: ChatMessageType.Placeholder, name: "history" },
    { role: "user", content: "{query}" }
  ]
});

const langchainFormat = withPlaceholder.getLangchainPrompt();
// [
//   { role: "system", content: "System" },
//   ["placeholder", "{history}"], // LangChain MessagesPlaceholder tuple
//   { role: "user", content: "{query}" }
// ]

const promptTemplate = ChatPromptTemplate.fromMessages(langchainFormat);
// MessagesPlaceholder automatically created for "history" variable
```

**Resolved vs Unresolved Placeholders:**

```typescript
// Option 1: Resolve before LangChain conversion
const resolved = chatPrompt.getLangchainPrompt({
  placeholders: {
    history: [
      { role: "user", content: "Hi" },
      { role: "assistant", content: "Hello" }
    ]
  }
});
// Placeholder replaced with messages

// Option 2: Leave unresolved, let LangChain handle it
const unresolved = chatPrompt.getLangchainPrompt();
// Placeholder becomes MessagesPlaceholder

const template = ChatPromptTemplate.fromMessages(unresolved);
await template.formatMessages({
  history: [/* messages */], // Provided at format time
  // other variables
});
```

### Caching Strategies

Optimize performance with intelligent caching.

**Default Caching (60 seconds):**

```typescript
// First call fetches from API and caches
const prompt1 = await langfuse.prompt.get("my-prompt");

// Second call within 60 seconds uses cache (no API call)
const prompt2 = await langfuse.prompt.get("my-prompt");
```

**Custom Cache TTL:**

```typescript
// Cache for 5 minutes
const longCached = await langfuse.prompt.get("my-prompt", {
  cacheTtlSeconds: 300
});

// Cache for 1 hour
const veryLongCached = await langfuse.prompt.get("my-prompt", {
  cacheTtlSeconds: 3600
});
```

**Disable Caching:**

```typescript
// Always fetch fresh (no caching)
const fresh = await langfuse.prompt.get("my-prompt", {
  cacheTtlSeconds: 0
});
```

**Stale-While-Revalidate Pattern:**

```typescript
// After cache expires, returns stale cache while fetching fresh in background
const prompt = await langfuse.prompt.get("my-prompt");
// If cache is expired:
// 1. Returns old cached version immediately
// 2. Fetches fresh version in background
// 3. Updates cache for next request

// Concurrent requests to expired cache trigger only one refresh
const promises = [
  langfuse.prompt.get("my-prompt"),
  langfuse.prompt.get("my-prompt"),
  langfuse.prompt.get("my-prompt")
];
const results = await Promise.all(promises);
// Only one API call made, all get the same result
```
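The single-refresh behavior above can be sketched with an in-flight promise map: concurrent callers for the same key share one pending fetch. This is an illustrative pattern with hypothetical names (`inFlight`, `dedupedFetch`), not the SDK's internal code:

```typescript
// Illustrative in-flight deduplication: concurrent requests for the
// same key share a single fetch; the entry is cleared once it settles.
const inFlight = new Map<string, Promise<string>>();

function dedupedFetch(
  key: string,
  fetcher: () => Promise<string>
): Promise<string> {
  const existing = inFlight.get(key);
  if (existing) return existing;
  const pending = fetcher().finally(() => inFlight.delete(key));
  inFlight.set(key, pending);
  return pending;
}
```

Three concurrent `dedupedFetch("my-prompt", …)` calls resolve from one underlying fetch, mirroring the deduplicated refresh described above.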

**Cache Invalidation:**

```typescript
// Cache is automatically invalidated on update
await langfuse.prompt.update({
  name: "my-prompt",
  version: 1,
  newLabels: ["production"]
});

// Next get() fetches fresh data
const fresh = await langfuse.prompt.get("my-prompt");
```

**Cache Keys:**

```typescript
// Different cache keys for different retrieval options

// Key: "my-prompt-label:production"
await langfuse.prompt.get("my-prompt");

// Key: "my-prompt-label:staging" (different key)
await langfuse.prompt.get("my-prompt", { label: "staging" });

// Key: "my-prompt-version:2" (different key)
await langfuse.prompt.get("my-prompt", { version: 2 });

// Each has independent cache
```
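The key scheme shown in the comments above can be captured in a small helper. This is a hypothetical reconstruction of those keys for illustration (`cacheKey` is not an SDK export, and the real internal scheme may differ):

```typescript
// Hypothetical reconstruction of the cache-key scheme shown above.
// A pinned version takes precedence; otherwise the label is used,
// defaulting to "production".
function cacheKey(
  name: string,
  options: { version?: number; label?: string } = {}
): string {
  if (options.version !== undefined) {
    return `${name}-version:${options.version}`;
  }
  return `${name}-label:${options.label ?? "production"}`;
}

cacheKey("my-prompt");                       // "my-prompt-label:production"
cacheKey("my-prompt", { label: "staging" }); // "my-prompt-label:staging"
cacheKey("my-prompt", { version: 2 });       // "my-prompt-version:2"
```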

### Fallback Handling

Provide fallback content when prompt fetch fails.

**Text Fallback:**

```typescript
const prompt = await langfuse.prompt.get("my-prompt", {
  type: "text",
  fallback: "Default greeting: Hello {{name}}!"
});

// If "my-prompt" doesn't exist or fetch fails:
// - Returns TextPromptClient with fallback content
// - isFallback property is true
// - version is 0
// - labels reflect the provided label option or default

if (prompt.isFallback) {
  console.log("Using fallback content");
}
```

**Chat Fallback:**

```typescript
const chatPrompt = await langfuse.prompt.get("conversation", {
  type: "chat",
  fallback: [
    { role: "system", content: "You are a helpful assistant" },
    { role: "user", content: "Hello {{name}}" }
  ]
});

// If "conversation" doesn't exist or fetch fails:
// - Returns ChatPromptClient with fallback messages
// - isFallback property is true
// - version is 0

if (chatPrompt.isFallback) {
  console.warn("Prompt fetch failed, using fallback");
}
```

**Fallback Best Practices:**

```typescript
// Production safety: always provide fallback
const productionPrompt = await langfuse.prompt.get("critical-prompt", {
  type: "text",
  fallback: "Safe default prompt",
  maxRetries: 3,
  fetchTimeoutMs: 5000
});

// Development: fail fast without fallback
try {
  const devPrompt = await langfuse.prompt.get("test-prompt");
} catch (error) {
  console.error("Prompt not found:", error);
  // Handle error explicitly
}

// Conditional fallback
async function getPromptWithFallback(name: string) {
  const isProd = process.env.NODE_ENV === 'production';

  return await langfuse.prompt.get(name, {
    type: "text",
    fallback: isProd ? "Default safe prompt" : undefined,
    maxRetries: isProd ? 3 : 1
  });
}
```

### Error Handling

Handle various error scenarios gracefully.

**Prompt Not Found:**

```typescript
try {
  const prompt = await langfuse.prompt.get("non-existent");
} catch (error) {
  console.error("Prompt not found:", error.message);
  // Use fallback or default behavior
}
```

**Network Errors:**

```typescript
try {
  const prompt = await langfuse.prompt.get("my-prompt", {
    fetchTimeoutMs: 1000, // 1 second timeout
    maxRetries: 2
  });
} catch (error) {
  if (error.message.includes("timeout")) {
    console.error("Request timeout");
  } else if (error.message.includes("network")) {
    console.error("Network error");
  }
  // Fallback logic
}
```

**Version/Label Not Found:**

```typescript
try {
  // Version doesn't exist
  const prompt = await langfuse.prompt.get("my-prompt", { version: 999 });
} catch (error) {
  console.error("Version not found:", error.message);
}

try {
  // Label doesn't exist
  const prompt = await langfuse.prompt.get("my-prompt", {
    label: "non-existent"
  });
} catch (error) {
  console.error("Label not found:", error.message);
}
```

**Update Errors:**

```typescript
try {
  await langfuse.prompt.update({
    name: "non-existent",
    version: 1,
    newLabels: ["production"]
  });
} catch (error) {
  console.error("Update failed:", error.message);
}

try {
  await langfuse.prompt.update({
    name: "my-prompt",
    version: 999, // Invalid version
    newLabels: ["production"]
  });
} catch (error) {
  console.error("Invalid version:", error.message);
}
```

## TypeScript Support

Full type safety and inference for prompt operations.

**Type Inference:**

```typescript
// Type inferred based on 'type' option
const textPrompt = await langfuse.prompt.get("greeting", { type: "text" });
// textPrompt: TextPromptClient

const chatPrompt = await langfuse.prompt.get("conversation", { type: "chat" });
// chatPrompt: ChatPromptClient

// Without type specification
const prompt = await langfuse.prompt.get("unknown");
// prompt: TextPromptClient | ChatPromptClient
```

**Type Guards:**

```typescript
import { TextPromptClient, ChatPromptClient } from '@langfuse/client';

const prompt = await langfuse.prompt.get("unknown");

if (prompt instanceof TextPromptClient) {
  const text: string = prompt.compile({ name: "Alice" });
}

if (prompt instanceof ChatPromptClient) {
  const messages = prompt.compile(
    { name: "Alice" },
    { history: [] }
  );
}
```

**Generic Types:**

```typescript
import type {
  LangfusePromptClient,
  TextPromptClient,
  ChatPromptClient,
  ChatMessage,
  ChatMessageWithPlaceholders,
  CreatePromptRequest
} from '@langfuse/client';

// Function with generic prompt client
function processPrompt(prompt: LangfusePromptClient) {
  if (prompt.type === "text") {
    return prompt.compile({ var: "value" });
  } else {
    return prompt.compile({ var: "value" }, {});
  }
}

// Type-safe prompt creation
const textRequest: CreatePromptRequest.Text = {
  name: "test",
  prompt: "Hello {{name}}",
  labels: ["production"]
};

const chatRequest: CreatePromptRequest.Chat = {
  name: "test-chat",
  type: "chat",
  prompt: [
    { role: "system", content: "System" }
  ]
};
```

## Performance Considerations

### Caching

- **Default TTL**: 60 seconds strikes a balance between freshness and performance
- **Production**: Use a longer TTL (300-3600s) for stable prompts
- **Development**: Use a shorter TTL (10-30s) or disable caching (0) for rapid iteration
- **Stale-While-Revalidate**: Returns immediately even on expired cache

### Concurrent Requests

- Multiple concurrent requests to the same expired prompt trigger only one refresh
- Reduces API load and improves response times
- Automatic deduplication of in-flight refresh requests

### Compilation Performance

- Variable substitution is fast (Mustache rendering)
- Placeholder resolution is O(n) where n is the number of messages
- LangChain conversion adds minimal overhead
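The TTL guidance above can be collected in one small helper. This is a sketch under the assumption that the environment is read from `NODE_ENV`; the numbers are this document's suggested ranges, not SDK defaults:

```typescript
// Sketch: choose a cache TTL per environment, following the guidance
// above (long TTL for stable production prompts, none while iterating).
function ttlForEnv(env: string | undefined): number {
  switch (env) {
    case "production":
      return 3600; // stable prompts: up to 1 hour
    case "development":
      return 0;    // always fetch fresh during rapid iteration
    default:
      return 60;   // the documented default TTL
  }
}

// Example usage (hypothetical wiring):
// const prompt = await langfuse.prompt.get("my-prompt", {
//   cacheTtlSeconds: ttlForEnv(process.env.NODE_ENV),
// });
```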

### Best Practices

```typescript
// ✅ Good: Reuse prompt client
const prompt = await langfuse.prompt.get("my-prompt");
const result1 = prompt.compile({ name: "Alice" });
const result2 = prompt.compile({ name: "Bob" });

// ❌ Bad: Fetch repeatedly
const result1 = (await langfuse.prompt.get("my-prompt")).compile({ name: "Alice" });
const result2 = (await langfuse.prompt.get("my-prompt")).compile({ name: "Bob" });

// ✅ Good: Cache for appropriate duration
const stablePrompt = await langfuse.prompt.get("stable", {
  cacheTtlSeconds: 3600 // 1 hour for stable prompts
});

// ✅ Good: Batch operations
const [prompt1, prompt2, prompt3] = await Promise.all([
  langfuse.prompt.get("prompt-1"),
  langfuse.prompt.get("prompt-2"),
  langfuse.prompt.get("prompt-3")
]);

// ✅ Good: Production safety with fallback
const prompt = await langfuse.prompt.get("critical", {
  type: "text",
  fallback: "Default prompt",
  maxRetries: 3,
  cacheTtlSeconds: 300
});
```

## Migration Examples

### From Hardcoded Prompts

**Before:**

```typescript
const systemMessage = "You are a helpful assistant.";
const userMessage = `Hello ${userName}! How can I help?`;
```

**After:**

```typescript
const prompt = await langfuse.prompt.get("greeting", { type: "chat" });
const messages = prompt.compile(
  { user_name: userName },
  {}
);
```

### From Template Strings

**Before:**

```typescript
function generatePrompt(task: string, context: string) {
  return `Generate code for: ${task}

Context: ${context}`;
}
```

**After:**

```typescript
const prompt = await langfuse.prompt.get("code-generator", { type: "text" });
const generated = prompt.compile({ task, context });
```

### From LangChain Direct Usage

**Before:**

```typescript
import { ChatPromptTemplate } from '@langchain/core/prompts';

const template = ChatPromptTemplate.fromMessages([
  ["system", "You are a {role} assistant"],
  ["user", "{query}"]
]);
```

**After:**

```typescript
const prompt = await langfuse.prompt.get("assistant", { type: "chat" });
const template = ChatPromptTemplate.fromMessages(
  prompt.getLangchainPrompt()
);
// Prompt now managed in Langfuse UI with versioning
```