# Prompts

Template system for customizing LLM prompts with support for various formatting options, conditional logic, and dynamic content generation.

## Capabilities

### Base Prompt Templates

Foundation classes for creating and managing prompt templates with variable substitution and validation.
```python { .api }
class BasePromptTemplate:
    """
    Base class for all prompt templates.

    Args:
        metadata: Template metadata and configuration
        template_vars: Variables used in the template
        function_mappings: Functions available for template execution
        **kwargs: Additional template arguments
    """
    def __init__(
        self,
        metadata=None,
        template_vars=None,
        function_mappings=None,
        **kwargs
    ): ...

    def format(self, **kwargs):
        """
        Format the template with the provided variables.

        Args:
            **kwargs: Template variables for substitution

        Returns:
            str: Formatted prompt string
        """

    def format_messages(self, **kwargs):
        """
        Format the template as chat messages.

        Returns:
            List[ChatMessage]: Formatted chat messages
        """

    def get_template_vars(self):
        """Get the list of template variables."""

    def partial_format(self, **kwargs):
        """Partially format with a subset of variables."""
```
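The placeholder discovery behind `get_template_vars` can be illustrated with the standard library alone: `{variable}` names in a format-style template are recoverable with `string.Formatter`. This is a conceptual sketch (the helper name `extract_template_vars` is ours), not the library's actual implementation:

```python
from string import Formatter

def extract_template_vars(template: str) -> list:
    """Collect {placeholder} names from a format-style template string."""
    return [field for _, field, _, _ in Formatter().parse(template) if field]

template_vars = extract_template_vars(
    "Context: {context_str}\nQuery: {query_str}\nAnswer: "
)
print(template_vars)  # ['context_str', 'query_str']
```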

### String Prompt Templates

Simple string-based prompt templates for basic prompt formatting and variable substitution.

```python { .api }
class PromptTemplate(BasePromptTemplate):
    """
    String-based prompt template.

    Args:
        template: Template string with {variable} placeholders
        **kwargs: BasePromptTemplate arguments
    """
    def __init__(self, template, **kwargs): ...

    @classmethod
    def from_template(cls, template_str, **kwargs):
        """Create a prompt template from a string."""

# Legacy alias for backwards compatibility
Prompt = PromptTemplate
```

**String Template Usage Example:**

```python
from llama_index.core.prompts import PromptTemplate

# Basic template
qa_template = PromptTemplate(
    template=(
        "Context information is below:\n"
        "---------------------\n"
        "{context_str}\n"
        "---------------------\n"
        "Given the context information and not prior knowledge, "
        "answer the query.\n"
        "Query: {query_str}\n"
        "Answer: "
    )
)

# Format with variables
formatted_prompt = qa_template.format(
    context_str="LlamaIndex is a data framework for LLM applications.",
    query_str="What is LlamaIndex?",
)

print(formatted_prompt)

# Partial formatting
partial_template = qa_template.partial_format(
    context_str="Predefined context information"
)

# Complete later
final_prompt = partial_template.format(query_str="User's question")
```

### Chat Prompt Templates

Chat-based prompt templates for multi-message conversations with role management and message formatting.

```python { .api }
class ChatPromptTemplate(BasePromptTemplate):
    """
    Chat-based prompt template with multiple message roles.

    Args:
        message_templates: List of ChatMessage templates
        **kwargs: BasePromptTemplate arguments
    """
    def __init__(self, message_templates, **kwargs): ...

    @classmethod
    def from_messages(cls, message_templates, **kwargs):
        """Create a chat template from a message list."""

class ChatMessage:
    """
    Individual chat message with role and content.

    Args:
        role: Message role ("system", "user", "assistant", "tool")
        content: Message content string
        additional_kwargs: Additional message parameters
    """
    def __init__(
        self,
        role,
        content="",
        additional_kwargs=None
    ): ...

class MessageRole:
    """Message role constants."""
    SYSTEM = "system"
    USER = "user"
    ASSISTANT = "assistant"
    TOOL = "tool"
    FUNCTION = "function"
```

**Chat Template Usage Example:**

```python
from llama_index.core.prompts import ChatPromptTemplate
from llama_index.core.llms import ChatMessage, MessageRole

# Create a chat template
chat_template = ChatPromptTemplate(
    message_templates=[
        ChatMessage(
            role=MessageRole.SYSTEM,
            content="You are a helpful AI assistant specializing in {domain}.",
        ),
        ChatMessage(
            role=MessageRole.USER,
            content="Context: {context}\n\nQuestion: {query}",
        ),
    ]
)

# Format as messages
messages = chat_template.format_messages(
    domain="machine learning",
    context="Recent advances in transformer models",
    query="What are the key innovations?",
)

# Use with an LLM (assumes `llm` is an initialized LLM instance)
response = llm.chat(messages)
```

### Selector Prompt Templates

Conditional prompt templates that select the appropriate prompt based on query characteristics or context.

```python { .api }
class SelectorPromptTemplate(BasePromptTemplate):
    """
    Selector prompt template for conditional prompt selection.

    Args:
        default_template: Default template to use
        conditionals: List of (condition, template) pairs
        **kwargs: BasePromptTemplate arguments
    """
    def __init__(
        self,
        default_template,
        conditionals=None,
        **kwargs
    ): ...

    def select(self, **kwargs):
        """Select the appropriate template based on conditions."""

class ConditionalPromptSelector:
    """
    Conditional selector for prompt templates.

    Args:
        default_template: Default template
        conditionals: List of conditional templates
    """
    def __init__(self, default_template, conditionals=None): ...

    def select(self, **kwargs):
        """Select a template based on conditions."""
```

**Selector Template Example:**

```python
from llama_index.core.prompts import SelectorPromptTemplate, PromptTemplate

# Different templates for different query types
technical_template = PromptTemplate(
    "Technical Query: {query}\nProvide detailed technical explanation:\n"
)

simple_template = PromptTemplate(
    "Simple Query: {query}\nProvide easy-to-understand answer:\n"
)

# Selector with conditions
selector_template = SelectorPromptTemplate(
    default_template=simple_template,
    conditionals=[
        (lambda **kwargs: "API" in kwargs.get("query", ""), technical_template),
        (lambda **kwargs: "code" in kwargs.get("query", ""), technical_template),
    ],
)

# Automatically selects the appropriate template
prompt1 = selector_template.format(query="How does the API authentication work?")
prompt2 = selector_template.format(query="What is machine learning?")
```

### Rich Prompt Templates

Advanced prompt templates with rich formatting, structured content, and dynamic generation capabilities.

```python { .api }
class RichPromptTemplate(BasePromptTemplate):
    """
    Rich prompt template with advanced formatting capabilities.

    Args:
        template: Rich template string with advanced placeholders
        format_type: Format type ("markdown", "html", "plain")
        **kwargs: BasePromptTemplate arguments
    """
    def __init__(
        self,
        template,
        format_type="markdown",
        **kwargs
    ): ...

    def format_rich(self, **kwargs):
        """Format with rich content processing."""
```
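Since the class above is only sketched, the `format_type` idea can be made concrete with a self-contained toy: named sections are rendered in the configured output format before substitution. `RichPromptSketch` and its rendering rules are illustrative assumptions, not the library class:

```python
class RichPromptSketch:
    """Toy stand-in for a rich template: renders each named section in the
    configured format, then substitutes it into the base template."""

    def __init__(self, template, format_type="markdown"):
        self.template = template
        self.format_type = format_type

    def _render_section(self, title, body):
        if self.format_type == "markdown":
            return f"### {title}\n{body}"
        if self.format_type == "html":
            return f"<h3>{title}</h3><p>{body}</p>"
        return f"{title}: {body}"  # "plain"

    def format_rich(self, **sections):
        rendered = {
            name: self._render_section(name.replace("_", " ").title(), value)
            for name, value in sections.items()
        }
        return self.template.format(**rendered)

sketch = RichPromptSketch("{context}\n\n{question}", format_type="markdown")
out = sketch.format_rich(
    context="Transformers use attention.",
    question="What is attention?",
)
print(out)
```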

### Pre-built Prompt Templates

Collection of ready-to-use prompt templates for common LlamaIndex operations and use cases.

```python { .api }
# Question-Answering Templates
DEFAULT_TEXT_QA_PROMPT_TMPL = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, answer the query.\n"
    "Query: {query_str}\n"
    "Answer: "
)

DEFAULT_REFINE_PROMPT_TMPL = (
    "The original query is as follows: {query_str}\n"
    "We have provided an existing answer: {existing_answer}\n"
    "We have the opportunity to refine the existing answer "
    "(only if needed) with some more context below.\n"
    "------------\n"
    "{context_msg}\n"
    "------------\n"
    "Given the new context, refine the original answer to better answer the query. "
    "If the context isn't useful, return the original answer.\n"
    "Refined Answer: "
)

# Summary Templates
DEFAULT_SUMMARY_PROMPT_TMPL = (
    "Write a summary of the following. Try to use only the "
    "information provided. Try to include as many key details as possible.\n"
    "\n"
    "{context_str}\n"
    "\n"
    'SUMMARY:"""\n'
)

# Tree Templates
DEFAULT_TREE_SUMMARIZE_PROMPT_TMPL = (
    "Context information from multiple sources is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the information from multiple sources and not prior knowledge, "
    "answer the query.\n"
    "Query: {query_str}\n"
    "Answer: "
)

# Keyword Extraction Templates
DEFAULT_KEYWORD_EXTRACT_TEMPLATE_TMPL = (
    "Some text is provided below. Given the text, extract up to {max_keywords} "
    "keywords from the text. Avoid stopwords.\n"
    "---------------------\n"
    "{text}\n"
    "---------------------\n"
    "Provide keywords in the following comma-separated format: 'KEYWORDS: <keywords>'\n"
)
```

**Pre-built Template Usage:**

```python
from llama_index.core.prompts import (
    DEFAULT_TEXT_QA_PROMPT_TMPL,
    DEFAULT_REFINE_PROMPT_TMPL,
    PromptTemplate,
)

# Use pre-built templates
qa_prompt = PromptTemplate(DEFAULT_TEXT_QA_PROMPT_TMPL)
refine_prompt = PromptTemplate(DEFAULT_REFINE_PROMPT_TMPL)

# Customize a query engine (assumes `index` is an existing index)
query_engine = index.as_query_engine(
    text_qa_template=qa_prompt,
    refine_template=refine_prompt,
)
```

### Custom Prompt Creation

Framework for creating domain-specific and application-specific prompt templates.

```python { .api }
class CustomPromptTemplate(BasePromptTemplate):
    """
    Custom prompt template with advanced features.

    Args:
        template_func: Function that generates the template string
        required_vars: Required template variables
        optional_vars: Optional template variables
        validation_func: Function to validate template variables
    """
    def __init__(
        self,
        template_func,
        required_vars=None,
        optional_vars=None,
        validation_func=None,
        **kwargs
    ): ...

    def format(self, **kwargs):
        """Format using the custom template function."""
        if self.validation_func:
            self.validation_func(**kwargs)
        return self.template_func(**kwargs)
```
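Because `format` above only delegates to `template_func` and `validation_func`, the pattern can be exercised standalone. The helper names below (`build_review_prompt`, `require_code`, `FunctionTemplate`) are illustrative stand-ins, not library APIs:

```python
def build_review_prompt(code, focus="correctness", **_):
    """Generate a template string from keyword arguments."""
    return f"Review the following code with a focus on {focus}:\n\n{code}\n"

def require_code(**kwargs):
    """Validation hook: reject calls with no code supplied."""
    if not kwargs.get("code"):
        raise ValueError("'code' is a required template variable")

class FunctionTemplate:
    """Minimal stand-in mirroring the format() logic shown above."""

    def __init__(self, template_func, validation_func=None):
        self.template_func = template_func
        self.validation_func = validation_func

    def format(self, **kwargs):
        if self.validation_func:
            self.validation_func(**kwargs)
        return self.template_func(**kwargs)

tmpl = FunctionTemplate(build_review_prompt, validation_func=require_code)
prompt = tmpl.format(code="def add(a, b): return a + b", focus="readability")
print(prompt)
```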

**Custom Template Example:**

````python
from llama_index.core.prompts import BasePromptTemplate

class CodeAnalysisPrompt(BasePromptTemplate):
    """Custom prompt for code analysis tasks."""

    def __init__(self, language="python", **kwargs):
        self.language = language
        super().__init__(**kwargs)

    def format(self, code, question, **kwargs):
        """Format a code analysis prompt."""
        return f"""
Analyze the following {self.language} code:

```{self.language}
{code}
```

Question: {question}

Provide a detailed analysis including:
1. Code functionality
2. Potential issues
3. Improvement suggestions
4. Best practices

Analysis:
"""

# Use the custom prompt
code_prompt = CodeAnalysisPrompt(language="python")

formatted = code_prompt.format(
    code="def factorial(n): return 1 if n <= 1 else n * factorial(n - 1)",
    question="Is this implementation efficient?",
)
````

### Dynamic Prompt Generation

Advanced prompt generation with context-aware content and adaptive formatting.

```python { .api }
class DynamicPromptTemplate(BasePromptTemplate):
    """
    Dynamic prompt template with context-aware generation.

    Args:
        base_template: Base template string
        dynamic_sections: Dictionary of dynamic content generators
        context_analyzer: Function to analyze context for adaptation
    """
    def __init__(
        self,
        base_template,
        dynamic_sections=None,
        context_analyzer=None,
        **kwargs
    ): ...

    def format(self, **kwargs):
        """Format with dynamic content generation."""
        # Analyze the context
        context_info = self.context_analyzer(**kwargs) if self.context_analyzer else {}

        # Generate dynamic sections (dynamic_sections may be None)
        dynamic_content = {}
        for section_name, generator in (self.dynamic_sections or {}).items():
            dynamic_content[section_name] = generator(context_info, **kwargs)

        # Format the final template
        return self.base_template.format(**kwargs, **dynamic_content)
```
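The analyze-then-generate flow above can be run end to end with plain `str.format`. The analyzer and section generator below (`analyze_context`, `tone_section`) are illustrative assumptions following the hook signatures in the sketch:

```python
def analyze_context(query="", **_):
    """Context analyzer: classify the query as technical or not."""
    return {"is_technical": any(t in query.lower() for t in ("api", "code"))}

def tone_section(ctx, **_):
    """Dynamic section generator: adapt instructions to the context."""
    return ("Use precise, technical language."
            if ctx["is_technical"]
            else "Use simple, accessible language.")

base_template = "{tone}\n\nQuestion: {query}\nAnswer:"
dynamic_sections = {"tone": tone_section}

def format_dynamic(**kwargs):
    """Mirror of DynamicPromptTemplate.format from the sketch above."""
    ctx = analyze_context(**kwargs)
    dynamic = {name: gen(ctx, **kwargs) for name, gen in dynamic_sections.items()}
    return base_template.format(**kwargs, **dynamic)

print(format_dynamic(query="How does the API handle retries?"))
```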

### Prompt Optimization and Testing

Tools for optimizing prompt performance and testing template variations.

```python { .api }
class PromptOptimizer:
    """
    Prompt optimization utilities.

    Args:
        evaluation_fn: Function to evaluate prompt performance
        templates: List of template variations to test
    """
    def __init__(self, evaluation_fn, templates): ...

    def optimize(self, test_queries, **kwargs):
        """Find the optimal template through evaluation."""

    def ab_test(self, template_a, template_b, test_data):
        """A/B test two template variations."""

class PromptValidator:
    """Validate prompt templates for common issues."""

    def validate_variables(self, template, required_vars):
        """Validate that the template has the required variables."""

    def check_length(self, template, max_length=None):
        """Check template length constraints."""

    def analyze_clarity(self, template):
        """Analyze template clarity and readability."""
```

**Prompt Optimization Example:**

```python
from llama_index.core.prompts import PromptTemplate

# Create template variations
templates = [
    PromptTemplate("Answer: {query}"),
    PromptTemplate("Based on context: {context}\nQ: {query}\nA:"),
    PromptTemplate("Context: {context}\nQuestion: {query}\nDetailed Answer:"),
]

def evaluate_template(template, test_cases):
    """Evaluate template performance.

    Assumes `llm` is an initialized LLM, `evaluate_response` is a
    user-supplied scoring function, and each test case is a dict with
    "query", "context", and "expected" keys.
    """
    scores = []
    for case in test_cases:
        # Format the template
        prompt = template.format(**case)

        # Generate a response and score it
        response = llm.complete(prompt)
        score = evaluate_response(response.text, case["expected"])
        scores.append(score)

    return sum(scores) / len(scores)

# Find the best template
best_template = None
best_score = 0

for template in templates:
    score = evaluate_template(template, test_cases)
    if score > best_score:
        best_score = score
        best_template = template

print(f"Best template score: {best_score}")
```
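The `PromptValidator` checks sketched above can be approximated with small standalone helpers. These functions are hypothetical illustrations of the idea, not the library's methods:

```python
from string import Formatter

def validate_variables(template: str, required_vars: set) -> list:
    """Return the required variables missing from the template string."""
    present = {f for _, f, _, _ in Formatter().parse(template) if f}
    return sorted(required_vars - present)

def check_length(template: str, max_length: int) -> bool:
    """True if the raw template fits within max_length characters."""
    return len(template) <= max_length

missing = validate_variables("Q: {query}\nA:", {"query", "context"})
print(missing)  # ['context']
```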

### Multi-Language and Localization

Support for multi-language prompts and localized content generation.

```python { .api }
class MultiLanguagePromptTemplate(BasePromptTemplate):
    """
    Multi-language prompt template with localization support.

    Args:
        templates: Dictionary mapping language codes to templates
        default_language: Default language if none is specified
        **kwargs: BasePromptTemplate arguments
    """
    def __init__(
        self,
        templates,
        default_language="en",
        **kwargs
    ): ...

    def format(self, language=None, **kwargs):
        """Format the template in the specified language."""
        lang = language or self.default_language
        template = self.templates.get(lang, self.templates[self.default_language])
        return template.format(**kwargs)
```
571
572
**Multi-Language Example:**
573
574
```python
575
from llama_index.core.prompts import MultiLanguagePromptTemplate, PromptTemplate
576
577
# Multi-language templates
578
ml_template = MultiLanguagePromptTemplate(
579
templates={
580
"en": PromptTemplate("Question: {query}\nAnswer:"),
581
"es": PromptTemplate("Pregunta: {query}\nRespuesta:"),
582
"fr": PromptTemplate("Question: {query}\nRéponse:"),
583
"de": PromptTemplate("Frage: {query}\nAntwort:")
584
},
585
default_language="en"
586
)
587
588
# Format in different languages
589
english_prompt = ml_template.format(query="What is AI?", language="en")
590
spanish_prompt = ml_template.format(query="¿Qué es la IA?", language="es")
591
```

### Integration with Query Engines

Seamless integration patterns for using custom prompts with LlamaIndex query engines and retrievers.

```python { .api }
# Update prompts in an existing query engine
query_engine.update_prompts({
    "response_synthesizer:text_qa_template": custom_qa_template,
    "response_synthesizer:refine_template": custom_refine_template,
})

# Get the current prompts
current_prompts = query_engine.get_prompts()

# Use custom prompts at creation time
query_engine = index.as_query_engine(
    text_qa_template=custom_qa_template,
    refine_template=custom_refine_template,
)
```

**Query Engine Integration Example:**

```python
from llama_index.core.prompts import PromptTemplate

# Create a domain-specific prompt
medical_qa_template = PromptTemplate(
    "You are a medical AI assistant. Based on the medical literature below:\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Answer the medical question. Always include disclaimers about consulting healthcare professionals.\n"
    "Question: {query_str}\n"
    "Medical Response: "
)

# Create a specialized query engine (assumes `index` is an existing index)
medical_query_engine = index.as_query_engine(
    text_qa_template=medical_qa_template,
    similarity_top_k=5,
)

# Use for medical queries
response = medical_query_engine.query("What are the symptoms of diabetes?")
```
638
```