tessl/pypi-llama-index

Interface between LLMs and your data for building retrieval-augmented generation (RAG) applications


docs/prompts.md

Prompts

Template system for customizing LLM prompts with support for various formatting options, conditional logic, and dynamic content generation.

Capabilities

Base Prompt Templates

Foundation classes for creating and managing prompt templates with variable substitution and validation.

class BasePromptTemplate:
    """
    Base class for all prompt templates.
    
    Args:
        metadata: Template metadata and configuration
        template_vars: Variables used in the template
        function_mappings: Functions available for template execution
        **kwargs: Additional template arguments
    """
    def __init__(
        self,
        metadata=None,
        template_vars=None,
        function_mappings=None,
        **kwargs
    ): ...
    
    def format(self, **kwargs):
        """
        Format template with provided variables.
        
        Args:
            **kwargs: Template variables for substitution
            
        Returns:
            str: Formatted prompt string
        """
    
    def format_messages(self, **kwargs):
        """
        Format template as chat messages.
        
        Returns:
            List[ChatMessage]: Formatted chat messages
        """
    
    def get_template_vars(self):
        """Get list of template variables."""
    
    def partial_format(self, **kwargs):
        """Partial format with subset of variables."""
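To make the semantics of `get_template_vars` and `partial_format` concrete, here is a library-free sketch in plain Python built on `string.Formatter`. It illustrates the behavior described above, not LlamaIndex's actual implementation.

```python
from string import Formatter

def get_template_vars(template: str) -> list[str]:
    # Collect {placeholder} names in order of first appearance
    seen = []
    for _, field, _, _ in Formatter().parse(template):
        if field and field not in seen:
            seen.append(field)
    return seen

def partial_format(template: str, **kwargs) -> str:
    # Substitute only the provided variables, re-emitting the rest
    # as literal {placeholders} so they can be filled in later
    remaining = {v: "{" + v + "}" for v in get_template_vars(template) if v not in kwargs}
    return template.format(**kwargs, **remaining)

template = "Context: {context_str}\nQuery: {query_str}"
print(get_template_vars(template))                    # ['context_str', 'query_str']
print(partial_format(template, context_str="docs"))   # Context: docs\nQuery: {query_str}
```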

String Prompt Templates

Simple string-based prompt templates for basic prompt formatting and variable substitution.

class PromptTemplate(BasePromptTemplate):
    """
    String-based prompt template.
    
    Args:
        template: Template string with {variable} placeholders
        **kwargs: BasePromptTemplate arguments
    """
    def __init__(self, template, **kwargs): ...
    
    @classmethod
    def from_template(cls, template_str, **kwargs):
        """Create prompt template from string."""

# Legacy alias for backwards compatibility
Prompt = PromptTemplate

String Template Usage Example:

from llama_index.core.prompts import PromptTemplate

# Basic template
qa_template = PromptTemplate(
    template=(
        "Context information is below:\n"
        "---------------------\n"
        "{context_str}\n"
        "---------------------\n"
        "Given the context information and not prior knowledge, "
        "answer the query.\n"
        "Query: {query_str}\n"
        "Answer: "
    )
)

# Format with variables
formatted_prompt = qa_template.format(
    context_str="LlamaIndex is a data framework for LLM applications.",
    query_str="What is LlamaIndex?"
)

print(formatted_prompt)

# Partial formatting
partial_template = qa_template.partial_format(
    context_str="Predefined context information"
)

# Complete later
final_prompt = partial_template.format(query_str="User's question")

Chat Prompt Templates

Chat-based prompt templates for multi-message conversations with role management and message formatting.

class ChatPromptTemplate(BasePromptTemplate):
    """
    Chat-based prompt template with multiple message roles.
    
    Args:
        message_templates: List of ChatMessage templates
        **kwargs: BasePromptTemplate arguments
    """
    def __init__(self, message_templates, **kwargs): ...
    
    @classmethod
    def from_messages(cls, message_templates, **kwargs):
        """Create chat template from message list."""

class ChatMessage:
    """
    Individual chat message with role and content.
    
    Args:
        role: Message role ("system", "user", "assistant", "tool")
        content: Message content string
        additional_kwargs: Additional message parameters
    """
    def __init__(
        self,
        role,
        content="",
        additional_kwargs=None
    ): ...

class MessageRole:
    """Message role constants."""
    SYSTEM = "system"
    USER = "user"
    ASSISTANT = "assistant"
    TOOL = "tool"
    FUNCTION = "function"

Chat Template Usage Example:

from llama_index.core.prompts import ChatPromptTemplate
from llama_index.core.llms import ChatMessage, MessageRole

# Create chat template
chat_template = ChatPromptTemplate(
    message_templates=[
        ChatMessage(
            role=MessageRole.SYSTEM,
            content="You are a helpful AI assistant specializing in {domain}."
        ),
        ChatMessage(
            role=MessageRole.USER,
            content="Context: {context}\n\nQuestion: {query}"
        )
    ]
)

# Format as messages
messages = chat_template.format_messages(
    domain="machine learning",
    context="Recent advances in transformer models",
    query="What are the key innovations?"
)

# Use with an LLM (assumes `llm` is an already-initialized LLM client)
response = llm.chat(messages)

Selector Prompt Templates

Conditional prompt templates that select appropriate prompts based on query characteristics or context.

class SelectorPromptTemplate(BasePromptTemplate):
    """
    Selector prompt template for conditional prompt selection.
    
    Args:
        default_template: Default template to use
        conditionals: List of (condition, template) pairs
        **kwargs: BasePromptTemplate arguments
    """
    def __init__(
        self,
        default_template,
        conditionals=None,
        **kwargs
    ): ...
    
    def select(self, **kwargs):
        """Select appropriate template based on conditions."""

class ConditionalPromptSelector:
    """
    Conditional selector for prompt templates.
    
    Args:
        default_template: Default template
        conditionals: List of conditional templates
    """
    def __init__(self, default_template, conditionals=None): ...
    
    def select(self, **kwargs):
        """Select template based on conditions."""

Selector Template Example:

from llama_index.core.prompts import SelectorPromptTemplate, PromptTemplate

# Different templates for different query types
technical_template = PromptTemplate(
    "Technical Query: {query}\nProvide detailed technical explanation:\n"
)

simple_template = PromptTemplate(
    "Simple Query: {query}\nProvide easy-to-understand answer:\n"
)

# Selector with conditions
selector_template = SelectorPromptTemplate(
    default_template=simple_template,
    conditionals=[
        (lambda **kwargs: "API" in kwargs.get("query", ""), technical_template),
        (lambda **kwargs: "code" in kwargs.get("query", ""), technical_template),
    ]
)

# Automatically selects appropriate template
prompt1 = selector_template.format(query="How does the API authentication work?")
prompt2 = selector_template.format(query="What is machine learning?")
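The dispatch behind `select` can be sketched without the library. This plain-Python illustration assumes conditionals are (predicate, template) pairs, matching the API above; the string "templates" stand in for real template objects.

```python
def select(default, conditionals, **kwargs):
    # Return the first template whose predicate matches, else the default
    for condition, template in conditionals:
        if condition(**kwargs):
            return template
    return default

conditionals = [
    (lambda **kw: "API" in kw.get("query", ""), "technical_template"),
]

print(select("simple_template", conditionals, query="How does the API work?"))
# technical_template
```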

Rich Prompt Templates

Advanced prompt templates with rich formatting, structured content, and dynamic generation capabilities.

class RichPromptTemplate(BasePromptTemplate):
    """
    Rich prompt template with advanced formatting capabilities.
    
    Args:
        template: Rich template string with advanced placeholders
        format_type: Format type ("markdown", "html", "plain")
        **kwargs: BasePromptTemplate arguments
    """
    def __init__(
        self,
        template,
        format_type="markdown",
        **kwargs
    ): ...
    
    def format_rich(self, **kwargs):
        """Format with rich content processing."""
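As a rough illustration of format-type-aware rendering, here is a library-free sketch. The escaping rule for `"html"` is an assumption made for the example, not `RichPromptTemplate`'s documented behavior.

```python
import html

def format_rich(template: str, format_type: str = "markdown", **kwargs) -> str:
    # Escape substituted values for the target format;
    # "markdown" and "plain" pass values through unchanged here
    if format_type == "html":
        kwargs = {k: html.escape(str(v)) for k, v in kwargs.items()}
    return template.format(**kwargs)

print(format_rich("<p>{answer}</p>", format_type="html", answer="a < b"))
# <p>a &lt; b</p>
```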

Pre-built Prompt Templates

Collection of ready-to-use prompt templates for common LlamaIndex operations and use cases.

# Question-Answering Templates
DEFAULT_TEXT_QA_PROMPT_TMPL = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, answer the query.\n"
    "Query: {query_str}\n"
    "Answer: "
)

DEFAULT_REFINE_PROMPT_TMPL = (
    "The original query is as follows: {query_str}\n"
    "We have provided an existing answer: {existing_answer}\n"
    "We have the opportunity to refine the existing answer "
    "(only if needed) with some more context below.\n"
    "------------\n"
    "{context_msg}\n"
    "------------\n"
    "Given the new context, refine the original answer to better answer the query. "
    "If the context isn't useful, return the original answer.\n"
    "Refined Answer: "
)

# Summary Templates
DEFAULT_SUMMARY_PROMPT_TMPL = (
    "Write a summary of the following. Try to use only the "
    "information provided. Try to include as many key details as possible.\n"
    "\n"
    "{context_str}\n"
    "\n"
    'SUMMARY:"""\n'
)

# Tree Templates
DEFAULT_TREE_SUMMARIZE_PROMPT_TMPL = (
    "Context information from multiple sources is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the information from multiple sources and not prior knowledge, "
    "answer the query.\n"
    "Query: {query_str}\n"
    "Answer: "
)

# Keyword Extraction Templates
DEFAULT_KEYWORD_EXTRACT_TEMPLATE_TMPL = (
    "Some text is provided below. Given the text, extract up to {max_keywords} "
    "keywords from the text. Avoid stopwords.\n"
    "---------------------\n"
    "{text}\n"
    "---------------------\n"
    "Provide keywords in the following comma-separated format: 'KEYWORDS: <keywords>'\n"
)
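The keyword template asks the model to reply in the form `KEYWORDS: <keywords>`. A small hypothetical parser for that response format might look like:

```python
def parse_keywords(response_text: str) -> list[str]:
    # Find the "KEYWORDS: ..." line and split it on commas
    for line in response_text.splitlines():
        if line.strip().upper().startswith("KEYWORDS:"):
            raw = line.split(":", 1)[1]
            return [kw.strip() for kw in raw.split(",") if kw.strip()]
    return []

print(parse_keywords("KEYWORDS: retrieval, embeddings, vector index"))
# ['retrieval', 'embeddings', 'vector index']
```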

Pre-built Template Usage:

from llama_index.core.prompts import (
    DEFAULT_TEXT_QA_PROMPT_TMPL,
    DEFAULT_REFINE_PROMPT_TMPL,
    PromptTemplate
)

# Use pre-built templates
qa_prompt = PromptTemplate(DEFAULT_TEXT_QA_PROMPT_TMPL)
refine_prompt = PromptTemplate(DEFAULT_REFINE_PROMPT_TMPL)

# Customize with query engine
query_engine = index.as_query_engine(
    text_qa_template=qa_prompt,
    refine_template=refine_prompt
)

Custom Prompt Creation

Framework for creating domain-specific and application-specific prompt templates.

class CustomPromptTemplate(BasePromptTemplate):
    """
    Custom prompt template with advanced features.
    
    Args:
        template_func: Function that generates template string
        required_vars: Required template variables
        optional_vars: Optional template variables
        validation_func: Function to validate template variables
    """
    def __init__(
        self,
        template_func,
        required_vars=None,
        optional_vars=None,
        validation_func=None,
        **kwargs
    ): ...
    
    def format(self, **kwargs):
        """Format using custom template function."""
        if self.validation_func:
            self.validation_func(**kwargs)
        
        return self.template_func(**kwargs)

Custom Template Example:

from llama_index.core.prompts import BasePromptTemplate

class CodeAnalysisPrompt(BasePromptTemplate):
    """Custom prompt for code analysis tasks."""
    
    def __init__(self, language="python", **kwargs):
        self.language = language
        super().__init__(**kwargs)
    
    def format(self, code, question, **kwargs):
        """Format code analysis prompt."""
        return f"""
Analyze the following {self.language} code:

```{self.language}
{code}
```

Question: {question}

Provide a detailed analysis including:
1. Code functionality
2. Potential issues
3. Improvement suggestions
4. Best practices

Analysis: """

# Use custom prompt
code_prompt = CodeAnalysisPrompt(language="python")

formatted = code_prompt.format(
    code="def factorial(n): return 1 if n <= 1 else n * factorial(n-1)",
    question="Is this implementation efficient?"
)

Dynamic Prompt Generation

Advanced prompt generation with context-aware content and adaptive formatting.

class DynamicPromptTemplate(BasePromptTemplate):
    """
    Dynamic prompt template with context-aware generation.
    
    Args:
        base_template: Base template string
        dynamic_sections: Dictionary of dynamic content generators
        context_analyzer: Function to analyze context for adaptation
    """
    def __init__(
        self,
        base_template,
        dynamic_sections=None,
        context_analyzer=None,
        **kwargs
    ): ...
    
    def format(self, **kwargs):
        """Format with dynamic content generation."""
        # Analyze context
        context_info = self.context_analyzer(**kwargs) if self.context_analyzer else {}
        
        # Generate dynamic sections (tolerate a missing mapping)
        dynamic_content = {}
        for section_name, generator in (self.dynamic_sections or {}).items():
            dynamic_content[section_name] = generator(context_info, **kwargs)
        
        # Format final template
        return self.base_template.format(**kwargs, **dynamic_content)
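The analyze → generate → format flow above can be sketched without the class, using plain `str.format`; the `hints` section and the toy analyzer below are illustrative assumptions, not part of the API.

```python
base_template = "Query: {query}\n{hints}\nAnswer:"

def analyze_context(**kwargs):
    # Toy analyzer: flag queries that mention an API
    return {"is_technical": "API" in kwargs.get("query", "")}

def hints_section(context_info, **kwargs):
    # Adapt the prompt based on the analysis
    return "Include code samples." if context_info["is_technical"] else "Keep it simple."

def format_dynamic(**kwargs):
    context_info = analyze_context(**kwargs)
    dynamic = {"hints": hints_section(context_info, **kwargs)}
    return base_template.format(**kwargs, **dynamic)

print(format_dynamic(query="How do I call the API?"))
```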

Prompt Optimization and Testing

Tools for optimizing prompt performance and testing template variations.

class PromptOptimizer:
    """
    Prompt optimization utilities.
    
    Args:
        evaluation_fn: Function to evaluate prompt performance
        templates: List of template variations to test
    """
    def __init__(self, evaluation_fn, templates): ...
    
    def optimize(self, test_queries, **kwargs):
        """Find optimal template through evaluation."""
    
    def ab_test(self, template_a, template_b, test_data):
        """A/B test two template variations."""

class PromptValidator:
    """Validate prompt templates for common issues."""
    
    def validate_variables(self, template, required_vars):
        """Validate template has required variables."""
    
    def check_length(self, template, max_length=None):
        """Check template length constraints."""
    
    def analyze_clarity(self, template):
        """Analyze template clarity and readability."""
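A minimal sketch of the check `validate_variables` performs, again using `string.Formatter` (an assumption about the implementation, shown for clarity):

```python
from string import Formatter

def validate_variables(template: str, required_vars: set[str]) -> set[str]:
    # Return the required variables the template is missing
    present = {field for _, field, _, _ in Formatter().parse(template) if field}
    return required_vars - present

missing = validate_variables("Q: {query}\nA:", {"query", "context"})
print(missing)  # {'context'}
```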

Prompt Optimization Example:

from llama_index.core.prompts import PromptTemplate

# Create template variations
templates = [
    PromptTemplate("Answer: {query}"),
    PromptTemplate("Based on context: {context}\nQ: {query}\nA:"),
    PromptTemplate("Context: {context}\nQuestion: {query}\nDetailed Answer:")
]

def evaluate_template(template, test_cases):
    """Evaluate template performance."""
    scores = []
    for case in test_cases:
        # Format template
        prompt = template.format(**case)
        
        # Generate a response and score it (assumes `llm` and
        # `evaluate_response` are defined elsewhere)
        response = llm.complete(prompt)
        score = evaluate_response(response.text, case["expected"])
        scores.append(score)
    
    return sum(scores) / len(scores)

# Find best template
best_template = None
best_score = 0

for template in templates:
    score = evaluate_template(template, test_cases)
    if score > best_score:
        best_score = score
        best_template = template

print(f"Best template score: {best_score}")

Multi-Language and Localization

Support for multi-language prompts and localized content generation.

class MultiLanguagePromptTemplate(BasePromptTemplate):
    """
    Multi-language prompt template with localization support.
    
    Args:
        templates: Dictionary mapping language codes to templates
        default_language: Default language if not specified
        **kwargs: BasePromptTemplate arguments
    """
    def __init__(
        self,
        templates,
        default_language="en",
        **kwargs
    ): ...
    
    def format(self, language=None, **kwargs):
        """Format template in specified language."""
        lang = language or self.default_language
        template = self.templates.get(lang, self.templates[self.default_language])
        return template.format(**kwargs)

Multi-Language Example:

from llama_index.core.prompts import MultiLanguagePromptTemplate, PromptTemplate

# Multi-language templates
ml_template = MultiLanguagePromptTemplate(
    templates={
        "en": PromptTemplate("Question: {query}\nAnswer:"),
        "es": PromptTemplate("Pregunta: {query}\nRespuesta:"),
        "fr": PromptTemplate("Question: {query}\nRéponse:"),
        "de": PromptTemplate("Frage: {query}\nAntwort:")
    },
    default_language="en"
)

# Format in different languages
english_prompt = ml_template.format(query="What is AI?", language="en")
spanish_prompt = ml_template.format(query="¿Qué es la IA?", language="es")

Integration with Query Engines

Seamless integration patterns for using custom prompts with LlamaIndex query engines and retrievers.

# Update prompts in existing query engines
query_engine.update_prompts({
    "response_synthesizer:text_qa_template": custom_qa_template,
    "response_synthesizer:refine_template": custom_refine_template
})

# Get current prompts
current_prompts = query_engine.get_prompts()

# Use custom prompts during creation
query_engine = index.as_query_engine(
    text_qa_template=custom_qa_template,
    refine_template=custom_refine_template
)

Query Engine Integration Example:

from llama_index.core.prompts import PromptTemplate

# Create domain-specific prompt
medical_qa_template = PromptTemplate(
    "You are a medical AI assistant. Based on the medical literature below:\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Answer the medical question. Always include disclaimers about consulting healthcare professionals.\n"
    "Question: {query_str}\n"
    "Medical Response: "
)

# Create specialized query engine
medical_query_engine = index.as_query_engine(
    text_qa_template=medical_qa_template,
    similarity_top_k=5
)

# Use for medical queries
response = medical_query_engine.query("What are the symptoms of diabetes?")

Install with Tessl CLI

npx tessl i tessl/pypi-llama-index
