tessl/pypi-llama-index-core

Interface between LLMs and your data

Workspace: tessl
Visibility: Public
Describes: pkg:pypi/llama-index-core@0.13.x

To install, run

npx @tessl/cli install tessl/pypi-llama-index-core@0.13.0


LlamaIndex Core

LlamaIndex Core provides the foundational framework for building LLM applications, particularly RAG (Retrieval-Augmented Generation) systems. It includes essential abstractions for LLMs, vector stores, embeddings, storage, and callbacks that serve as building blocks for data-driven LLM applications. The core library is designed for extensibility through subclassing and works with over 300 LlamaIndex integration packages.

Package Information

  • Package Name: llama-index-core
  • Language: Python
  • Installation: pip install llama-index-core
  • Version: 0.13.4
  • License: MIT

Core Imports

import llama_index.core

Common imports for building RAG applications:

from llama_index.core import VectorStoreIndex, Document, StorageContext
from llama_index.core import Settings
from llama_index.core.llms import LLM
from llama_index.core.embeddings import BaseEmbedding

Basic Usage

from llama_index.core import VectorStoreIndex, Document, Settings
from llama_index.core.llms import MockLLM
from llama_index.core.embeddings import MockEmbedding

# Configure global settings
Settings.llm = MockLLM()
Settings.embed_model = MockEmbedding(embed_dim=384)

# Create documents
documents = [
    Document(text="This is a sample document about machine learning."),
    Document(text="LlamaIndex helps build RAG applications with LLMs."),
    Document(text="Vector stores enable semantic search over documents.")
]

# Create vector store index
index = VectorStoreIndex.from_documents(documents)

# Query the index
query_engine = index.as_query_engine()
response = query_engine.query("What is LlamaIndex?")
print(response.response)

# Use as retriever
retriever = index.as_retriever(similarity_top_k=2)
nodes = retriever.retrieve("machine learning")
for node in nodes:
    print(f"Score: {node.score}, Text: {node.text}")

Architecture

LlamaIndex Core follows a modular architecture built around several key abstractions:

  • Documents & Nodes: Core data structures representing textual content with metadata
  • Indices: Storage and retrieval abstractions (Vector, Tree, Keyword, Graph-based)
  • LLMs & Embeddings: Pluggable language model and embedding interfaces
  • Query Engines: High-level interfaces for question-answering over data
  • Retrievers: Components for finding relevant information from indices
  • Settings: Global configuration system for LLMs, embeddings, and other components

This design enables composable RAG applications where developers can mix and match components based on their specific requirements, while the extensive integration ecosystem provides concrete implementations for popular services.
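
As a minimal sketch of this mix-and-match style (using the mock LLM and embedding model so it runs offline; the document text is illustrative), a retriever, a node postprocessor, and a query engine can be assembled explicitly instead of via index.as_query_engine():

from llama_index.core import VectorStoreIndex, Document, Settings
from llama_index.core.embeddings import MockEmbedding
from llama_index.core.llms import MockLLM
from llama_index.core.postprocessor import SimilarityPostprocessor
from llama_index.core.query_engine import RetrieverQueryEngine

Settings.llm = MockLLM()
Settings.embed_model = MockEmbedding(embed_dim=384)

index = VectorStoreIndex.from_documents(
    [Document(text="LlamaIndex composes retrievers, postprocessors, and synthesizers.")]
)

# Assemble the pipeline by hand: retriever -> postprocessor -> synthesizer
query_engine = RetrieverQueryEngine.from_args(
    retriever=index.as_retriever(similarity_top_k=3),
    node_postprocessors=[SimilarityPostprocessor(similarity_cutoff=0.0)],
)
print(query_engine.query("What does LlamaIndex compose?"))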

Capabilities

Document Processing & Node Management

Core data structures and utilities for handling documents, creating nodes, and managing textual content with metadata and relationships.

class Document:
    def __init__(self, text: str, metadata: Optional[dict] = None, **kwargs): ...

class TextNode:
    def __init__(self, text: str, metadata: Optional[dict] = None, **kwargs): ...

class NodeWithScore:
    def __init__(self, node: BaseNode, score: Optional[float] = None): ...

Documents & Nodes
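
A brief sketch of these structures in use (the metadata values are illustrative):

from llama_index.core import Document
from llama_index.core.schema import TextNode, NodeWithScore

doc = Document(
    text="LlamaIndex turns raw text into queryable indices.",
    metadata={"source": "notes.txt"},  # illustrative metadata
)

# Nodes are the chunk-level unit that indices and retrievers operate on
node = TextNode(text=doc.text, metadata=doc.metadata)

# Retrievers return nodes wrapped with a relevance score
scored = NodeWithScore(node=node, score=0.87)
print(scored.score, scored.node.get_content())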

Index Construction & Management

Multiple index types for different data organization patterns, including vector-based semantic search, tree structures, keyword tables, and knowledge graphs.

class VectorStoreIndex:
    @classmethod
    def from_documents(cls, documents: Sequence[Document], **kwargs) -> "VectorStoreIndex": ...
    def as_query_engine(self, **kwargs) -> BaseQueryEngine: ...
    def as_retriever(self, **kwargs) -> BaseRetriever: ...

class TreeIndex:
    @classmethod 
    def from_documents(cls, documents: Sequence[Document], **kwargs) -> "TreeIndex": ...

class KeywordTableIndex:
    @classmethod
    def from_documents(cls, documents: Sequence[Document], **kwargs) -> "KeywordTableIndex": ...

Indices
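
A short sketch building two index types over the same documents; mock models keep it offline, though KeywordTableIndex normally relies on a real LLM for keyword extraction:

from llama_index.core import Document, KeywordTableIndex, Settings, VectorStoreIndex
from llama_index.core.embeddings import MockEmbedding
from llama_index.core.llms import MockLLM

Settings.llm = MockLLM()
Settings.embed_model = MockEmbedding(embed_dim=384)

docs = [Document(text="Vector stores enable semantic search over documents.")]

vector_index = VectorStoreIndex.from_documents(docs)    # embedding-based lookup
keyword_index = KeywordTableIndex.from_documents(docs)  # keyword-table lookup

print(vector_index.as_retriever().retrieve("semantic search"))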

Query Engines & Question Answering

High-level interfaces for question-answering, including basic retrieval engines, multi-step reasoning, routing, and specialized SQL/pandas query engines.

class BaseQueryEngine:
    def query(self, str_or_query_bundle: Union[str, QueryBundle]) -> RESPONSE_TYPE: ...

class RetrieverQueryEngine:
    def __init__(self, retriever: BaseRetriever, response_synthesizer: Optional[BaseSynthesizer] = None): ...

def get_response_synthesizer(**kwargs) -> BaseSynthesizer: ...

Query Engines
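
A sketch pairing a retriever with an explicit response synthesizer, assuming the index built in Basic Usage; "compact" is one of the standard response modes:

from llama_index.core import get_response_synthesizer
from llama_index.core.query_engine import RetrieverQueryEngine

synthesizer = get_response_synthesizer(response_mode="compact")
engine = RetrieverQueryEngine(
    retriever=index.as_retriever(),
    response_synthesizer=synthesizer,
)
response = engine.query("How does retrieval-augmented generation work?")
print(response.response)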

Retrieval Systems

Components for finding and ranking relevant information from indices, including vector similarity, keyword matching, and advanced retrieval strategies.

class BaseRetriever:
    def retrieve(self, str_or_query_bundle: Union[str, QueryBundle]) -> List[NodeWithScore]: ...

class VectorIndexRetriever:
    def __init__(self, index: VectorStoreIndex, similarity_top_k: int = 2, **kwargs): ...

class RecursiveRetriever:
    def __init__(self, root_id: str, retriever_dict: Dict[str, BaseRetriever]): ...

Retrievers
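
For example, a VectorIndexRetriever can be constructed directly rather than via index.as_retriever() (again assuming an existing index):

from llama_index.core.retrievers import VectorIndexRetriever

retriever = VectorIndexRetriever(index=index, similarity_top_k=2)
for node_with_score in retriever.retrieve("machine learning"):
    print(node_with_score.score, node_with_score.node.get_content())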

LLM & Embedding Interfaces

Pluggable interfaces for language models and embedding systems, supporting both synchronous and asynchronous operations with extensive customization options.

class LLM:
    def complete(self, prompt: str, **kwargs) -> CompletionResponse: ...
    def chat(self, messages: Sequence[ChatMessage], **kwargs) -> ChatResponse: ...

class BaseEmbedding:
    def get_text_embedding(self, text: str) -> List[float]: ...
    def get_text_embeddings(self, texts: List[str]) -> List[List[float]]: ...

class MockLLM(CustomLLM): ...
class MockEmbedding(BaseEmbedding): ...

LLMs & Embeddings
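
The mock implementations make these interfaces easy to exercise without network access:

from llama_index.core.embeddings import MockEmbedding
from llama_index.core.llms import ChatMessage, MockLLM

llm = MockLLM()
print(llm.complete("Say hello").text)                      # CompletionResponse.text
print(llm.chat([ChatMessage(role="user", content="Hi")]))  # ChatResponse

embed_model = MockEmbedding(embed_dim=384)
vector = embed_model.get_text_embedding("hello world")
print(len(vector))  # 384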

Text Processing & Node Parsing

Comprehensive text splitting, parsing, and preprocessing capabilities including sentence splitting, semantic chunking, code-aware parsing, and hierarchical document processing.

class SentenceSplitter:
    def __init__(self, chunk_size: int = 1024, chunk_overlap: int = 200, **kwargs): ...
    def split_text(self, text: str) -> List[str]: ...

class SemanticSplitterNodeParser:
    def __init__(self, embed_model: Optional[BaseEmbedding] = None, **kwargs): ...

class MarkdownNodeParser:
    def get_nodes_from_documents(self, documents: Sequence[Document]) -> List[BaseNode]: ...

Node Parsers
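
A minimal chunking sketch with SentenceSplitter (the chunk sizes are illustrative):

from llama_index.core import Document
from llama_index.core.node_parser import SentenceSplitter

splitter = SentenceSplitter(chunk_size=128, chunk_overlap=16)
nodes = splitter.get_nodes_from_documents(
    [Document(text="First sentence. Second sentence. Third sentence.")]
)
for node in nodes:
    print(node.get_content())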

Response Processing & Postprocessing

Components for processing and refining retrieved results, including similarity filtering, reranking, metadata replacement, and recency scoring.

class SimilarityPostprocessor:
    def __init__(self, similarity_cutoff: Optional[float] = None): ...

class LLMRerank:
    def __init__(self, llm: Optional[LLM] = None, top_n: int = 10): ...

class PrevNextNodePostprocessor:
    def __init__(self, docstore: BaseDocumentStore, num_nodes: int = 1): ...

Postprocessors
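
For instance, SimilarityPostprocessor drops scored nodes that fall below a cutoff:

from llama_index.core.postprocessor import SimilarityPostprocessor
from llama_index.core.schema import NodeWithScore, TextNode

nodes = [
    NodeWithScore(node=TextNode(text="strong match"), score=0.91),
    NodeWithScore(node=TextNode(text="weak match"), score=0.22),
]
kept = SimilarityPostprocessor(similarity_cutoff=0.5).postprocess_nodes(nodes)
print([n.node.get_content() for n in kept])  # only "strong match" survives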

Prompt Templates & Management

Flexible prompt templating system supporting chat templates, conditional prompts, and integration with various LLM formats.

class PromptTemplate:
    def __init__(self, template: str, **kwargs): ...
    def format(self, **kwargs) -> str: ...

class ChatPromptTemplate:
    def __init__(self, message_templates: List[ChatMessage]): ...

class SelectorPromptTemplate:
    def __init__(self, default_template: BasePromptTemplate, conditionals: List[Tuple[Callable, BasePromptTemplate]]): ...

Prompts
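
A small formatting example (the template text is illustrative):

from llama_index.core import PromptTemplate

qa_template = PromptTemplate(
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Answer the query: {query_str}\n"
)
print(qa_template.format(
    context_str="LlamaIndex is a data framework for LLM applications.",
    query_str="What is LlamaIndex?",
))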

Storage & Persistence

Storage abstractions and context management for persisting indices, documents, and vector stores with support for various backends.

class StorageContext:
    @classmethod
    def from_defaults(cls, **kwargs) -> "StorageContext": ...
    
def load_index_from_storage(storage_context: StorageContext, **kwargs) -> BaseIndex: ...
def load_indices_from_storage(storage_context: StorageContext, **kwargs) -> List[BaseIndex]: ...

Storage
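
A typical persist-and-reload round trip, assuming an existing index (the ./storage path is illustrative):

from llama_index.core import StorageContext, load_index_from_storage

# Persist the index's docstore, index store, and vector store to disk
index.storage_context.persist(persist_dir="./storage")

# Later: rebuild the storage context and reload the same index
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)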

Agent Framework & Tools

Agent implementations supporting ReAct reasoning, function calling, and workflow orchestration with comprehensive tool integration.

class ReActAgent:
    def __init__(self, tools: List[BaseTool], llm: LLM, **kwargs): ...
    def chat(self, message: str, **kwargs) -> AgentChatResponse: ...

class FunctionTool:
    @classmethod
    def from_defaults(cls, fn: Callable, **kwargs) -> "FunctionTool": ...

class QueryEngineTool:
    def __init__(self, query_engine: BaseQueryEngine, metadata: ToolMetadata): ...

Agents & Tools
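
A sketch of tool construction; multiply is a hypothetical helper, and query_engine is assumed to exist from earlier:

from llama_index.core.tools import FunctionTool, QueryEngineTool, ToolMetadata

def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b

# Wrap a plain function; name and description are inferred from the signature and docstring
calc_tool = FunctionTool.from_defaults(fn=multiply)

# Expose an existing query engine as a tool an agent can call
docs_tool = QueryEngineTool(
    query_engine=query_engine,
    metadata=ToolMetadata(
        name="docs",
        description="Answers questions over the indexed documents.",
    ),
)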

Evaluation Framework

Comprehensive evaluation capabilities for RAG systems including retrieval metrics, response quality assessment, and dataset generation.

class RetrieverEvaluator:
    def __init__(self, metrics: Optional[List[BaseMetric]] = None): ...

class FaithfulnessEvaluator:
    def __init__(self, llm: Optional[LLM] = None): ...

class HitRate:
    def compute(self, query: str, expected_ids: List[str], retrieved_ids: List[str]) -> RetrievalMetricResult: ...

Evaluation
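
For example, faithfulness evaluation checks a response against its retrieved context; response here is the query-engine Response from Basic Usage, and in practice you would pass a real LLM rather than a mock:

from llama_index.core import Settings
from llama_index.core.evaluation import FaithfulnessEvaluator

evaluator = FaithfulnessEvaluator(llm=Settings.llm)
result = evaluator.evaluate_response(query="What is LlamaIndex?", response=response)
print(result.passing, result.feedback)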

Global Configuration

Centralized configuration system for managing LLMs, embeddings, callback handlers, and other global settings across the application.

class Settings:
    llm: Optional[LLM] = None
    embed_model: Optional[BaseEmbedding] = None
    callback_manager: Optional[CallbackManager] = None
    
def set_global_handler(eval_mode: str, **eval_params) -> None: ...

Settings & Configuration
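
Beyond the LLM and embedding model shown in Basic Usage, Settings also carries chunking and callback defaults:

from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager

Settings.chunk_size = 512      # default chunk size for node parsing
Settings.chunk_overlap = 50    # default chunk overlap
Settings.callback_manager = CallbackManager([])  # no-op callback manager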

Core Types

class QueryBundle:
    def __init__(self, query_str: str, embedding: Optional[List[float]] = None, **kwargs): ...

class NodeRelationship(str, Enum):
    SOURCE = "SOURCE"
    PREVIOUS = "PREVIOUS"
    NEXT = "NEXT"
    PARENT = "PARENT"
    CHILD = "CHILD"

class MetadataMode(str, Enum):
    ALL = "all"
    EMBED = "embed"
    LLM = "llm"
    NONE = "none"

class Response:
    response: Optional[str]
    source_nodes: List[NodeWithScore]
    metadata: Optional[Dict[str, Any]]

RESPONSE_TYPE = Union[Response, StreamingResponse, AsyncStreamingResponse, PydanticResponse]
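
MetadataMode controls which metadata fields are included when a node's content is rendered for the LLM versus the embedding model:

from llama_index.core.schema import MetadataMode, TextNode

node = TextNode(text="body text", metadata={"source": "a.txt"})
print(node.get_content(metadata_mode=MetadataMode.NONE))  # text only
print(node.get_content(metadata_mode=MetadataMode.LLM))   # text plus LLM-visible metadata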