LLM

A CLI utility and Python library for interacting with Large Language Models from multiple providers, including OpenAI, Anthropic's Claude, Google's Gemini, and Meta's Llama, over remote APIs or against locally installed models. The package provides functionality for executing prompts, storing conversations, generating embeddings, extracting structured content, and letting models execute tools through an extensible plugin system.

Package Information

  • Package Name: llm
  • Language: Python
  • Installation: pip install llm
  • Documentation: https://llm.datasette.io/
  • Repository: https://github.com/simonw/llm

Core Imports

import llm

Common imports for library usage:

from llm import get_model, get_async_model, Conversation, Response
from llm import AsyncModel, EmbeddingModel

For embedding operations:

from llm import Collection, get_embedding_model

For tools and templates:

from llm import Tool, Toolbox, Template

Basic Usage

Simple Model Interaction

import llm

# Get a model (defaults to gpt-4o-mini)
model = llm.get_model()

# Send a prompt and get response
response = model.prompt("What is the capital of France?")
print(response.text())
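
Responses can also be consumed incrementally; with the default streaming behavior, iterating over a response yields chunks of text as they arrive from the provider:

# Iterating streams the response chunk by chunk
for chunk in model.prompt("Write a haiku about Paris"):
    print(chunk, end="")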

Conversation with Tools

import llm

# Define a simple tool
def get_weather(location: str) -> str:
    """Get weather information for a location."""
    return f"The weather in {location} is sunny and 75°F"

# Create tool and model
weather_tool = llm.Tool.function(get_weather)
model = llm.get_model("gpt-4")

# Have a conversation with tool access; chain() runs any tool calls
# the model makes and feeds the results back automatically
conversation = model.conversation()
response = conversation.chain("What's the weather like in Paris?", tools=[weather_tool])
print(response.text())

Working with Embeddings

import llm

# Get embedding model and create a collection (in-memory unless a database is passed)
embedding_model = llm.get_embedding_model("ada-002")
collection = llm.Collection("documents", model=embedding_model)

# Add documents (store=True keeps the text so results can return it)
collection.embed("doc1", "Paris is the capital of France", store=True)
collection.embed("doc2", "London is the capital of England", store=True)

# Find similar documents
results = collection.similar("French capital city")
for entry in results:
    print(f"{entry.id}: {entry.content} (score: {entry.score})")

Architecture

The LLM package is built around several key architectural components:

Model Hierarchy

  • Model Classes: Abstract base classes for sync (Model, KeyModel) and async (AsyncModel, AsyncKeyModel) implementations
  • Discovery System: Plugin-based model registration and alias management
  • Response System: Streaming and non-streaming response handling with tool integration
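
As a minimal sketch of the discovery system described above, registered models can be enumerated at runtime; llm.get_models() returns every model contributed by the core package and installed plugins:

import llm

# List every registered model id
for model in llm.get_models():
    print(model.model_id)

# Aliases resolve through the same registry
model = llm.get_model("4o-mini")  # registered alias for gpt-4o-mini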

Conversation Management

  • Conversation Objects: Stateful conversation containers that maintain history and context
  • Prompt System: Rich prompt objects supporting attachments, tools, and structured schemas
  • Tool Integration: Function calling with automatic schema generation and chaining
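
For example, a conversation object carries earlier exchanges forward, so follow-up prompts can rely on context:

import llm

model = llm.get_model("gpt-4o-mini")
conversation = model.conversation()

# Each prompt is sent along with the conversation history so far
conversation.prompt("My name is Alice.").text()
response = conversation.prompt("What is my name?")
print(response.text())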

Plugin Ecosystem

  • Hook System: Pluggy-based hook specifications for extensibility
  • Model Registration: Plugin-provided models and embedding models
  • Tool Registration: Plugin-provided tools and toolboxes
  • Template/Fragment Loaders: Plugin-provided content processors
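
A minimal model plugin shows the hook system described above. EchoModel and the "echo" model id are hypothetical placeholders; register_models is the documented hook:

import llm

class EchoModel(llm.Model):
    model_id = "echo"  # hypothetical id for illustration

    def execute(self, prompt, stream, response, conversation):
        # A real plugin would call a provider API here
        yield prompt.prompt

@llm.hookimpl
def register_models(register):
    register(EchoModel())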

Data Storage

  • SQLite Backend: Conversation history, embeddings, and metadata storage
  • Configuration Management: User directory with keys, aliases, and settings
  • Migration System: Database schema evolution support
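
The storage locations can be inspected directly; assuming the default layout, conversation logs live in logs.db inside the user directory:

import llm

config_dir = llm.user_dir()
print(config_dir)              # e.g. ~/.config/io.datasette.llm on Linux
print(config_dir / "logs.db")  # SQLite database holding conversation logs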

This architecture enables the package to serve as both a standalone CLI tool and a comprehensive Python library while maintaining extensibility through its plugin system.

Capabilities

Models and Conversations

Core model management, conversation handling, prompt processing, and response streaming. Supports both synchronous and asynchronous operations with comprehensive error handling.

def get_model(name: Optional[str] = None) -> Model
def get_async_model(name: Optional[str] = None) -> AsyncModel
class Conversation:
    def prompt(self, prompt, **kwargs) -> Response
class Response:
    def text(self) -> str
    def __iter__(self) -> Iterator[str]
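
The async variants mirror the synchronous API. A minimal sketch, assuming an OpenAI key is configured:

import asyncio
import llm

async def main():
    model = llm.get_async_model("gpt-4o-mini")
    response = model.prompt("Name one French river")
    print(await response.text())

asyncio.run(main())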

See docs/models-and-conversations.md.

Tools and Toolboxes

Function calling system with automatic schema generation, tool chaining, and error handling. Supports both individual tools and organized toolbox collections.

class Tool:
    @classmethod
    def function(cls, function, name=None, description=None) -> Tool
class Toolbox:
    def tools(self) -> Iterable[Tool]
class ToolCall:
    function: str
    arguments: dict
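
A toolbox groups related tools as methods on a class; MathTools below is a hypothetical example. chain() executes any tool calls and feeds the results back to the model:

import llm

class MathTools(llm.Toolbox):
    # Each public method is exposed to the model as a tool
    def add(self, a: int, b: int) -> int:
        return a + b

    def multiply(self, a: int, b: int) -> int:
        return a * b

model = llm.get_model("gpt-4")
response = model.chain("What is 1234 * 5678?", tools=[MathTools()])
print(response.text())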

See docs/tools-and-toolboxes.md.

Embeddings and Vector Operations

Vector database operations with similarity search, metadata storage, and efficient batch processing across multiple embedding models.

class Collection:
    def embed(self, id: str, value: Union[str, bytes], metadata=None, store=False)
    def similar(self, value, number=10) -> List[Entry]
    def similar_by_id(self, id: str, number=10) -> List[Entry]
def cosine_similarity(a: List[float], b: List[float]) -> float
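
A short sketch of using an embedding model directly, without a collection:

import llm

model = llm.get_embedding_model("ada-002")

# embed() returns the vector as a list of floats
a = model.embed("Paris is the capital of France")
b = model.embed("The French capital is Paris")
print(llm.cosine_similarity(a, b))  # near 1.0 for similar texts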

See docs/embeddings.md.

Templates and Fragments

Prompt template system with variable substitution, attachment handling, and fragment management for reusable prompt components.

class Template:
    def evaluate(self, input: str, params=None) -> Tuple[Optional[str], Optional[str]]
    def vars(self) -> set
class Fragment(str):
    source: str
    def id(self) -> str
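
A short sketch of template evaluation, using the $input placeholder convention; evaluate() returns the rendered prompt and system strings:

import llm

template = llm.Template(
    name="summarize",
    prompt="Summarize the following text: $input",
    system="You are a concise assistant.",
)

prompt, system = template.evaluate("A long article about Paris...")
print(system)
print(prompt)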

See docs/templates.md.

Configuration and Key Management

User configuration directory management, API key storage and retrieval, model aliases, and default settings with environment variable support.

def user_dir() -> pathlib.Path
def get_key(input=None, alias=None, env=None) -> Optional[str]
def set_alias(alias: str, model_id: str)
def get_default_model() -> str
def set_default_model(model: str)
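
Typical configuration calls, matching the signatures above; get_key() falls back to the environment variable when no stored key is found:

import llm

print(llm.user_dir())  # directory containing keys.json, aliases.json, logs.db

# Resolve the OpenAI key from stored keys or the environment
api_key = llm.get_key(alias="openai", env="OPENAI_API_KEY")

llm.set_alias("fast", "gpt-4o-mini")  # "llm -m fast" now resolves
llm.set_default_model("gpt-4o-mini")
print(llm.get_default_model())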

See docs/configuration.md.

Plugin System

Extensible plugin architecture with hook specifications for registering models, tools, templates, and commands. Enables third-party extensions and custom integrations.

def get_plugins(all=False) -> List[dict]
def get_tools() -> Dict[str, Union[Tool, Type[Toolbox]]]
@hookimpl
def register_models(register): ...
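
A quick sketch of inspecting installed plugins and the tools they register at runtime:

import llm

# Each entry is a dict describing one installed plugin
for plugin in llm.get_plugins():
    print(plugin["name"])

# Tools contributed by plugins, keyed by name
for name, tool in llm.get_tools().items():
    print(name, tool)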

See docs/plugins.md.

Error Handling

The package provides comprehensive error handling through specialized exception classes:

class ModelError(Exception): ...
class NeedsKeyException(ModelError): ...
class UnknownModelError(KeyError): ...
class CancelToolCall(Exception): ...

Common error scenarios include missing API keys, unknown model names, tool execution cancellation, and network connectivity issues.
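
A typical guard around a prompt call using these exceptions:

import llm

try:
    model = llm.get_model("gpt-4o-mini")
    print(model.prompt("Hello").text())
except llm.UnknownModelError:
    print("No model with that name or alias is installed")
except llm.NeedsKeyException:
    print("Set a key first, e.g. with: llm keys set openai")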

CLI Interface

The package includes a full-featured command-line interface accessible via the llm command:

# Basic usage
llm "What is the capital of France?"

# With specific model
llm -m claude-3-sonnet "Explain quantum computing"

# Interactive conversation
llm chat

# Manage models and keys
llm models list
llm keys set openai

# Embedding operations (store text in a collection, then search it)
llm embed docs doc1 -c "Paris is the capital of France" --store
llm similar docs -c "French capital city"

The CLI provides access to all library functionality including model management, conversation handling, tool execution, embedding operations, and plugin management.

Constants and Utilities

DEFAULT_MODEL = "gpt-4o-mini"

def encode(values: List[float]) -> bytes
def decode(binary: bytes) -> List[float]
def schema_dsl(schema_dsl: str) -> Dict[str, Any]

The package includes utility functions for encoding vectors to and from their binary storage format and for generating JSON schemas from compact DSL strings.
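
For example, vectors round-trip through the binary encoding used for SQLite storage, and schema_dsl() expands a compact description into a JSON schema dictionary:

import llm

vector = [0.25, 0.5, 0.75]
binary = llm.encode(vector)  # pack floats into compact bytes
print(llm.decode(binary))    # the original floats

# "name, age int, bio: a short biography" -> JSON schema dict
print(llm.schema_dsl("name, age int, bio: a short biography"))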