CLI utility and Python library for interacting with Large Language Models from multiple providers, including OpenAI, Anthropic's Claude, Google's Gemini, and Meta's Llama, as well as locally installed models.
```bash
npx @tessl/cli install tessl/pypi-llm@0.27.0
```

The package supports both remote APIs and locally installed models, and provides extensive functionality for executing prompts, storing conversations, generating embeddings, extracting structured content, and enabling models to execute tools through an extensible plugin system.
```bash
pip install llm
```

```python
import llm
```

Common imports for library usage:

```python
from llm import get_model, get_async_model, Conversation, Response
from llm import AsyncModel, EmbeddingModel
```

For embedding operations:

```python
from llm import Collection, get_embedding_model
```

For tools and templates:

```python
from llm import Tool, Toolbox, Template
```
Basic usage:

```python
import llm

# Get a model (defaults to gpt-4o-mini)
model = llm.get_model()

# Send a prompt and get the response
response = model.prompt("What is the capital of France?")
print(response.text())
```
Using tools:

```python
import llm

# Define a simple tool
def get_weather(location: str) -> str:
    """Get weather information for a location."""
    return f"The weather in {location} is sunny and 75°F"

# Create tool and model
weather_tool = llm.Tool.function(get_weather)
model = llm.get_model("gpt-4")

# Have a conversation with tool access
conversation = model.conversation()
response = conversation.prompt("What's the weather like in Paris?", tools=[weather_tool])
print(response.text())
```
Working with embeddings:

```python
import llm

# Get embedding model and create collection
embedding_model = llm.get_embedding_model("ada-002")
collection = llm.Collection("documents", model=embedding_model)

# Add documents (store=True keeps the text so it can be returned with results)
collection.embed("doc1", "Paris is the capital of France", store=True)
collection.embed("doc2", "London is the capital of England", store=True)

# Find similar documents
results = collection.similar("French capital city")
for entry in results:
    print(f"{entry.id}: {entry.content} (score: {entry.score})")
```

The LLM package is built around several key architectural components: model management and conversations, a tool execution system, embeddings and vector collections, prompt templates, configuration and key management, and a plugin system.
This architecture enables the package to serve as both a standalone CLI tool and a comprehensive Python library while maintaining extensibility through its plugin system.
Core model management, conversation handling, prompt processing, and response streaming. Supports both synchronous and asynchronous operations with comprehensive error handling.
```python
def get_model(name: Optional[str] = None) -> Model
def get_async_model(name: Optional[str] = None) -> AsyncModel

class Conversation:
    def prompt(self, prompt, **kwargs) -> Response

class Response:
    def text(self) -> str
    def __iter__(self) -> Iterator[str]
```
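A short sketch of streaming and asynchronous usage (the model id and exact awaiting details are illustrative and may vary by version):

```python
import asyncio
import llm

# Streaming: iterating a Response yields text chunks as they arrive
for chunk in llm.get_model("gpt-4o-mini").prompt("Tell me a joke"):
    print(chunk, end="")

# Async: get_async_model returns a model whose responses are awaited
async def main():
    model = llm.get_async_model("gpt-4o-mini")
    response = model.prompt("Tell me a joke")
    print(await response.text())

asyncio.run(main())
```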
Function calling system with automatic schema generation, tool chaining, and error handling. Supports both individual tools and organized toolbox collections.

```python
class Tool:
    @classmethod
    def function(cls, function, name=None, description=None) -> Tool

class Toolbox:
    def tools(self) -> Iterable[Tool]

class ToolCall:
    function: str
    arguments: dict
```
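A hedged sketch of grouping related functions in a Toolbox subclass; the Weather class and its methods are illustrative, and it is assumed that methods are exposed as tools via tools():

```python
import llm

# Illustrative Toolbox subclass: each public method becomes a tool
class Weather(llm.Toolbox):
    def current_temperature(self, city: str) -> str:
        """Return the current temperature for a city."""
        return f"21°C in {city}"

    def forecast(self, city: str) -> str:
        """Return a short forecast for a city."""
        return f"Sunny all week in {city}"

# tools() expands the toolbox into individual Tool objects
for tool in Weather().tools():
    print(tool)
```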
Vector database operations with similarity search, metadata storage, and efficient batch processing. Supports multiple embedding models and custom similarity metrics.

```python
class Collection:
    def embed(self, id: str, value: Union[str, bytes], metadata=None, store=False)
    def similar(self, value, number=10) -> List[Entry]
    def similar_by_id(self, id: str, number=10) -> List[Entry]

def cosine_similarity(a: List[float], b: List[float]) -> float
```
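cosine_similarity can also be used directly on raw vectors, without a Collection; a sketch assuming the "ada-002" embedding model is available:

```python
import llm

# Embed two strings and compare the resulting vectors
model = llm.get_embedding_model("ada-002")
a = model.embed("Paris is the capital of France")
b = model.embed("The French capital is Paris")
print(llm.cosine_similarity(a, b))  # close to 1.0 for similar texts
```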
Prompt template system with variable substitution, attachment handling, and fragment management for reusable prompt components.

```python
class Template:
    def evaluate(self, input: str, params=None) -> Tuple[Optional[str], Optional[str]]
    def vars(self) -> set

class Fragment(str):
    source: str
    def id(self) -> str
```
User configuration directory management, API key storage and retrieval, model aliases, and default settings with environment variable support.

```python
def user_dir() -> pathlib.Path
def get_key(input=None, alias=None, env=None) -> Optional[str]
def set_alias(alias: str, model_id: str)
def get_default_model() -> str
def set_default_model(model: str)
```
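A sketch of typical configuration calls; the alias and environment variable names are illustrative:

```python
import llm

# Look up a stored key, falling back to an environment variable
api_key = llm.get_key(alias="openai", env="OPENAI_API_KEY")

# Point a short alias at a longer model id, then inspect defaults
llm.set_alias("sonnet", "claude-3-sonnet")
print(llm.get_default_model())  # e.g. "gpt-4o-mini"
print(llm.user_dir())           # configuration directory path
```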
Extensible plugin architecture with hook specifications for registering models, tools, templates, and commands. Enables third-party extensions and custom integrations.

```python
def get_plugins(all=False) -> List[dict]
def get_tools() -> Dict[str, Union[Tool, Type[Toolbox]]]

@hookimpl
def register_models(register): ...
```
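A minimal plugin sketch; MyEchoModel is a hypothetical Model subclass included only to show the hook shape:

```python
import llm

class MyEchoModel(llm.Model):
    model_id = "my-echo"

    def execute(self, prompt, stream, response, conversation):
        # Hypothetical implementation: echo the prompt back
        yield prompt.prompt

@llm.hookimpl
def register_models(register):
    register(MyEchoModel())
```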
The package provides comprehensive error handling through specialized exception classes:

```python
class ModelError(Exception): ...
class NeedsKeyException(ModelError): ...
class UnknownModelError(KeyError): ...
class CancelToolCall(Exception): ...
```

Common error scenarios include missing API keys, unknown model names, tool execution cancellation, and network connectivity issues.
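For example, the two most common failures can be handled explicitly (a sketch):

```python
import llm

try:
    model = llm.get_model("gpt-4")
    print(model.prompt("Hello").text())
except llm.UnknownModelError:
    print("No model with that name is installed")
except llm.NeedsKeyException:
    print("An API key is required, e.g. run: llm keys set openai")
```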
The package includes a full-featured command-line interface accessible via the `llm` command:
```bash
# Basic usage
llm "What is the capital of France?"

# With a specific model
llm -m claude-3-sonnet "Explain quantum computing"

# Interactive conversation
llm chat

# Manage models and keys
llm models list
llm keys set openai

# Embedding operations: store an item in a collection, then search it
llm embed documents doc1 -m ada-002 -c "Paris is the capital of France"
llm similar documents -c "French capital city"
```

The CLI provides access to all library functionality including model management, conversation handling, tool execution, embedding operations, and plugin management.
The package also includes utility functions for vector encoding/decoding, schema generation from DSL strings, and various text processing operations:

```python
DEFAULT_MODEL = "gpt-4o-mini"

def encode(values: List[float]) -> bytes
def decode(binary: bytes) -> List[float]
def schema_dsl(schema_dsl: str) -> Dict[str, Any]
```
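A small sketch of these helpers, assuming decode reverses encode's compact binary packing and that schema_dsl accepts "name, age int"-style field lists:

```python
import llm

# Round-trip a vector through the binary storage format
vector = [0.25, -0.5, 1.0]
binary = llm.encode(vector)
print(llm.decode(binary))  # the same values back as floats

# Build a JSON schema dict from a compact DSL string
print(llm.schema_dsl("name, age int, bio"))
```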