tessl/pypi-anthropic

The official Python library for the anthropic API

Anthropic Python SDK

The official Python library for the Anthropic REST API, providing type-safe access to Claude AI models with both sync and async support.

Package Information

  • Package Name: anthropic
  • Package Type: Python SDK
  • Language: Python 3.8+
  • Installation: pip install anthropic
  • Repository: https://github.com/anthropics/anthropic-sdk-python
  • License: MIT

Installation

pip install anthropic

Platform-specific extras:

  • pip install "anthropic[bedrock]" - AWS Bedrock support
  • pip install "anthropic[vertex]" - Google Vertex AI support
  • pip install "anthropic[aiohttp]" - aiohttp-based HTTP transport

(The quotes keep shells like zsh from expanding the brackets.)

Quick Start

Basic Message

from anthropic import Anthropic

client = Anthropic()  # Reads ANTHROPIC_API_KEY from environment

message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello, Claude"}]
)

print(message.content[0].text)

Async Message

import asyncio

from anthropic import AsyncAnthropic

client = AsyncAnthropic()  # Reads ANTHROPIC_API_KEY from environment

async def main() -> None:
    message = await client.messages.create(
        model="claude-sonnet-4-5-20250929",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Hello"}]
    )
    print(message.content[0].text)

asyncio.run(main())

Stream Response

with client.messages.stream(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a story"}]
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)

Available Models

Claude 4.5 (Latest):

  • claude-opus-4-5-20251101 - Most capable
  • claude-sonnet-4-5-20250929 - Balanced (recommended)

Claude 3.5:

  • claude-3-5-sonnet-20241022 - Previous Sonnet
  • claude-3-5-haiku-20241022 - Fast and cost-effective

→ Complete model list and selection guide

Common Tasks

Choose based on your use case:

Basic Messaging

  • Simple text conversations
  • System prompts
  • Multi-turn conversations
  • Temperature control

Multimodal Input

  • Image analysis (JPG, PNG, GIF, WebP)
  • PDF document processing
  • Mixed content (text + images + documents)
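The content-block shapes involved look like this (a sketch of the wire format; the image bytes and question are placeholders for real data):

```python
import base64

# Image bytes would normally come from disk; here we only show the shape.
image_b64 = base64.standard_b64encode(b"...image bytes...").decode()

content = [
    {
        "type": "image",
        "source": {
            "type": "base64",
            "media_type": "image/png",  # also image/jpeg, image/gif, image/webp
            "data": image_b64,
        },
    },
    # PDFs use {"type": "document"} with media_type "application/pdf".
    {"type": "text", "text": "What does this chart show?"},
]

# Pass the mixed content as a single user turn:
messages = [{"role": "user", "content": content}]
```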

Tool Integration

  • Function calling basics
  • Auto-execution with tool_runner
  • Async tools
  • Error handling in tools

Streaming Responses

  • Real-time text streaming
  • Event-based processing
  • Token usage tracking
  • Error handling

Batch Processing

  • Process thousands of requests
  • 50% cost reduction
  • High-throughput scenarios
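Each batch request wraps ordinary Messages parameters under a `custom_id`; the review texts below are placeholders:

```python
# Build the request list for a batch submission.
requests = [
    {
        "custom_id": f"review-{i}",  # your key for matching results later
        "params": {
            "model": "claude-3-5-haiku-20241022",
            "max_tokens": 256,
            "messages": [{"role": "user", "content": text}],
        },
    }
    for i, text in enumerate(["Great product!", "Arrived broken."])
]

# Submit with: batch = client.messages.batches.create(requests=requests)
# Poll batch status, then iterate client.messages.batches.results(batch.id).
```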

API Quick Reference

Fast lookup for method signatures and parameters:

Messages API - Core message creation

  • create() - Create single message
  • stream() - Stream message response
  • count_tokens() - Estimate token usage

Streaming API - Real-time response processing

  • stream() - Context manager for streaming
  • Event types and handling
  • Helper methods

Tools API - Function calling

  • @beta_tool decorator
  • tool_runner() - Auto-execution
  • Manual tool handling

Batches API - Async batch processing

  • create() - Submit batch
  • retrieve() - Check status
  • results() - Get outputs

Models API - Model information

  • retrieve() - Get model details
  • list() - Browse available models

Detailed API Documentation

In-depth reference with all parameters, types, and examples:

Core APIs

  • Messages API - Complete messages API reference with all parameters, types, and examples
  • Streaming API - Detailed streaming architecture, events, and patterns
  • Tools API - Function calling with decorators and manual definitions
  • Batches API - Batch processing for high-throughput use cases
  • Models API - Model information and selection
  • Completions API - Legacy text completions API (deprecated, use Messages API instead)

Implementation Guides

Platform Integrations

Reference Documentation

Beta Features

Access experimental capabilities via client.beta namespace:

Beta Overview - Introduction to beta features

Message Enhancement Features:

→ All Message Features

Resource Management:

Client Configuration

Basic Setup

from anthropic import Anthropic

# Environment variable (recommended)
client = Anthropic()  # Uses ANTHROPIC_API_KEY

# Explicit API key
client = Anthropic(api_key="your-api-key")

# Context manager (automatic cleanup)
with Anthropic() as client:
    message = client.messages.create(...)

Common Configurations

import httpx

# Custom timeout
client = Anthropic(timeout=120.0)

# Granular timeout
client = Anthropic(
    timeout=httpx.Timeout(
        connect=10.0,
        read=60.0,
        write=10.0,
        pool=10.0
    )
)

# Retry configuration
client = Anthropic(max_retries=5)

# Custom headers
client = Anthropic(
    default_headers={"X-Custom": "value"}
)

→ Complete configuration reference

Error Handling

Basic Pattern

from anthropic import APIError, RateLimitError

try:
    message = client.messages.create(...)
except RateLimitError as e:
    retry_after = e.response.headers.get("retry-after")
    print(f"Rate limited. Retry after {retry_after}s")
except APIError as e:
    print(f"API error: {e.message}")

Exception Hierarchy

AnthropicError
└── APIError
    ├── APIStatusError
    │   ├── BadRequestError (400)
    │   ├── AuthenticationError (401)
    │   ├── PermissionDeniedError (403)
    │   ├── NotFoundError (404)
    │   ├── RateLimitError (429)
    │   └── InternalServerError (≥500)
    ├── APIConnectionError
    │   └── APITimeoutError
    └── APIResponseValidationError

→ Complete error reference and retry patterns

→ Error handling guide with advanced patterns

Environment Variables

  • ANTHROPIC_API_KEY - API key for authentication (required)
  • ANTHROPIC_BASE_URL - Override base URL (optional)
  • ANTHROPIC_AUTH_TOKEN - Bearer token alternative (optional)

Platform-specific variables documented in platform guides.

SDK Architecture

Client Hierarchy

  • Anthropic / AsyncAnthropic - Main clients for direct API access
  • AnthropicBedrock / AsyncAnthropicBedrock - AWS Bedrock integration
  • AnthropicVertex / AsyncAnthropicVertex - Google Vertex AI integration
  • AnthropicFoundry / AsyncAnthropicFoundry - Azure AI Foundry integration

Resource Structure

client.messages          # Messages resource
  .create()              # Create message
  .stream()              # Stream message
  .count_tokens()        # Count tokens
  .batches               # Batches sub-resource
    .create()            # Create batch
    .retrieve()          # Get batch status
    .list()              # List batches
    .cancel()            # Cancel batch
    .delete()            # Delete batch
    .results()           # Get batch results

client.beta              # Beta features namespace
  .messages              # Beta messages with additional features
    .create()            # Create with beta features
    .stream()            # Stream with beta features
    .tool_runner()       # Auto-execute tools
  .skills                # Skills management
  .files                 # File management

client.models            # Models information
  .retrieve()            # Get model info
  .list()                # List models

Type System

All requests and responses use Pydantic models for type safety:

class Message(BaseModel):
    id: str
    type: Literal["message"]
    role: Literal["assistant"]
    content: list[ContentBlock]
    model: str
    stop_reason: StopReason | None
    usage: Usage

ContentBlock = Union[TextBlock, ToolUseBlock]
StopReason = Literal["end_turn", "max_tokens", "stop_sequence", "tool_use"]

→ Complete type definitions

Decision Guide for Common Scenarios

"I need to send a message to Claude"

Basic Messaging or Messages API

"I need to process images or PDFs"

Multimodal Input or Multimodal Guide

"I need Claude to call functions/use tools"

Tool Integration or Tools API

"I need real-time streaming output"

Streaming Responses or Streaming API

"I need to process thousands of messages"

Batch Processing Guide or Batches API

"I'm getting errors"

Error Reference or Error Handling Guide

"I need extended reasoning/thinking"

Beta Overview → Extended Thinking

"I need web search or code execution"

Beta Overview → Message Features

"I'm using AWS/GCP/Azure"

Platform Integrations → Choose your platform

Package Constants

# Client Configuration Constants
DEFAULT_TIMEOUT: float = 600.0  # 10 minutes default timeout for requests
DEFAULT_MAX_RETRIES: int = 2  # Default number of retry attempts
DEFAULT_CONNECTION_LIMITS: httpx.Limits  # Default HTTP connection pool limits

# Legacy Text Completion Prompt Constants
HUMAN_PROMPT: str = "\n\nHuman:"  # Legacy prompt marker for human messages
AI_PROMPT: str = "\n\nAssistant:"  # Legacy prompt marker for AI responses

# Sentinel Values
NOT_GIVEN: NotGiven  # Sentinel indicating parameter not provided

Note: HUMAN_PROMPT and AI_PROMPT are legacy constants for the deprecated Text Completions API. Use the Messages API instead for new applications.

Support Resources

Install with Tessl CLI

npx tessl i tessl/pypi-anthropic@0.75.0
