# Anthropic Python SDK
The official Python library for the Anthropic REST API, providing type-safe access to Claude AI models with both sync and async support.
## Installation

```bash
pip install anthropic
```

Platform-specific extras:

- `pip install anthropic[bedrock]` - AWS Bedrock
- `pip install anthropic[vertex]` - Google Vertex AI
- `pip install anthropic[aiohttp]` - Alternative async HTTP client

## Quick Start

### Synchronous client

```python
from anthropic import Anthropic

client = Anthropic()  # Reads ANTHROPIC_API_KEY from environment

message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello, Claude"}]
)
print(message.content[0].text)
```

### Asynchronous client

```python
from anthropic import AsyncAnthropic

client = AsyncAnthropic()

message = await client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}]
)
```

### Streaming

```python
with client.messages.stream(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a story"}]
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
```

## Models

Claude 4.5 (latest):

- `claude-opus-4-5-20250929` - Most capable
- `claude-sonnet-4-5-20250929` - Balanced (recommended)

Claude 3.5:

- `claude-3-5-sonnet-20241022` - Previous Sonnet
- `claude-3-5-haiku-20241022` - Fast and cost-effective

→ Complete model list and selection guide
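The streaming loop above can be exercised without network access using a stand-in stream object. Everything below (`FakeStream`, `fake_messages_stream`) is hypothetical scaffolding, not SDK classes — only the consumption pattern matches the real API:

```python
from contextlib import contextmanager

class FakeStream:
    """Stand-in for the SDK's message stream object (hypothetical)."""
    def __init__(self, chunks):
        self._chunks = chunks

    @property
    def text_stream(self):
        # Yields text deltas one at a time, like stream.text_stream
        yield from self._chunks

@contextmanager
def fake_messages_stream(chunks):
    # Mimics the client.messages.stream(...) context-manager shape
    yield FakeStream(chunks)

collected = []
with fake_messages_stream(["Once", " upon", " a", " time"]) as stream:
    for text in stream.text_stream:
        collected.append(text)

result = "".join(collected)
print(result)  # Once upon a time
```

The context-manager shape matters: exiting the `with` block is what lets the real SDK close the underlying HTTP response.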
Choose based on your use case:

## API Quick Reference

Fast lookup for method signatures and parameters:

**Messages API** - Core message creation
- `create()` - Create a single message
- `stream()` - Stream a message response
- `count_tokens()` - Estimate token usage

**Streaming API** - Real-time response processing
- `stream()` - Context manager for streaming

**Tools API** - Function calling
- `@beta_tool` - Tool definition decorator
- `tool_runner()` - Automatic tool execution

**Batches API** - Asynchronous batch processing
- `create()` - Submit a batch
- `retrieve()` - Check batch status
- `results()` - Get batch outputs

**Models API** - Model information
- `retrieve()` - Get model details
- `list()` - Browse available models

In-depth reference with all parameters, types, and examples:

## Beta Features

Access experimental capabilities via the `client.beta` namespace:

- Beta Overview - Introduction to beta features
- Message Enhancement Features
- Resource Management
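The core step behind automatic tool execution is dispatching a model-requested tool call to a registered Python function. A minimal sketch of that dispatch (all names below are illustrative; the SDK's `tool_runner()` additionally loops the results back to the model):

```python
def tool_runner(tools, name, arguments):
    """Illustrative dispatcher: look up a tool by name and call it
    with the arguments the model supplied."""
    registry = {fn.__name__: fn for fn in tools}
    return registry[name](**arguments)

def get_weather(city: str) -> str:
    """Example tool a model might request."""
    return f"Sunny in {city}"

output = tool_runner([get_weather], "get_weather", {"city": "Paris"})
print(output)  # Sunny in Paris
```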
## Client Configuration

```python
from anthropic import Anthropic

# Environment variable (recommended)
client = Anthropic()  # Uses ANTHROPIC_API_KEY

# Explicit API key
client = Anthropic(api_key="your-api-key")

# Context manager (automatic cleanup)
with Anthropic() as client:
    message = client.messages.create(...)
```

```python
import httpx

from anthropic import Anthropic

# Custom timeout
client = Anthropic(timeout=120.0)

# Granular timeouts
client = Anthropic(
    timeout=httpx.Timeout(
        connect=10.0,
        read=60.0,
        write=10.0,
        pool=10.0
    )
)

# Retry configuration
client = Anthropic(max_retries=5)

# Custom headers
client = Anthropic(
    default_headers={"X-Custom": "value"}
)
```

→ Complete configuration reference
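Headers set via `default_headers` are sent on every request, with per-request headers taking precedence on conflict. A rough sketch of that precedence (illustrative only, not the SDK's internal merge logic):

```python
def merge_headers(default_headers, request_headers):
    """Sketch of header precedence: client-level defaults first,
    per-request values override them on conflict."""
    return {**default_headers, **request_headers}

merged = merge_headers(
    {"X-Custom": "value"},                      # from default_headers
    {"X-Custom": "override", "X-Request": "1"}  # per-request headers
)
print(merged)  # {'X-Custom': 'override', 'X-Request': '1'}
```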
## Error Handling

```python
from anthropic import APIError, RateLimitError

try:
    message = client.messages.create(...)
except RateLimitError as e:
    retry_after = e.response.headers.get("retry-after")
    print(f"Rate limited. Retry after {retry_after}s")
except APIError as e:
    print(f"API error: {e.message}")
```

Exception hierarchy:

```
AnthropicError
├── APIError
│   ├── APIStatusError
│   │   ├── BadRequestError (400)
│   │   ├── AuthenticationError (401)
│   │   ├── PermissionDeniedError (403)
│   │   ├── NotFoundError (404)
│   │   ├── RateLimitError (429)
│   │   └── InternalServerError (≥500)
│   ├── APIConnectionError
│   │   └── APITimeoutError
│   └── APIResponseValidationError
```

→ Complete error reference and retry patterns
→ Error handling guide with advanced patterns
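`RateLimitError` is the natural candidate for retry with exponential backoff. A minimal sketch of that pattern, using a stand-in exception so it runs without the SDK installed (the real code would catch `anthropic.RateLimitError` instead):

```python
import random
import time

class FakeRateLimitError(Exception):
    """Stand-in for anthropic.RateLimitError so the sketch runs offline."""

def with_backoff(fn, max_retries=3, base_delay=0.01):
    """Retry fn() on rate limits with exponential backoff plus jitter."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except FakeRateLimitError:
            if attempt == max_retries:
                raise
            # Delay doubles each attempt; jitter avoids synchronized retries
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

calls = {"n": 0}

def flaky():
    """Fails twice with a simulated 429, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise FakeRateLimitError("simulated 429")
    return "ok"

result = with_backoff(flaky)
print(result, calls["n"])  # ok 3
```

Note that the client already retries transient failures automatically (`max_retries`); a loop like this is only needed for retry budgets beyond what the built-in mechanism provides.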
## Environment Variables

- `ANTHROPIC_API_KEY` - API key for authentication (required)
- `ANTHROPIC_BASE_URL` - Override the base URL (optional)
- `ANTHROPIC_AUTH_TOKEN` - Bearer token alternative (optional)

Platform-specific variables are documented in the platform guides.
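The precedence between an explicit `api_key` argument and the environment variable can be sketched as follows (`resolve_api_key` is a hypothetical helper; the client performs an equivalent lookup internally):

```python
import os

def resolve_api_key(explicit=None):
    """An explicit api_key argument wins; otherwise fall back to the
    ANTHROPIC_API_KEY environment variable."""
    if explicit is not None:
        return explicit
    key = os.environ.get("ANTHROPIC_API_KEY")
    if key is None:
        raise RuntimeError("Set ANTHROPIC_API_KEY or pass api_key explicitly")
    return key

os.environ["ANTHROPIC_API_KEY"] = "env-key"
from_env = resolve_api_key()
from_arg = resolve_api_key("explicit-key")
print(from_env, from_arg)  # env-key explicit-key
```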
## Client Structure

```
client.messages          # Messages resource
  .create()              # Create message
  .stream()              # Stream message
  .count_tokens()        # Count tokens
  .batches               # Batches sub-resource
    .create()            # Create batch
    .retrieve()          # Get batch status
    .list()              # List batches
    .cancel()            # Cancel batch
    .delete()            # Delete batch
    .results()           # Get batch results

client.beta              # Beta features namespace
  .messages              # Beta messages with additional features
    .create()            # Create with beta features
    .stream()            # Stream with beta features
    .tool_runner()       # Auto-execute tools
  .skills                # Skills management
  .files                 # File management

client.models            # Models information
  .retrieve()            # Get model info
  .list()                # List models
```

## Type System

All requests and responses use Pydantic models for type safety:
```python
class Message(BaseModel):
    id: str
    type: Literal["message"]
    role: Literal["assistant"]
    content: list[ContentBlock]
    model: str
    stop_reason: StopReason | None
    usage: Usage

ContentBlock = Union[TextBlock, ToolUseBlock]
StopReason = Literal["end_turn", "max_tokens", "stop_sequence", "tool_use"]
```

→ Basic Messaging or Messages API
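To see how these shapes fit together without installing pydantic, the same structure can be mimicked with stdlib dataclasses (stand-in types only; the real SDK models add validation and more fields):

```python
from dataclasses import dataclass
from typing import Literal, Optional, Union

@dataclass
class TextBlock:
    type: Literal["text"]
    text: str

@dataclass
class ToolUseBlock:
    type: Literal["tool_use"]
    id: str
    name: str
    input: dict

# Content blocks come in two flavors; code should branch on .type
ContentBlock = Union[TextBlock, ToolUseBlock]

@dataclass
class Message:
    id: str
    role: Literal["assistant"]
    content: list
    model: str
    stop_reason: Optional[str]

msg = Message(
    id="msg_123",
    role="assistant",
    content=[TextBlock(type="text", text="Hello!")],
    model="claude-sonnet-4-5-20250929",
    stop_reason="end_turn",
)
print(msg.content[0].text, msg.stop_reason)  # Hello! end_turn
```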
→ Multimodal Input or Multimodal Guide
→ Tool Integration or Tools API
→ Streaming Responses or Streaming API
→ Batch Processing Guide or Batches API
→ Error Reference or Error Handling Guide
→ Beta Overview → Extended Thinking
→ Beta Overview → Message Features
→ Platform Integrations → Choose your platform
## Constants

```python
# Client configuration constants
DEFAULT_TIMEOUT: float = 600.0           # 10-minute default request timeout
DEFAULT_MAX_RETRIES: int = 2             # Default number of retry attempts
DEFAULT_CONNECTION_LIMITS: httpx.Limits  # Default HTTP connection pool limits

# Legacy Text Completions prompt constants
HUMAN_PROMPT: str = "\n\nHuman:"         # Legacy prompt marker for human turns
AI_PROMPT: str = "\n\nAssistant:"        # Legacy prompt marker for assistant turns

# Sentinel values
NOT_GIVEN: NotGiven                      # Sentinel indicating a parameter was not provided
```

Note: `HUMAN_PROMPT` and `AI_PROMPT` are legacy constants for the deprecated Text Completions API; use the Messages API for new applications.
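The `NOT_GIVEN` sentinel exists so the client can tell an omitted parameter apart from one explicitly set to `None`. A self-contained sketch of the pattern (stand-in class, not the SDK's implementation):

```python
class NotGivenType:
    """Stand-in for the SDK's NOT_GIVEN sentinel (illustrative)."""
    def __repr__(self):
        return "NOT_GIVEN"

NOT_GIVEN = NotGivenType()

def create(max_tokens=NOT_GIVEN, stop_sequences=NOT_GIVEN):
    """Forward only the parameters the caller actually supplied, so an
    explicit None is distinguishable from an omitted argument."""
    supplied = {"max_tokens": max_tokens, "stop_sequences": stop_sequences}
    return {k: v for k, v in supplied.items() if v is not NOT_GIVEN}

print(create(max_tokens=100))       # {'max_tokens': 100}
print(create(stop_sequences=None))  # {'stop_sequences': None}
print(create())                     # {}
```

Using a default of `None` instead would make `stop_sequences=None` indistinguishable from not passing the argument at all, which is exactly what the sentinel avoids.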