tessl/pypi-tavily-python

Python wrapper for the Tavily API with search, extract, crawl, and map capabilities

  • Workspace: tessl
  • Visibility: Public
  • Describes: pkg:pypi/tavily-python@0.7.x (PyPI)

To install, run

npx @tessl/cli install tessl/pypi-tavily-python@0.7.0


Tavily Python

A comprehensive Python wrapper for the Tavily API that provides intelligent web search, content extraction, web crawling, and website mapping. It is designed for seamless integration into AI applications, RAG systems, and research tools, with built-in error handling and support for both synchronous and asynchronous operations.

Package Information

  • Package Name: tavily-python
  • Language: Python
  • Installation: pip install tavily-python
  • Requirements: Python ≥3.6

Core Imports

from tavily import TavilyClient, AsyncTavilyClient

Note: The legacy Client class is also available but deprecated. Use TavilyClient instead:

from tavily import Client  # Deprecated - use TavilyClient instead

For error handling:

from tavily import (
    TavilyClient,
    InvalidAPIKeyError,
    UsageLimitExceededError,
    MissingAPIKeyError,
    BadRequestError
)

Additional error types can be imported directly from the errors module:

from tavily.errors import ForbiddenError, TimeoutError

Note: ForbiddenError and TimeoutError are internal exception classes used by the client but not exported in the main package. They can be imported directly from the errors module if needed for advanced error handling.

For hybrid RAG functionality:

from tavily import TavilyHybridClient

Basic Usage

from tavily import TavilyClient

# Initialize client with API key
client = TavilyClient(api_key="tvly-YOUR_API_KEY")

# Perform a basic search
response = client.search("What is machine learning?")
print(response)

# Get search context for RAG applications
context = client.get_search_context(
    query="Latest developments in AI",
    max_tokens=4000
)

# Extract content from URLs
content = client.extract(["https://example.com/article"])

# Crawl a website
crawl_results = client.crawl(
    "https://docs.python.org",
    max_depth=2,
    limit=10
)

Architecture

The Tavily Python wrapper is built around several key components:

  • Client Classes: Synchronous (TavilyClient) and asynchronous (AsyncTavilyClient) API clients
  • Hybrid RAG: Integration with vector databases for combined local and web search
  • Error Handling: Comprehensive exception hierarchy for robust error management
  • Token Management: Utilities for managing response sizes and token limits
  • Proxy Support: Built-in support for HTTP/HTTPS proxies

This design enables flexible integration patterns while maintaining simplicity for basic use cases and power for advanced applications.
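
The same configuration pattern applies to both client classes. A minimal sketch, assuming the proxies keyword argument and the TAVILY_API_KEY environment-variable fallback available in recent releases (the proxy URL is a placeholder):

import os
from tavily import TavilyClient, AsyncTavilyClient

# API key may be passed explicitly or read from the TAVILY_API_KEY environment variable
api_key = os.environ.get("TAVILY_API_KEY", "tvly-YOUR_API_KEY")

# Optional HTTP/HTTPS proxy configuration (assumed dict form)
proxies = {
    "http": "http://proxy.example.com:8080",
    "https": "http://proxy.example.com:8080",
}

sync_client = TavilyClient(api_key=api_key, proxies=proxies)
async_client = AsyncTavilyClient(api_key=api_key, proxies=proxies)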

Capabilities

Web Search Operations

Comprehensive web search functionality including basic and advanced search modes, topic-focused searches, time-constrained queries, and specialized Q&A and company information searches.

def search(
    query: str,
    search_depth: Literal["basic", "advanced"] = None,
    topic: Literal["general", "news", "finance"] = None,
    max_results: int = None,
    **kwargs
) -> dict: ...

def get_search_context(
    query: str,
    max_tokens: int = 4000,
    **kwargs
) -> str: ...

def qna_search(
    query: str,
    search_depth: Literal["basic", "advanced"] = "advanced",
    **kwargs
) -> str: ...

Search Operations
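
A short usage sketch of these entry points. The query strings and parameter values (topic="news", max_results=5, max_tokens=2000) are illustrative, and the field access assumes the standard Tavily response shape (a results list with title and url keys):

from tavily import TavilyClient

client = TavilyClient(api_key="tvly-YOUR_API_KEY")

# Advanced, news-focused search capped at five results
news = client.search(
    "semiconductor supply chain",
    search_depth="advanced",
    topic="news",
    max_results=5,
)
for result in news["results"]:
    print(result["title"], result["url"])

# Condensed context string sized for a RAG prompt
context = client.get_search_context("semiconductor supply chain", max_tokens=2000)

# Q&A search returns a short answer string rather than a result dict
answer = client.qna_search("What is the tallest building in the world?")
print(answer)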

Content Extraction and Web Crawling

Extract content from individual URLs or crawl entire websites with intelligent navigation, content filtering, and structured data extraction capabilities.

def extract(
    urls: Union[List[str], str],
    extract_depth: Literal["basic", "advanced"] = None,
    format: Literal["markdown", "text"] = None,
    **kwargs
) -> dict: ...

def crawl(
    url: str,
    max_depth: int = None,
    max_breadth: int = None,
    instructions: str = None,
    **kwargs
) -> dict: ...

Content Operations
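
A sketch of the two content operations. The URLs and parameter values are illustrative, and the response field access assumes the standard Tavily extract response shape (a results list with url and raw_content keys):

from tavily import TavilyClient

client = TavilyClient(api_key="tvly-YOUR_API_KEY")

# Extract readable content from one or more pages as markdown
extracted = client.extract(
    ["https://example.com/article", "https://example.com/about"],
    extract_depth="basic",
    format="markdown",
)
for page in extracted["results"]:
    print(page["url"], len(page.get("raw_content", "")))

# Crawl a documentation site, steering the crawler with natural-language instructions
crawl_results = client.crawl(
    "https://docs.python.org",
    max_depth=2,
    max_breadth=10,
    instructions="Focus on pages about the asyncio module",
)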

Website Structure Mapping

Discover and map website structures without extracting full content, useful for understanding site architecture and finding relevant pages before detailed crawling.

def map(
    url: str,
    max_depth: int = None,
    max_breadth: int = None,
    instructions: str = None,
    **kwargs
) -> dict: ...

Website Mapping
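
For example (a sketch; the response is assumed to contain the list of discovered URLs rather than page content):

from tavily import TavilyClient

client = TavilyClient(api_key="tvly-YOUR_API_KEY")

# Map the site structure without extracting page content
site_map = client.map(
    "https://docs.python.org",
    max_depth=2,
    instructions="Only pages under the library reference",
)
print(site_map)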

Asynchronous Operations

Full async/await support for all API operations, enabling high-performance concurrent requests and integration with async frameworks.

class AsyncTavilyClient:
    async def search(self, query: str, **kwargs) -> dict: ...
    async def extract(self, urls: Union[List[str], str], **kwargs) -> dict: ...
    async def crawl(self, url: str, **kwargs) -> dict: ...
    async def map(self, url: str, **kwargs) -> dict: ...

Async Operations
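
A sketch of issuing several requests concurrently with asyncio (the query strings are illustrative):

import asyncio
from tavily import AsyncTavilyClient

async def main():
    client = AsyncTavilyClient(api_key="tvly-YOUR_API_KEY")

    # Fire off several searches concurrently instead of awaiting them one by one
    queries = ["vector databases", "retrieval augmented generation", "web crawling"]
    responses = await asyncio.gather(*(client.search(q) for q in queries))

    for query, response in zip(queries, responses):
        print(query, "->", len(response["results"]), "results")

asyncio.run(main())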

Hybrid RAG Integration

Combine Tavily's web search capabilities with local vector database searches for enhanced RAG applications, supporting MongoDB with custom embedding and ranking functions.

class TavilyHybridClient:
    def __init__(
        self,
        api_key: str,
        db_provider: Literal['mongodb'],
        collection,
        index: str,
        **kwargs
    ): ...
    
    def search(
        self,
        query: str,
        max_results: int = 10,
        save_foreign: bool = False,
        **kwargs
    ) -> list: ...

Hybrid RAG
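
A hedged setup sketch, assuming a MongoDB Atlas collection with a vector search index; the connection string, database, collection, and index names are placeholders:

from pymongo import MongoClient
from tavily import TavilyHybridClient

# Local vector store that backs the hybrid search
db = MongoClient("mongodb+srv://USER:PASSWORD@cluster.example.mongodb.net")["knowledge_base"]

hybrid = TavilyHybridClient(
    api_key="tvly-YOUR_API_KEY",
    db_provider="mongodb",
    collection=db["documents"],
    index="vector_index",
)

# Blend local vector hits with fresh web results;
# save_foreign=True would also persist new web results into the local collection
results = hybrid.search("company onboarding policy", max_results=10, save_foreign=False)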

Error Handling

All API operations can raise specific exceptions for different error conditions:

class InvalidAPIKeyError(Exception): ...
class UsageLimitExceededError(Exception): ...
class MissingAPIKeyError(Exception): ...
class BadRequestError(Exception): ...
class ForbiddenError(Exception): ...
class TimeoutError(Exception): ...

Handle errors appropriately in your applications:

from tavily import TavilyClient, InvalidAPIKeyError, UsageLimitExceededError, BadRequestError
from tavily.errors import ForbiddenError, TimeoutError

client = TavilyClient(api_key="your-key")

try:
    results = client.search("your query")
except InvalidAPIKeyError:
    print("Invalid API key provided")
except UsageLimitExceededError:
    print("API usage limit exceeded")
except ForbiddenError:
    print("Access forbidden")
except TimeoutError as e:
    print(f"Request timed out after {e.timeout} seconds")
except BadRequestError as e:
    print(f"Bad request: {e}")
except Exception as e:
    print(f"Unexpected error: {e}")

Types

# Type aliases for parameter constraints
SearchDepth = Literal["basic", "advanced"]
Topic = Literal["general", "news", "finance"]
TimeRange = Literal["day", "week", "month", "year"]
ExtractDepth = Literal["basic", "advanced"]
ContentFormat = Literal["markdown", "text"]
AnswerMode = Union[bool, Literal["basic", "advanced"]]
RawContentMode = Union[bool, Literal["markdown", "text"]]

# Common parameter types
Domains = Sequence[str]
Paths = Sequence[str]