tessl/pypi-mistralai

Python Client SDK for the Mistral AI API with chat completions, embeddings, fine-tuning, and agent capabilities.

Agents

Generate completions using pre-configured AI agents with specialized tools and context. The Agents API runs completions against agents that were created beforehand through other channels (for example, the web console or the agents management API).

Capabilities

Agent Completion

Generate responses using a configured agent identified by its ID. The agent completion API supports both synchronous and streaming responses.

def complete(
    messages: List[Union[SystemMessage, UserMessage, AssistantMessage, ToolMessage]],
    agent_id: str,
    max_tokens: Optional[int] = None,
    stream: Optional[bool] = False,
    stop: Optional[Union[str, List[str]]] = None,
    random_seed: Optional[int] = None,
    response_format: Optional[ResponseFormat] = None,
    tools: Optional[List[Tool]] = None,
    tool_choice: Optional[Union[str, ToolChoice]] = None,
    presence_penalty: Optional[float] = None,
    frequency_penalty: Optional[float] = None,
    n: Optional[int] = None,
    prediction: Optional[Prediction] = None,
    parallel_tool_calls: Optional[bool] = None,
    prompt_mode: Optional[str] = None,
    **kwargs
) -> ChatCompletionResponse:
    """
    Generate completion using an agent.

    Parameters:
    - messages: The prompt(s) to generate completions for, encoded as a list with role and content
    - agent_id: The ID of the agent to use for this completion
    - max_tokens: The maximum number of tokens to generate
    - stream: Whether to stream back partial progress (defaults to False)
    - stop: Up to 4 sequences where the API will stop generating further tokens
    - random_seed: The seed to use for random sampling
    - response_format: Format specification for structured outputs
    - tools: A list of tools the model may call
    - tool_choice: Controls which (if any) tool is called by the model
    - presence_penalty: Number between -2.0 and 2.0 for presence penalty
    - frequency_penalty: Number between -2.0 and 2.0 for frequency penalty
    - n: How many chat completion choices to generate for each input message
    - prediction: Prediction object for speculative decoding
    - parallel_tool_calls: Whether to enable parallel function calling
    - prompt_mode: Allows toggling between reasoning mode and no system prompt

    Returns:
    ChatCompletionResponse with agent-generated content
    """
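When `response_format` requests structured output (for example, JSON mode), downstream code still typically parses and validates the model's reply text. A minimal validation sketch, independent of the SDK — the reply string and the required-keys list here are illustrative assumptions, not output from a real agent:

```python
import json

def parse_structured_reply(text, required_keys):
    """Parse a JSON-mode reply and check it contains the expected keys."""
    data = json.loads(text)
    missing = [k for k in required_keys if k not in data]
    if missing:
        raise ValueError(f"reply missing keys: {missing}")
    return data

# Illustrative reply text, shaped like a JSON-mode agent response
reply = '{"city": "Paris", "country": "France"}'
print(parse_structured_reply(reply, ["city", "country"]))
```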

Agent Streaming

Stream completions from agents for real-time response generation.

def stream(
    messages: List[Union[SystemMessage, UserMessage, AssistantMessage, ToolMessage]],
    agent_id: str,
    max_tokens: Optional[int] = None,
    stream: Optional[bool] = True,
    stop: Optional[Union[str, List[str]]] = None,
    random_seed: Optional[int] = None,
    response_format: Optional[ResponseFormat] = None,
    tools: Optional[List[Tool]] = None,
    tool_choice: Optional[Union[str, ToolChoice]] = None,
    presence_penalty: Optional[float] = None,
    frequency_penalty: Optional[float] = None,
    n: Optional[int] = None,
    prediction: Optional[Prediction] = None,
    parallel_tool_calls: Optional[bool] = None,
    prompt_mode: Optional[str] = None,
    **kwargs
) -> Iterator[CompletionEvent]:
    """
    Stream completion using an agent.

    Parameters:
    - messages: The prompt(s) to generate completions for, encoded as a list with role and content
    - agent_id: The ID of the agent to use for this completion
    - max_tokens: The maximum number of tokens to generate
    - stream: Whether to stream back partial progress (defaults to True)
    - stop: Up to 4 sequences where the API will stop generating further tokens
    - random_seed: The seed to use for random sampling
    - response_format: Format specification for structured outputs
    - tools: A list of tools the model may call
    - tool_choice: Controls which (if any) tool is called by the model
    - presence_penalty: Number between -2.0 and 2.0 for presence penalty
    - frequency_penalty: Number between -2.0 and 2.0 for frequency penalty
    - n: How many chat completion choices to generate for each input message
    - prediction: Prediction object for speculative decoding
    - parallel_tool_calls: Whether to enable parallel function calling
    - prompt_mode: Allows toggling between reasoning mode and no system prompt

    Returns:
    Iterator of CompletionEvent objects with streaming content
    """

Usage Examples

Basic Agent Completion

from mistralai import Mistral
from mistralai.models import UserMessage

client = Mistral(api_key="your-api-key")

# Use an existing agent for completion
messages = [
    UserMessage(content="What is the capital of France? Please provide some context about the city.")
]

response = client.agents.complete(
    messages=messages,
    agent_id="ag_your_agent_id_here",
    max_tokens=500
)

print(response.choices[0].message.content)

Streaming Agent Completion

from mistralai.models import UserMessage

# Stream completion for real-time response (reuses `client` from the previous example)
messages = [
    UserMessage(content="Write a brief story about a robot learning to paint.")
]

for chunk in client.agents.stream(
    messages=messages,
    agent_id="ag_your_agent_id_here",
    max_tokens=800
):
    if chunk.data.choices:
        delta = chunk.data.choices[0].delta
        if delta.content:
            print(delta.content, end="", flush=True)
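Streamed deltas arrive as fragments, and a common pattern is to accumulate them into the full message once the loop ends. A sketch of that accumulation using stand-in chunk objects rather than a live API call — the attribute names (`data.choices[0].delta.content`) follow the shapes shown above, but the stand-in objects themselves are assumptions:

```python
from types import SimpleNamespace

def accumulate_stream(chunks):
    """Concatenate the content deltas of a stream into one string."""
    parts = []
    for chunk in chunks:
        if chunk.data.choices:
            delta = chunk.data.choices[0].delta
            if delta.content:
                parts.append(delta.content)
    return "".join(parts)

def fake_chunk(text):
    # Stand-in mimicking a CompletionEvent from client.agents.stream(...)
    delta = SimpleNamespace(content=text)
    choice = SimpleNamespace(delta=delta)
    return SimpleNamespace(data=SimpleNamespace(choices=[choice]))

stream = [fake_chunk("Once "), fake_chunk("upon "), fake_chunk("a time.")]
print(accumulate_stream(stream))  # Once upon a time.
```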

Agent with Tools

from mistralai.models import UserMessage, FunctionTool, Function

# Using an agent configured with tools
messages = [
    UserMessage(content="What's the weather like in Paris today?")
]

response = client.agents.complete(
    messages=messages,
    agent_id="ag_weather_agent_id",
    tools=[
        FunctionTool(
            type="function",
            function=Function(
                name="get_weather",
                description="Get current weather for a location",
                parameters={
                    "type": "object",
                    "properties": {
                        "location": {"type": "string", "description": "City name"}
                    },
                    "required": ["location"]
                }
            )
        )
    ],
    tool_choice="auto"
)

# Handle tool calls if present
if response.choices[0].message.tool_calls:
    for tool_call in response.choices[0].message.tool_calls:
        function_name = tool_call.function.name
        function_args = tool_call.function.arguments
        print(f"Agent called: {function_name} with args: {function_args}")
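When the agent requests a tool, the arguments typically arrive JSON-encoded and must be parsed before your code can execute the call. A hedged sketch of the dispatch step with a local handler registry — the `get_weather` stub is hypothetical, and the string-typed `arguments` is an assumption (the SDK may also deliver an already-parsed dict, which the sketch tolerates):

```python
import json

def get_weather(location):
    # Stub handler; a real implementation would call a weather service
    return {"location": location, "forecast": "sunny"}

TOOL_HANDLERS = {"get_weather": get_weather}

def dispatch_tool_call(name, arguments):
    """Parse JSON-encoded arguments if needed and invoke the matching handler."""
    if isinstance(arguments, str):
        arguments = json.loads(arguments)
    handler = TOOL_HANDLERS[name]
    return handler(**arguments)

result = dispatch_tool_call("get_weather", '{"location": "Paris"}')
print(result)  # {'location': 'Paris', 'forecast': 'sunny'}
```

The handler's result would then go back to the agent as a tool message in a follow-up `complete` call so it can produce the final answer.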

Types

Request Types

class AgentsCompletionRequest:
    messages: List[Union[SystemMessage, UserMessage, AssistantMessage, ToolMessage]]
    agent_id: str
    max_tokens: Optional[int]
    stream: Optional[bool]
    stop: Optional[Union[str, List[str]]]
    random_seed: Optional[int]
    response_format: Optional[ResponseFormat]
    tools: Optional[List[Tool]]
    tool_choice: Optional[Union[str, ToolChoice]]
    presence_penalty: Optional[float]
    frequency_penalty: Optional[float]
    n: Optional[int]
    prediction: Optional[Prediction]
    parallel_tool_calls: Optional[bool]
    prompt_mode: Optional[str]

class AgentsCompletionStreamRequest:
    messages: List[Union[SystemMessage, UserMessage, AssistantMessage, ToolMessage]]
    agent_id: str
    max_tokens: Optional[int]
    stream: Optional[bool]
    stop: Optional[Union[str, List[str]]]
    random_seed: Optional[int]
    response_format: Optional[ResponseFormat]
    tools: Optional[List[Tool]]
    tool_choice: Optional[Union[str, ToolChoice]]
    presence_penalty: Optional[float]
    frequency_penalty: Optional[float]
    n: Optional[int]
    prediction: Optional[Prediction]
    parallel_tool_calls: Optional[bool]
    prompt_mode: Optional[str]

Response Types

class ChatCompletionResponse:
    id: str
    object: str
    created: int
    model: str
    choices: List[ChatCompletionChoice]
    usage: Optional[UsageInfo]

class CompletionEvent:
    data: ChatCompletionResponse
    event: str
    id: Optional[str]
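The request types above mirror the JSON body sent to the agents completion endpoint, where unset optional fields are simply omitted. A sketch of assembling such a payload as a plain dict — the field names follow `AgentsCompletionRequest`, but this helper is illustrative, not part of the SDK:

```python
def build_agents_request(messages, agent_id, **options):
    """Assemble an AgentsCompletionRequest-shaped payload as a plain dict."""
    payload = {"messages": messages, "agent_id": agent_id}
    # Include only the optional fields that were explicitly set
    payload.update({k: v for k, v in options.items() if v is not None})
    return payload

req = build_agents_request(
    [{"role": "user", "content": "Hello"}],
    "ag_your_agent_id_here",
    max_tokens=500,
    stream=None,  # unset, so it is omitted from the payload
)
print(sorted(req))  # ['agent_id', 'max_tokens', 'messages']
```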

Install with Tessl CLI

npx tessl i tessl/pypi-mistralai
