
# Agents

Generate completions using pre-configured AI agents with specialized tools and context. The Agents API provides completion functionality for agents that have been created through other channels (the web console, the API, etc.).

## Capabilities

### Agent Completion

Generate responses using a configured agent identified by its ID. The agent completion API supports both synchronous and streaming responses.

```python { .api }
def complete(
    messages: List[Union[SystemMessage, UserMessage, AssistantMessage, ToolMessage]],
    agent_id: str,
    max_tokens: Optional[int] = None,
    stream: Optional[bool] = False,
    stop: Optional[Union[str, List[str]]] = None,
    random_seed: Optional[int] = None,
    response_format: Optional[ResponseFormat] = None,
    tools: Optional[List[Tool]] = None,
    tool_choice: Optional[Union[str, ToolChoice]] = None,
    presence_penalty: Optional[float] = None,
    frequency_penalty: Optional[float] = None,
    n: Optional[int] = None,
    prediction: Optional[Prediction] = None,
    parallel_tool_calls: Optional[bool] = None,
    prompt_mode: Optional[str] = None,
    **kwargs
) -> ChatCompletionResponse:
    """
    Generate a completion using an agent.

    Parameters:
    - messages: The prompt(s) to generate completions for, encoded as a list with role and content
    - agent_id: The ID of the agent to use for this completion
    - max_tokens: The maximum number of tokens to generate
    - stream: Whether to stream back partial progress (defaults to False)
    - stop: Up to 4 sequences where the API will stop generating further tokens
    - random_seed: The seed to use for random sampling
    - response_format: Format specification for structured outputs
    - tools: A list of tools the model may call
    - tool_choice: Controls which (if any) tool is called by the model
    - presence_penalty: Number between -2.0 and 2.0 for the presence penalty
    - frequency_penalty: Number between -2.0 and 2.0 for the frequency penalty
    - n: How many chat completion choices to generate for each input message
    - prediction: Prediction object for speculative decoding
    - parallel_tool_calls: Whether to enable parallel function calling
    - prompt_mode: Toggles between reasoning mode and no system prompt

    Returns:
        ChatCompletionResponse with agent-generated content
    """
```

### Agent Streaming

Stream completions from agents for real-time response generation.

```python { .api }
def stream(
    messages: List[Union[SystemMessage, UserMessage, AssistantMessage, ToolMessage]],
    agent_id: str,
    max_tokens: Optional[int] = None,
    stream: Optional[bool] = True,
    stop: Optional[Union[str, List[str]]] = None,
    random_seed: Optional[int] = None,
    response_format: Optional[ResponseFormat] = None,
    tools: Optional[List[Tool]] = None,
    tool_choice: Optional[Union[str, ToolChoice]] = None,
    presence_penalty: Optional[float] = None,
    frequency_penalty: Optional[float] = None,
    n: Optional[int] = None,
    prediction: Optional[Prediction] = None,
    parallel_tool_calls: Optional[bool] = None,
    prompt_mode: Optional[str] = None,
    **kwargs
) -> Iterator[CompletionEvent]:
    """
    Stream a completion using an agent.

    Parameters:
    - messages: The prompt(s) to generate completions for, encoded as a list with role and content
    - agent_id: The ID of the agent to use for this completion
    - max_tokens: The maximum number of tokens to generate
    - stream: Whether to stream back partial progress (defaults to True)
    - stop: Up to 4 sequences where the API will stop generating further tokens
    - random_seed: The seed to use for random sampling
    - response_format: Format specification for structured outputs
    - tools: A list of tools the model may call
    - tool_choice: Controls which (if any) tool is called by the model
    - presence_penalty: Number between -2.0 and 2.0 for the presence penalty
    - frequency_penalty: Number between -2.0 and 2.0 for the frequency penalty
    - n: How many chat completion choices to generate for each input message
    - prediction: Prediction object for speculative decoding
    - parallel_tool_calls: Whether to enable parallel function calling
    - prompt_mode: Toggles between reasoning mode and no system prompt

    Returns:
        Iterator of CompletionEvent objects with streaming content
    """
```

## Usage Examples

### Basic Agent Completion

```python
from mistralai import Mistral
from mistralai.models import UserMessage

client = Mistral(api_key="your-api-key")

# Use an existing agent for completion
messages = [
    UserMessage(content="What is the capital of France? Please provide some context about the city.")
]

response = client.agents.complete(
    messages=messages,
    agent_id="ag_your_agent_id_here",
    max_tokens=500
)

print(response.choices[0].message.content)
```

### Streaming Agent Completion

```python
from mistralai.models import UserMessage

# Stream completion for real-time response
messages = [
    UserMessage(content="Write a brief story about a robot learning to paint.")
]

for chunk in client.agents.stream(
    messages=messages,
    agent_id="ag_your_agent_id_here",
    max_tokens=800
):
    if chunk.data.choices:
        delta = chunk.data.choices[0].delta
        if delta.content:
            print(delta.content, end="", flush=True)
```
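A common pattern when streaming is to accumulate the per-chunk deltas into the final message text. This sketch is SDK-independent: the chunks below are simulated stand-ins for the `CompletionEvent` deltas, so the handling of empty deltas can be shown without a live API call.

```python
# Accumulate streamed content deltas into the final message text.
# The chunks here are simulated stand-ins for CompletionEvent data;
# with the real SDK you would iterate client.agents.stream(...) instead.

def accumulate_deltas(chunks):
    """Join the content deltas from a stream into one string."""
    parts = []
    for chunk in chunks:
        delta = chunk.get("content")
        if delta:  # deltas may be None on role-only or finish events
            parts.append(delta)
    return "".join(parts)

simulated_chunks = [
    {"content": "Once upon a time, "},
    {"content": None},  # e.g. a role-only delta
    {"content": "a robot learned to paint."},
]

full_text = accumulate_deltas(simulated_chunks)
print(full_text)
```

The same accumulator works unchanged whether the stream ends normally or is cut short, since it only ever joins the deltas seen so far.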

### Agent with Tools

```python
from mistralai.models import UserMessage, FunctionTool, Function

# Using an agent configured with tools
messages = [
    UserMessage(content="What's the weather like in Paris today?")
]

response = client.agents.complete(
    messages=messages,
    agent_id="ag_weather_agent_id",
    tools=[
        FunctionTool(
            type="function",
            function=Function(
                name="get_weather",
                description="Get current weather for a location",
                parameters={
                    "type": "object",
                    "properties": {
                        "location": {"type": "string", "description": "City name"}
                    },
                    "required": ["location"]
                }
            )
        )
    ],
    tool_choice="auto"
)

# Handle tool calls if present
if response.choices[0].message.tool_calls:
    for tool_call in response.choices[0].message.tool_calls:
        function_name = tool_call.function.name
        function_args = tool_call.function.arguments
        print(f"Agent called: {function_name} with args: {function_args}")
```
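After printing a tool call, a real application would execute the named function and send the result back to the agent in a follow-up turn (as a ToolMessage). A minimal sketch of the local dispatch step, with the tool-call object simulated as a plain dict rather than the SDK's typed object, and `get_weather` as a hypothetical stand-in for a real data source:

```python
import json

# Hypothetical tool implementation standing in for a real data source.
def get_weather(location: str) -> str:
    return f"Sunny, 22C in {location}"

# Local registry mapping tool names to implementations.
TOOLS = {"get_weather": get_weather}

def dispatch_tool_call(tool_call: dict) -> str:
    """Look up the named tool and call it with its JSON-encoded arguments."""
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    return TOOLS[name](**args)

# Simulated tool call, shaped like the data the agent returns:
# the function name plus a JSON string of arguments.
call = {"function": {"name": "get_weather",
                     "arguments": '{"location": "Paris"}'}}
result = dispatch_tool_call(call)
print(result)  # this result would go back to the agent as a ToolMessage
```

Note that `arguments` arrives as a JSON string, not a dict, so it must be decoded before calling the implementation.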

## Types

### Request Types

```python { .api }
class AgentsCompletionRequest:
    messages: List[Union[SystemMessage, UserMessage, AssistantMessage, ToolMessage]]
    agent_id: str
    max_tokens: Optional[int]
    stream: Optional[bool]
    stop: Optional[Union[str, List[str]]]
    random_seed: Optional[int]
    response_format: Optional[ResponseFormat]
    tools: Optional[List[Tool]]
    tool_choice: Optional[Union[str, ToolChoice]]
    presence_penalty: Optional[float]
    frequency_penalty: Optional[float]
    n: Optional[int]
    prediction: Optional[Prediction]
    parallel_tool_calls: Optional[bool]
    prompt_mode: Optional[str]

class AgentsCompletionStreamRequest:
    messages: List[Union[SystemMessage, UserMessage, AssistantMessage, ToolMessage]]
    agent_id: str
    max_tokens: Optional[int]
    stream: Optional[bool]
    stop: Optional[Union[str, List[str]]]
    random_seed: Optional[int]
    response_format: Optional[ResponseFormat]
    tools: Optional[List[Tool]]
    tool_choice: Optional[Union[str, ToolChoice]]
    presence_penalty: Optional[float]
    frequency_penalty: Optional[float]
    n: Optional[int]
    prediction: Optional[Prediction]
    parallel_tool_calls: Optional[bool]
    prompt_mode: Optional[str]
```
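On the wire these requests are JSON, with optional fields omitted when unset. The SDK builds this payload for you; the sketch below uses a plain dict rather than the SDK's own models, purely to illustrate how the required fields (`messages`, `agent_id`) and the optional ones might combine:

```python
import json

def build_agent_payload(messages, agent_id, **options):
    """Assemble a minimal agents-completion payload, dropping unset options."""
    payload = {"messages": messages, "agent_id": agent_id}
    # Only include options the caller actually set.
    payload.update({k: v for k, v in options.items() if v is not None})
    return payload

payload = build_agent_payload(
    messages=[{"role": "user", "content": "Hello"}],
    agent_id="ag_your_agent_id_here",
    max_tokens=500,
    stream=None,  # unset, so it is omitted from the payload
)
print(json.dumps(payload, indent=2))
```

Dropping `None`-valued options keeps the payload minimal and lets the server apply its own defaults.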

### Response Types

```python { .api }
class ChatCompletionResponse:
    id: str
    object: str
    created: int
    model: str
    choices: List[ChatCompletionChoice]
    usage: Optional[UsageInfo]

class CompletionEvent:
    data: ChatCompletionResponse
    event: str
    id: Optional[str]
```
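For orientation, here is how those response fields map onto a raw JSON body. The sample below is illustrative, not captured from the API, and the SDK normally does this parsing for you; it only shows where the generated text and token counts live in the structure:

```python
import json

# Illustrative response body shaped like ChatCompletionResponse above.
sample = json.loads("""
{
  "id": "cmpl-123",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "mistral-large-latest",
  "choices": [
    {"index": 0,
     "message": {"role": "assistant", "content": "Paris is the capital of France."},
     "finish_reason": "stop"}
  ],
  "usage": {"prompt_tokens": 12, "completion_tokens": 9, "total_tokens": 21}
}
""")

# The generated text lives under choices[i].message.content ...
content = sample["choices"][0]["message"]["content"]
# ... and token accounting under the optional usage object.
total = sample["usage"]["total_tokens"]
print(content, total)
```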