# Agent System

Interactive LLM agents with tool usage, memory management, and conversational capabilities for complex reasoning tasks. Agents use large language models to reason through multi-step problems by selecting and using appropriate tools, maintaining conversation memory, and generating step-by-step solutions.

## Core Imports

```python
from haystack.agents import Agent, Tool, AgentStep
from haystack.agents.memory import Memory, ConversationMemory, ConversationSummaryMemory, NoMemory
from haystack.nodes import PromptNode
```

## Capabilities

### Agent

The main Agent class that uses tools and memory to answer complex queries through multi-step reasoning.

```python { .api }
class Agent:
    def __init__(
        self,
        prompt_node: PromptNode,
        prompt_template: Optional[Union[str, PromptTemplate]] = None,
        tools_manager: Optional[ToolsManager] = None,
        memory: Optional[Memory] = None,
        prompt_parameters_resolver: Optional[Callable] = None,
        max_steps: int = 8,
        final_answer_pattern: str = r"Final Answer\s*:\s*(.*)",
        streaming: bool = True,
    ):
        """
        Creates an Agent instance for multi-step reasoning and tool usage.

        Args:
            prompt_node: PromptNode for decision making and tool selection
            prompt_template: Custom template for the reasoning process
            tools_manager: Manager for available tools
            memory: Memory system for conversation context
            prompt_parameters_resolver: Function to resolve prompt parameters
            max_steps: Maximum reasoning steps before stopping
            final_answer_pattern: Regex pattern to extract final answers
            streaming: Whether to stream LLM responses
        """

    def run(self, query: str, params: Optional[dict] = None) -> AgentStep:
        """Execute multi-step reasoning to answer a query."""

    def add_tool(self, tool: Tool) -> None:
        """Add a tool to the agent's available tools."""
```
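
The `final_answer_pattern` is an ordinary regular expression the agent uses to detect and extract the final answer from the model's output. A minimal sketch of a custom marker, using only the standard `re` module (the `ANSWER:` marker and sample text are illustrative, not part of the library):

```python
import re

# Default marker the agent looks for in each LLM response
default_pattern = r"Final Answer\s*:\s*(.*)"

# An agent can be configured to look for a different marker instead
custom_pattern = r"ANSWER\s*:\s*(.*)"

match = re.search(custom_pattern, "Thought: I am done.\nANSWER: 42")
print(match.group(1))  # "42"

# agent = Agent(prompt_node=..., final_answer_pattern=custom_pattern)
```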

### Tool

Represents a pipeline or node that an Agent can use to perform specific tasks.

```python { .api }
class Tool:
    def __init__(
        self,
        name: str,
        pipeline_or_node: Union[BaseComponent, Pipeline, Callable[[Any], str]],
        description: str,
        output_variable: str = "results",
        logging_color: Color = Color.YELLOW,
    ):
        """
        Create a tool that an Agent can use.

        Args:
            name: Short name for the tool (letters, digits, underscores only)
            pipeline_or_node: Pipeline, node, or callable to execute
            description: Description of when to use this tool
            output_variable: Variable name for tool output
            logging_color: Color for logging output
        """

    def run(self, tool_input: str, params: Optional[dict] = None) -> str:
        """Execute the tool with given input."""
```

### AgentStep

Represents a single step in the Agent's reasoning process.

```python { .api }
class AgentStep:
    def __init__(
        self,
        current_step: int = 1,
        max_steps: int = 8,
        final_answer_pattern: str = r"Final Answer\s*:\s*(.*)",
        prompt_node_response: str = "",
    ):
        """
        A step in the Agent's reasoning process.

        Args:
            current_step: Current step number
            max_steps: Maximum allowed steps
            final_answer_pattern: Pattern to detect final answers
            prompt_node_response: LLM response for this step
        """

    @property
    def is_last(self) -> bool:
        """Check if this is the final step."""

    def extract_final_answer(self, text: str) -> Optional[str]:
        """Extract final answer from text using pattern."""
```
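
A minimal sketch of how a step evaluates a model response; the response string below is a hypothetical LLM output, and the printed values describe the expected behavior given the documented pattern:

```python
from haystack.agents import AgentStep

# Hypothetical LLM output for one reasoning step
response = "Thought: I now know the answer.\nFinal Answer: Paris"

step = AgentStep(current_step=3, max_steps=8, prompt_node_response=response)

# The default pattern r"Final Answer\s*:\s*(.*)" captures the text after the marker
print(step.extract_final_answer(response))  # "Paris"
print(step.is_last)  # True once a final answer is found or max_steps is reached
```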

## Memory System

### Memory Base Class

```python { .api }
class Memory:
    def load(self, keys: Optional[List[str]] = None, **kwargs) -> str:
        """Load memory content for the given keys."""

    def save(self, data: Dict[str, Any]) -> None:
        """Save data to memory."""

    def clear(self) -> None:
        """Clear all memory content."""
```
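
Custom backends can subclass `Memory` and implement these three methods. A minimal in-process sketch (the list-based storage is purely illustrative, not an interface the library provides):

```python
import json
from typing import Any, Dict, List, Optional

from haystack.agents.memory import Memory


class ListMemory(Memory):
    """Illustrative memory that keeps each saved snippet in a Python list."""

    def __init__(self):
        self._items: List[Dict[str, Any]] = []

    def load(self, keys: Optional[List[str]] = None, **kwargs) -> str:
        # Return all stored snippets as one newline-joined string
        return "\n".join(json.dumps(item) for item in self._items)

    def save(self, data: Dict[str, Any]) -> None:
        self._items.append(data)

    def clear(self) -> None:
        self._items = []
```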

### ConversationMemory

Simple memory that stores conversation history as a list of messages.

```python { .api }
class ConversationMemory(Memory):
    def __init__(self, window_size: Optional[int] = None):
        """
        Create conversation memory with optional sliding window.

        Args:
            window_size: Maximum number of conversation turns to remember
        """
```
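
A short sketch of the sliding window; the `input`/`output` keys passed to `save` are an assumption for illustration, since the `Memory` interface above only specifies a generic dict:

```python
from haystack.agents.memory import ConversationMemory

# Keep only the two most recent conversation turns
memory = ConversationMemory(window_size=2)

memory.save({"input": "What is 15 * 7?", "output": "105"})  # key names assumed
memory.save({"input": "Add 23 to that", "output": "128"})
memory.save({"input": "Divide that by 2", "output": "64"})

# Only the last two turns should remain in the loaded context
print(memory.load())
```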

### ConversationSummaryMemory

Memory that summarizes conversation history to stay within token limits.

```python { .api }
class ConversationSummaryMemory(Memory):
    def __init__(
        self,
        prompt_node: PromptNode,
        summary_frequency: int = 3,
        prompt_template: Optional[str] = None,
    ):
        """
        Create summarizing conversation memory.

        Args:
            prompt_node: PromptNode for generating summaries
            summary_frequency: How often to summarize (in conversation turns)
            prompt_template: Custom template for summarization
        """
```

### NoMemory

Memory implementation that stores nothing (stateless agent).

```python { .api }
class NoMemory(Memory):
    def __init__(self):
        """Create a memory system that stores nothing."""
```
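
If every query should be answered independently, pass `NoMemory` explicitly; a minimal sketch reusing the PromptNode setup from the examples below:

```python
from haystack.agents import Agent
from haystack.agents.memory import NoMemory
from haystack.nodes import PromptNode

# Stateless agent: nothing is carried over between run() calls
agent = Agent(
    prompt_node=PromptNode(model_name_or_path="gpt-3.5-turbo", api_key="your-key"),
    memory=NoMemory(),
)
```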

## Usage Examples

### Basic Agent with Tools

```python
from haystack import Pipeline, Document
from haystack.agents import Agent, Tool
from haystack.nodes import PromptNode, BM25Retriever
from haystack.document_stores import InMemoryDocumentStore

# Create document store and retriever
doc_store = InMemoryDocumentStore()
doc_store.write_documents([
    Document(content="Python is a programming language."),
    Document(content="Machine learning uses algorithms to find patterns.")
])
retriever = BM25Retriever(document_store=doc_store)

# Create search tool
search_tool = Tool(
    name="DocumentSearch",
    pipeline_or_node=retriever,
    description="Useful for finding information about programming and ML concepts"
)

# Create agent with OpenAI
agent = Agent(
    prompt_node=PromptNode(model_name_or_path="gpt-3.5-turbo", api_key="your-key"),
    max_steps=5
)
agent.add_tool(search_tool)

# Use agent
result = agent.run("What is Python and how is it used in machine learning?")
print(result.final_answer)
```

### Agent with Memory

```python
from haystack.agents import Agent, Tool
from haystack.agents.memory import ConversationMemory
from haystack.nodes import PromptNode

# Create agent with conversation memory
agent = Agent(
    prompt_node=PromptNode(model_name_or_path="gpt-3.5-turbo", api_key="your-key"),
    memory=ConversationMemory(window_size=10),
    max_steps=6
)

# Add calculator tool
def calculator(expression: str) -> str:
    try:
        return str(eval(expression))
    except Exception:
        return "Invalid expression"

calc_tool = Tool(
    name="Calculator",
    pipeline_or_node=calculator,
    description="Useful for mathematical calculations"
)
agent.add_tool(calc_tool)

# Multi-turn conversation
result1 = agent.run("What is 15 * 7?")
print(result1.final_answer)  # "105"

result2 = agent.run("Add 23 to that result")
print(result2.final_answer)  # "128" (remembers previous calculation)
```

### Agent with Pipeline Tools

```python
from haystack.agents import Agent, Tool
from haystack.pipelines import ExtractiveQAPipeline
from haystack.nodes import PromptNode, FARMReader, BM25Retriever

# Create QA pipeline (doc_store is the document store from the Basic Agent example)
qa_pipeline = ExtractiveQAPipeline(
    reader=FARMReader("deepset/roberta-base-squad2"),
    retriever=BM25Retriever(document_store=doc_store)
)

# Create QA tool
qa_tool = Tool(
    name="QuestionAnswering",
    pipeline_or_node=qa_pipeline,
    description="Useful for answering specific questions about documents"
)

# Create generation tool
generator = PromptNode(model_name_or_path="gpt-3.5-turbo", api_key="your-key")
gen_tool = Tool(
    name="TextGenerator",
    pipeline_or_node=generator,
    description="Useful for generating creative text or explanations"
)

# Create multi-tool agent
agent = Agent(
    prompt_node=PromptNode(model_name_or_path="gpt-3.5-turbo", api_key="your-key"),
    max_steps=8
)
agent.add_tool(qa_tool)
agent.add_tool(gen_tool)

# Complex query requiring multiple tools
result = agent.run("Find information about neural networks and then write a simple explanation for beginners")
print(result.final_answer)
```

### Custom Agent with Streaming

```python
from haystack.agents import Agent, Tool
from haystack.agents.memory import ConversationSummaryMemory
from haystack.nodes import PromptNode

# Create agent with summary memory and streaming
prompt_node = PromptNode(model_name_or_path="gpt-3.5-turbo", api_key="your-key")
memory = ConversationSummaryMemory(prompt_node=prompt_node, summary_frequency=5)

agent = Agent(
    prompt_node=prompt_node,
    memory=memory,
    streaming=True,  # Stream responses
    max_steps=10
)

# Add tools and run with streaming output
web_search_tool = Tool(
    name="WebSearch",
    pipeline_or_node=web_search_pipeline,  # Assume this exists
    description="Search the web for current information"
)
agent.add_tool(web_search_tool)

# The agent will stream its reasoning process
result = agent.run("What are the latest developments in quantum computing?")
```

318

319

## Types

```python { .api }
from typing import Optional, Union, Dict, Any, List, Callable
from enum import Enum

class Color(Enum):
    """Colors for logging tool outputs."""
    BLACK = "black"
    RED = "red"
    GREEN = "green"
    YELLOW = "yellow"
    BLUE = "blue"
    MAGENTA = "magenta"
    CYAN = "cyan"
    WHITE = "white"

class ToolsManager:
    """Manager for organizing and accessing agent tools."""
    def __init__(self):
        self.tools: Dict[str, Tool] = {}

    def add_tool(self, tool: Tool) -> None:
        """Add a tool to the manager."""

    def get_tool_names(self) -> List[str]:
        """Get names of all registered tools."""

    def get_tool_names_with_descriptions(self) -> str:
        """Get formatted string of tool names and descriptions."""
```
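
Tools can also be grouped in a `ToolsManager` and handed to the Agent through the `tools_manager` argument; a short sketch based on the signatures above (the import path for `ToolsManager` is an assumption, as this document does not specify one):

```python
from haystack.agents import Agent, Tool
from haystack.nodes import PromptNode
from haystack.agents.base import ToolsManager  # import path assumed

manager = ToolsManager()
manager.add_tool(Tool(
    name="Calculator",
    pipeline_or_node=lambda expression: str(eval(expression)),
    description="Useful for mathematical calculations",
))

print(manager.get_tool_names())                    # ["Calculator"]
print(manager.get_tool_names_with_descriptions())  # name/description listing used in the prompt

# Pass the manager at construction time instead of calling agent.add_tool() repeatedly
agent = Agent(
    prompt_node=PromptNode(model_name_or_path="gpt-3.5-turbo", api_key="your-key"),
    tools_manager=manager,
)
```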