# Agents & Workflows

Multi-agent systems and workflow orchestration for complex reasoning tasks, tool usage, and multi-step problem solving with event-driven execution.

## Capabilities

### Base Agent System

Core agent classes that combine workflow execution with LLM interactions for tool-calling and reasoning tasks.

```python { .api }
class BaseWorkflowAgent:
    """
    Base class for all workflow-based agents.

    Args:
        name: Agent identifier
        description: Agent capabilities description
        system_prompt: System prompt for LLM interactions
        tools: List of tools available to the agent
        llm: Language model instance
        initial_state: Initial workflow state
        streaming: Enable streaming responses
        output_cls: Structured output class
    """
    def __init__(
        self,
        name="Agent",
        description="An agent that can perform a task",
        system_prompt=None,
        tools=None,
        llm=None,
        initial_state=None,
        streaming=True,
        output_cls=None,
        **kwargs
    ): ...

    def run(
        self,
        user_msg=None,
        chat_history=None,
        memory=None,
        max_iterations=None,
        **kwargs
    ):
        """
        Execute the agent workflow.

        Args:
            user_msg: User input message
            chat_history: Previous conversation history
            memory: Memory instance for persistence
            max_iterations: Maximum reasoning steps

        Returns:
            WorkflowHandler: Async handler for results and streaming
        """

    async def get_tools(self, input_str=None):
        """Get available tools, optionally filtered by input."""
```

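The handler returned by `run()` is both awaitable (for the final result) and event-streaming. A toy sketch of that dual contract, not the library's implementation, can make the interface easier to picture:

```python
import asyncio

# Toy stand-in for the WorkflowHandler contract: stream_events() yields
# intermediate events, and awaiting the handler yields the final result.
class ToyHandler:
    def __init__(self, events, result):
        self._events = events  # pre-recorded events for the sketch
        self._result = result

    async def stream_events(self):
        for ev in self._events:
            await asyncio.sleep(0)  # yield control, as a real handler would
            yield ev

    def __await__(self):
        async def _get():
            return self._result
        return _get().__await__()

async def main():
    handler = ToyHandler(events=["thought", "tool_call"], result="42")
    seen = [ev async for ev in handler.stream_events()]
    final = await handler
    return seen, final

seen, final = asyncio.run(main())
```
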
### ReAct Agent

Reasoning-Action agent implementing the ReAct pattern for systematic problem solving with tool usage.

```python { .api }
class ReActAgent(BaseWorkflowAgent):
    """
    ReAct agent with thought-action-observation loop.

    Args:
        reasoning_key: Context key for storing current reasoning
        output_parser: Parser for ReAct format responses
        formatter: Chat message formatter
        **kwargs: BaseWorkflowAgent arguments
    """
    def __init__(
        self,
        reasoning_key="current_reasoning",
        output_parser=None,
        formatter=None,
        **kwargs
    ): ...
```

**Usage Example:**

```python
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool

def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Search results for {query}"

agent = ReActAgent(
    name="research_assistant",
    description="Performs research and calculations",
    system_prompt="You are a helpful research assistant.",
    tools=[
        FunctionTool.from_defaults(fn=add),
        FunctionTool.from_defaults(fn=search_web),
    ],
    llm=llm
)

# Execute with automatic thought-action-observation loop
result = await agent.run("Search for Python tutorials and count how many you find")
print(result.response.content)
```

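The thought-action-observation loop that `ReActAgent` automates can be sketched in plain Python. The scripted model output below stands in for real LLM responses; `react_loop` and the script format are illustrative only:

```python
# Toy illustration of the ReAct loop: think, act via a tool, observe the
# result, repeat until the model emits a final answer (action is None).
def add(a: int, b: int) -> int:
    return a + b

TOOLS = {"add": add}

# Scripted model turns: (thought, action, action_input); None action = done.
SCRIPT = [
    ("I should add the numbers", "add", {"a": 2, "b": 3}),
    ("I now know the answer", None, None),
]

def react_loop(script, tools, max_iterations=10):
    observations = []
    for step, (thought, action, action_input) in enumerate(script):
        if step >= max_iterations:
            break  # the max_iterations guard run() exposes
        if action is None:
            return observations  # final answer reached
        result = tools[action](**action_input)  # act
        observations.append((action, result))   # observe
    return observations

obs = react_loop(SCRIPT, TOOLS)
```
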
### Function Calling Agent

Function calling agent optimized for LLMs with native function calling support, enabling parallel tool execution.

```python { .api }
class FunctionAgent(BaseWorkflowAgent):
    """
    Function calling agent for function-calling LLMs.

    Args:
        scratchpad_key: Context key for scratchpad storage
        initial_tool_choice: Force initial tool selection
        allow_parallel_tool_calls: Enable parallel tool execution
        **kwargs: BaseWorkflowAgent arguments
    """
    def __init__(
        self,
        scratchpad_key="scratchpad",
        initial_tool_choice=None,
        allow_parallel_tool_calls=True,
        **kwargs
    ): ...
```

**Usage Example:**

```python
from llama_index.core.agent import FunctionAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI

def get_weather(location: str) -> str:
    """Get weather for a location."""
    return f"The weather in {location} is sunny"

def get_news(topic: str) -> str:
    """Get news about a topic."""
    return f"Latest news about {topic}"

agent = FunctionAgent(
    name="information_agent",
    description="Provides weather and news information",
    tools=[
        FunctionTool.from_defaults(fn=get_weather),
        FunctionTool.from_defaults(fn=get_news),
    ],
    llm=OpenAI(model="gpt-4"),  # Function-calling LLM required
    allow_parallel_tool_calls=True
)

result = await agent.run("What's the weather in NYC and latest tech news?")
```

### Code Execution Agent

Agent capable of generating and executing Python code within a controlled environment for data analysis and computation.

```python { .api }
class CodeActAgent(BaseWorkflowAgent):
    """
    Agent that can execute code within <execute> tags.

    Args:
        code_execute_fn: Function to execute generated code
        scratchpad_key: Context key for scratchpad storage
        code_act_system_prompt: System prompt for code generation
        **kwargs: BaseWorkflowAgent arguments
    """
    def __init__(
        self,
        code_execute_fn,
        scratchpad_key="scratchpad",
        code_act_system_prompt=None,
        **kwargs
    ): ...
```

**Usage Example:**

```python
import os
import subprocess
import tempfile
from typing import Any, Dict

from llama_index.core.agent import CodeActAgent

async def code_execute_fn(code: str) -> Dict[str, Any]:
    """Execute Python code and return results."""
    try:
        with tempfile.NamedTemporaryFile(mode='w', suffix='.py', delete=False) as f:
            f.write(code)
            temp_file = f.name

        result = subprocess.run(
            ['python', temp_file],
            capture_output=True,
            text=True,
            timeout=30
        )

        os.unlink(temp_file)

        return {
            'stdout': result.stdout,
            'stderr': result.stderr,
            'returncode': result.returncode
        }
    except Exception as e:
        return {'error': str(e)}

agent = CodeActAgent(
    code_execute_fn=code_execute_fn,
    name="data_analyst",
    description="Performs data analysis using Python",
    system_prompt="You can execute Python code to analyze data and solve problems.",
    llm=llm
)

result = await agent.run("Calculate the mean and standard deviation of [1, 2, 3, 4, 5]")
```

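The executor can be exercised on its own, independent of the agent. The compact variant below passes code via `-c` instead of a temp file and uses `sys.executable` rather than a bare `python`, so it does not depend on what is on PATH; this is a sketch, not a recommendation for a production sandbox:

```python
import asyncio
import subprocess
import sys

# Compact executor sketch: run code in a subprocess with a timeout so a
# runaway script cannot hang the agent, and capture stdout/stderr.
async def code_execute_fn(code: str) -> dict:
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
        timeout=30,
    )
    return {
        "stdout": result.stdout,
        "stderr": result.stderr,
        "returncode": result.returncode,
    }

out = asyncio.run(code_execute_fn("print(sum([1, 2, 3, 4, 5]) / 5)"))
```
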
### Multi-Agent Workflow

Orchestration system for coordinating multiple specialized agents with handoff capabilities and shared state management.

```python { .api }
class AgentWorkflow:
    """
    Multi-agent workflow with handoff support.

    Args:
        agents: List of participating agents
        initial_state: Initial workflow state
        root_agent: Starting agent name
        handoff_prompt: Prompt for handoff decisions
        output_cls: Structured output class
    """
    def __init__(
        self,
        agents,
        initial_state=None,
        root_agent=None,
        handoff_prompt=None,
        output_cls=None,
        **kwargs
    ): ...

    def run(self, user_msg=None, **kwargs):
        """Execute multi-agent workflow with automatic handoffs."""

    @classmethod
    def from_tools_or_functions(
        cls,
        tools_or_functions,
        llm,
        system_prompt=None,
        output_cls=None,
        **kwargs
    ):
        """Create workflow from tools, automatically selecting agent type."""
```

**Usage Example:**

```python
from llama_index.core.agent import AgentWorkflow, ReActAgent, FunctionAgent

# Define specialized agents
# (add_tool, web_search_tool, etc. are FunctionTool instances defined elsewhere)
calculator_agent = ReActAgent(
    name="calculator",
    description="Performs arithmetic operations",
    tools=[add_tool, subtract_tool, multiply_tool],
    llm=llm
)

research_agent = FunctionAgent(
    name="researcher",
    description="Searches for information online",
    tools=[web_search_tool, wikipedia_tool],
    llm=llm,
    can_handoff_to=["calculator"]  # Can hand off to calculator
)

writer_agent = FunctionAgent(
    name="writer",
    description="Writes and formats content",
    tools=[format_text_tool],
    llm=llm
)

# Create multi-agent workflow
workflow = AgentWorkflow(
    agents=[research_agent, calculator_agent, writer_agent],
    root_agent="researcher",  # Starting agent
    initial_state={"document_type": "report"}
)

# Execute with automatic handoffs
result = await workflow.run(
    "Research the GDP of top 5 countries, calculate their average, and write a summary"
)
```

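The handoff mechanism can be illustrated with a toy loop: each stand-in agent returns either a final answer or the name of the agent to hand off to, and shared state flows through the chain. Real agents make the handoff decision via the LLM and `handoff_prompt`; everything below is illustrative:

```python
# Toy sketch of multi-agent handoffs with shared state. Each stand-in agent
# returns ("handoff", next_agent_name) or ("done", final_answer).
def researcher(task, state):
    state["facts"] = [100, 200, 300]          # pretend research results
    return ("handoff", "calculator")

def calculator(task, state):
    state["average"] = sum(state["facts"]) / len(state["facts"])
    return ("handoff", "writer")

def writer(task, state):
    return ("done", f"Average GDP: {state['average']}")

AGENTS = {"researcher": researcher, "calculator": calculator, "writer": writer}

def run_workflow(task, root_agent, agents, initial_state=None):
    state = dict(initial_state or {})
    current = root_agent
    while True:
        status, payload = agents[current](task, state)
        if status == "done":
            return payload, state
        current = payload  # hand off to the named agent

answer, state = run_workflow("summarize GDP", "researcher", AGENTS)
```
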
### Event System

Event-driven execution model for workflow communication and state management.

```python { .api }
class Event:
    """Base event class for workflow communication."""

class StartEvent(Event):
    """Workflow entry point - accepts arbitrary attributes."""

class StopEvent(Event):
    """
    Workflow termination with result.

    Args:
        result: Final workflow result
    """
    result: Any

class AgentInput(Event):
    """
    LLM input to agent.

    Args:
        input: List of chat messages
        current_agent_name: Active agent identifier
    """
    input: List[ChatMessage]
    current_agent_name: str

class AgentOutput(Event):
    """
    Agent response output.

    Args:
        response: Agent response message
        structured_response: Structured output data
        current_agent_name: Active agent identifier
        tool_calls: List of tool calls made
    """
    response: ChatMessage
    structured_response: Optional[Dict[str, Any]]
    current_agent_name: str
    tool_calls: List[ToolSelection]

class ToolCall(Event):
    """
    Tool execution request.

    Args:
        tool_name: Name of tool to execute
        tool_kwargs: Tool arguments
        tool_id: Unique tool call identifier
    """
    tool_name: str
    tool_kwargs: dict
    tool_id: str

class ToolCallResult(Event):
    """
    Tool execution result.

    Args:
        tool_name: Name of executed tool
        tool_output: Tool execution output
        return_direct: Whether to return result directly
    """
    tool_name: str
    tool_output: ToolOutput
    return_direct: bool
```

### Tool Integration

Comprehensive tool integration patterns for extending agent capabilities with custom functions and external services.

```python { .api }
class FunctionTool:
    """Convert functions to agent tools."""

    @classmethod
    def from_defaults(
        cls,
        fn,
        name=None,
        description=None,
        return_direct=False,
        **kwargs
    ):
        """
        Create tool from function.

        Args:
            fn: Function to convert to tool
            name: Optional custom tool name
            description: Optional custom description
            return_direct: Return result without further processing
        """

class ToolSpec:
    """Base class for multi-function tool specifications."""

    spec_functions: List[str]  # Function names to expose

    def to_tool_list(self):
        """Convert specification to tool list."""
```

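When `name` and `description` are omitted, `from_defaults`-style helpers presumably derive them from the function itself. A sketch of that idea using plain introspection (assumed behavior, shown with a hypothetical `tool_metadata` helper, not the library code):

```python
import inspect

# Hypothetical helper: derive tool metadata from a function's name,
# docstring, and signature, falling back to explicit overrides.
def tool_metadata(fn, name=None, description=None):
    sig = inspect.signature(fn)
    return {
        "name": name or fn.__name__,
        "description": description or (fn.__doc__ or "").strip(),
        "parameters": list(sig.parameters),
    }

def search_web(query: str, max_results: int = 5) -> str:
    """Search the web for information."""
    return f"Search results for {query}"

meta = tool_metadata(search_web)
```
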
**Tool Usage Example:**

```python
from llama_index.core.tools import FunctionTool, ToolSpec

# Simple function tool
def search_web(query: str, max_results: int = 5) -> str:
    """Search the web for information."""
    return f"Search results for {query}"

search_tool = FunctionTool.from_defaults(
    fn=search_web,
    name="web_search",
    description="Search the web for current information"
)

# Tool specification for related functions
class WeatherToolSpec(ToolSpec):
    """Weather-related tools."""

    spec_functions = ["get_current_weather", "get_forecast"]

    def __init__(self, api_key: str):
        self.api_key = api_key

    def get_current_weather(self, location: str) -> str:
        """Get current weather for location."""
        return f"Current weather in {location}"

    def get_forecast(self, location: str, days: int = 3) -> str:
        """Get weather forecast."""
        return f"Forecast for {location} for {days} days"

# Use tools with agent
weather_spec = WeatherToolSpec(api_key="your-key")
agent = FunctionAgent(
    tools=[search_tool] + weather_spec.to_tool_list(),
    llm=llm
)
```

### Memory and State Management

Persistent memory and state management for maintaining context across agent interactions and workflow executions.

```python { .api }
class BaseMemory:
    """Base class for agent memory systems."""

    def get_all(self):
        """Get all stored messages."""

    def put(self, message):
        """Store a message in memory."""

    def reset(self):
        """Clear all stored messages."""

class ChatMemoryBuffer(BaseMemory):
    """
    Token-limited chat memory buffer.

    Args:
        token_limit: Maximum tokens to store
        llm: LLM for token counting
    """
    def __init__(self, token_limit=4000, llm=None): ...

class VectorMemory(BaseMemory):
    """
    Vector-based semantic memory.

    Args:
        vector_index: Vector index for storage
        retriever_kwargs: Arguments for retriever
    """
    def __init__(self, vector_index=None, retriever_kwargs=None): ...
```

**Memory Usage Example:**

```python
from llama_index.core.memory import ChatMemoryBuffer

# Create persistent memory
memory = ChatMemoryBuffer.from_defaults(
    token_limit=4000,
    llm=llm
)

# Use across multiple interactions
agent = ReActAgent(tools=tools, llm=llm)

# First conversation
result1 = await agent.run(
    user_msg="My name is Alice and I like Python programming",
    memory=memory
)

# Second conversation - memory persists
result2 = await agent.run(
    user_msg="What is my name and what do I like?",
    memory=memory  # Same memory instance
)
```

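A token limit implies an eviction policy: drop the oldest messages once the buffer would exceed the limit. A toy sketch of that idea, using whitespace word counts in place of the real tokenizer-based counting `ChatMemoryBuffer` performs:

```python
# Toy token-limited buffer: newest messages are kept, oldest evicted
# whenever the total "token" count (words here) exceeds the limit.
class ToyMemoryBuffer:
    def __init__(self, token_limit):
        self.token_limit = token_limit
        self.messages = []

    def put(self, message: str):
        self.messages.append(message)
        # Evict oldest messages until the buffer fits within the limit.
        while sum(len(m.split()) for m in self.messages) > self.token_limit:
            self.messages.pop(0)

    def get_all(self):
        return list(self.messages)

buf = ToyMemoryBuffer(token_limit=5)
for msg in ["one two three", "four five", "six seven"]:
    buf.put(msg)
```
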
### Streaming and Real-time Processing

Event streaming capabilities for real-time monitoring of agent execution and intermediate results.

```python { .api }
class AgentStream(Event):
    """
    Streaming response from agent.

    Args:
        delta: New content chunk
        response: Full response so far
        current_agent_name: Active agent
        tool_calls: Tool calls in progress
    """
    delta: str
    response: str
    current_agent_name: str
    tool_calls: List[ToolSelection]

class WorkflowHandler:
    """Handler for workflow execution with streaming support."""

    def stream_events(self):
        """Stream events as they occur during execution."""

    async def __aenter__(self):
        """Async context manager entry."""

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        """Async context manager exit."""
```

**Streaming Example:**

```python
# Enable streaming
agent = FunctionAgent(
    tools=tools,
    llm=llm,
    streaming=True
)

# Execute with streaming
handler = agent.run("Analyze this complex dataset...")

# Stream events in real-time
async for event in handler.stream_events():
    if isinstance(event, AgentStream):
        print(f"Agent thinking: {event.delta}")
    elif isinstance(event, ToolCall):
        print(f"Using tool: {event.tool_name}")
    elif isinstance(event, ToolCallResult):
        print(f"Tool result: {event.tool_output.content}")

# Get final result
final_result = await handler
print(f"Final answer: {final_result.response.content}")
```

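Each `AgentStream` event carries both the new chunk (`delta`) and the response accumulated so far. A toy consumer, with a fake stream standing in for a real agent, shows how the two fields relate:

```python
import asyncio
from dataclasses import dataclass

# Stand-in for AgentStream with just the two fields relevant here.
@dataclass
class AgentStream:
    delta: str
    response: str

async def fake_stream(chunks):
    """Yield events where response is the running concatenation of deltas."""
    text = ""
    for chunk in chunks:
        text += chunk
        await asyncio.sleep(0)
        yield AgentStream(delta=chunk, response=text)

async def main():
    deltas, final = [], ""
    async for ev in fake_stream(["Hel", "lo", "!"]):
        deltas.append(ev.delta)
        final = ev.response  # always the full text so far
    return deltas, final

deltas, final = asyncio.run(main())
```
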
### Error Handling and Retry

Built-in error handling and retry mechanisms for robust agent execution with automatic recovery from parsing and execution errors.

```python { .api }
class WorkflowRuntimeError(Exception):
    """Exception raised during workflow execution."""

# Agent automatically retries on parsing errors
# Maximum iterations prevent infinite loops
agent = ReActAgent(tools=tools, llm=llm)

try:
    result = await agent.run(
        user_msg="Complex multi-step task",
        max_iterations=50  # Prevent infinite reasoning loops
    )
except WorkflowRuntimeError as e:
    print(f"Agent execution failed: {e}")
```

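Beyond the built-in recovery, transient failures can be handled with a caller-side retry wrapper. This is a sketch of application code, not a library feature; the exception class is redefined locally so the example is self-contained:

```python
import asyncio

# Local stand-in for the workflow exception, for a self-contained example.
class WorkflowRuntimeError(Exception):
    pass

async def run_with_retry(coro_fn, attempts=3):
    """Re-run an agent call on workflow errors, up to `attempts` times."""
    last_error = None
    for _ in range(attempts):
        try:
            return await coro_fn()
        except WorkflowRuntimeError as e:
            last_error = e  # remember and retry
    raise last_error

calls = {"n": 0}

async def flaky():
    # Fails twice, then succeeds, simulating transient parse errors.
    calls["n"] += 1
    if calls["n"] < 3:
        raise WorkflowRuntimeError("transient parse failure")
    return "ok"

result = asyncio.run(run_with_retry(flaky))
```
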