# Agents & Tools

Agent implementations supporting ReAct reasoning, function calling, and workflow orchestration with comprehensive tool integration. The agent framework enables autonomous task execution through iterative reasoning, tool usage, and multi-step problem solving.

## Capabilities

### Base Agent Interfaces

Foundation interfaces for agent implementations with standardized interaction patterns and execution flows.

```python { .api }
class BaseWorkflowAgent:
    """
    Base interface for workflow-based agent implementations.

    Workflow agents coordinate complex multi-step processes using
    structured workflows and event-driven execution patterns.
    """
    def __init__(self, **kwargs): ...

    def run(self, input_data: Any, **kwargs) -> Any:
        """
        Execute agent workflow with input data.

        Parameters:
        - input_data: Any, input data for workflow execution
        - **kwargs: additional execution parameters

        Returns:
        - Any, workflow execution result
        """

    def stream_run(self, input_data: Any, **kwargs) -> Iterator[Any]:
        """
        Stream workflow execution results.

        Parameters:
        - input_data: Any, input data for workflow execution
        - **kwargs: additional execution parameters

        Returns:
        - Iterator[Any], streaming workflow results
        """
```
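To make the interface concrete, here is a pure-Python toy implementation of the `run`/`stream_run` contract. This is a sketch, not part of the library: `PipelineWorkflowAgent` and its `steps` list are hypothetical, and a real workflow agent would coordinate events rather than apply plain callables.

```python
from typing import Any, Callable, Iterator, List

class PipelineWorkflowAgent:
    """Toy workflow agent: applies a fixed pipeline of steps to the input."""

    def __init__(self, steps: List[Callable[[Any], Any]]):
        self.steps = steps  # callables applied in order

    def run(self, input_data: Any) -> Any:
        # Feed each step's output into the next, return the final result.
        for step in self.steps:
            input_data = step(input_data)
        return input_data

    def stream_run(self, input_data: Any) -> Iterator[Any]:
        # Same pipeline, but emit every intermediate result as it is produced.
        for step in self.steps:
            input_data = step(input_data)
            yield input_data

pipeline_agent = PipelineWorkflowAgent([str.strip, str.upper])
print(pipeline_agent.run("  hello "))            # HELLO
print(list(pipeline_agent.stream_run("  hi ")))  # ['hi', 'HI']
```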

### ReAct Agent Implementation

ReAct (Reason + Act) agent that combines reasoning and action in iterative cycles for complex problem solving.

```python { .api }
class ReActAgent:
    """
    ReAct (Reason + Act) agent implementation for iterative reasoning and action.

    The ReAct pattern combines reasoning traces and task-specific actions,
    allowing the agent to dynamically plan, act, and observe in cycles.

    Parameters:
    - tools: List[BaseTool], available tools for the agent
    - llm: LLM, language model for reasoning and planning
    - memory: Optional[BaseMemory], memory system for conversation history
    - max_iterations: int, maximum number of reasoning iterations
    - react_chat_formatter: Optional[ReActChatFormatter], formatter for ReAct messages
    - output_parser: Optional[ReActOutputParser], parser for ReAct output
    - callback_manager: Optional[CallbackManager], callback management system
    - verbose: bool, whether to enable verbose logging
    """
    def __init__(
        self,
        tools: List[BaseTool],
        llm: LLM,
        memory: Optional[BaseMemory] = None,
        max_iterations: int = 10,
        react_chat_formatter: Optional[ReActChatFormatter] = None,
        output_parser: Optional[ReActOutputParser] = None,
        callback_manager: Optional[CallbackManager] = None,
        verbose: bool = False,
        **kwargs
    ): ...

    def reset(self) -> None:
        """Reset agent state and clear memory."""

    def chat(
        self,
        message: str,
        chat_history: Optional[List[ChatMessage]] = None,
        tool_choice: Optional[Union[str, dict]] = None,
        **kwargs
    ) -> AgentChatResponse:
        """
        Execute chat interaction with ReAct reasoning.

        Parameters:
        - message: str, user message or query
        - chat_history: Optional[List[ChatMessage]], conversation history
        - tool_choice: Optional[Union[str, dict]], tool selection preference
        - **kwargs: additional chat parameters

        Returns:
        - AgentChatResponse, agent response with reasoning trace
        """

    def stream_chat(
        self,
        message: str,
        chat_history: Optional[List[ChatMessage]] = None,
        tool_choice: Optional[Union[str, dict]] = None,
        **kwargs
    ) -> StreamingAgentChatResponse:
        """
        Stream chat interaction with ReAct reasoning.

        Parameters:
        - message: str, user message or query
        - chat_history: Optional[List[ChatMessage]], conversation history
        - tool_choice: Optional[Union[str, dict]], tool selection preference
        - **kwargs: additional chat parameters

        Returns:
        - StreamingAgentChatResponse, streaming agent response
        """
```

### Function Calling Agent

Agent implementation optimized for function calling and tool execution with structured reasoning.

```python { .api }
class FunctionAgent:
    """
    Function-calling agent optimized for structured tool usage and execution.

    Function agents excel at using well-defined tools and APIs to accomplish
    tasks through structured function calls and parameter passing.

    Parameters:
    - tools: List[BaseTool], available tools for function calling
    - llm: LLM, language model supporting function calling
    - system_prompt: Optional[str], system prompt for agent behavior
    - max_function_calls: int, maximum function calls per interaction
    - callback_manager: Optional[CallbackManager], callback management
    - verbose: bool, whether to enable verbose logging
    """
    def __init__(
        self,
        tools: List[BaseTool],
        llm: LLM,
        system_prompt: Optional[str] = None,
        max_function_calls: int = 5,
        callback_manager: Optional[CallbackManager] = None,
        verbose: bool = False,
        **kwargs
    ): ...

    def chat(
        self,
        message: str,
        chat_history: Optional[List[ChatMessage]] = None,
        **kwargs
    ) -> AgentChatResponse:
        """Execute function-calling chat interaction."""

    def stream_chat(
        self,
        message: str,
        chat_history: Optional[List[ChatMessage]] = None,
        **kwargs
    ) -> StreamingAgentChatResponse:
        """Stream function-calling chat interaction."""
```

### Code Action Agent

Specialized agent for code generation, execution, and iterative development tasks.

```python { .api }
class CodeActAgent:
    """
    Code action agent for code generation, execution, and debugging.

    CodeAct agents can write, execute, and iterate on code to solve
    programming tasks and data analysis problems.

    Parameters:
    - tools: List[BaseTool], tools including code execution environment
    - llm: LLM, language model for code generation
    - system_prompt: Optional[str], system prompt for coding behavior
    - max_iterations: int, maximum code-action iterations
    - callback_manager: Optional[CallbackManager], callback management
    - verbose: bool, whether to enable verbose logging
    """
    def __init__(
        self,
        tools: List[BaseTool],
        llm: LLM,
        system_prompt: Optional[str] = None,
        max_iterations: int = 10,
        callback_manager: Optional[CallbackManager] = None,
        verbose: bool = False,
        **kwargs
    ): ...
```

### Agent Workflow System

Multi-agent workflow orchestration for complex, distributed problem solving.

```python { .api }
class AgentWorkflow:
    """
    Multi-agent workflow system for orchestrating complex agent interactions.

    Agent workflows coordinate multiple agents, manage state transitions,
    and handle complex multi-step processes requiring different agent capabilities.

    Parameters:
    - agents: Dict[str, BaseAgent], named agents in the workflow
    - workflow_definition: dict, workflow structure and execution flow
    - state_manager: Optional[StateManager], state management system
    - callback_manager: Optional[CallbackManager], callback management
    """
    def __init__(
        self,
        agents: Dict[str, BaseAgent],
        workflow_definition: dict,
        state_manager: Optional[StateManager] = None,
        callback_manager: Optional[CallbackManager] = None,
        **kwargs
    ): ...

    def run(
        self,
        initial_input: Any,
        **kwargs
    ) -> Any:
        """
        Execute multi-agent workflow.

        Parameters:
        - initial_input: Any, initial input to workflow
        - **kwargs: additional workflow parameters

        Returns:
        - Any, workflow execution result
        """
```
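The coordination idea behind `AgentWorkflow.run` can be sketched in a few lines of plain Python. This is a hand-off illustration only, not the library's implementation: `run_workflow`, the `order` plan, and the lambda "agents" are hypothetical stand-ins for a real workflow definition.

```python
from typing import Any, Callable, Dict, List

def run_workflow(agents: Dict[str, Callable[[Any], Any]],
                 order: List[str], initial_input: Any) -> Any:
    """Pass each named agent's output as the next agent's input."""
    data = initial_input
    for name in order:
        data = agents[name](data)  # state transition: output becomes input
    return data

agents = {
    "researcher": lambda query: f"notes on {query}",
    "writer": lambda notes: f"draft based on {notes}",
}
print(run_workflow(agents, ["researcher", "writer"], "agents"))
# draft based on notes on agents
```

A real workflow definition would also encode branching and shared state; a linear hand-off is the simplest case.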

### ReAct Components

Specialized components for ReAct pattern implementation and output processing.

```python { .api }
class ReActOutputParser:
    """
    Parser for ReAct agent output processing and action extraction.

    Parses ReAct-formatted output to extract reasoning steps, actions,
    and observations for agent execution cycles.

    Parameters:
    - output_format: str, expected output format (json, text, structured)
    """
    def __init__(self, output_format: str = "text"): ...

    def parse(self, output: str) -> Dict[str, Any]:
        """
        Parse ReAct output into structured components.

        Parameters:
        - output: str, raw ReAct output from LLM

        Returns:
        - Dict[str, Any], parsed components (thought, action, observation)
        """

    def format_action(self, action: str, action_input: str) -> str:
        """
        Format action for ReAct execution.

        Parameters:
        - action: str, action name
        - action_input: str, action parameters

        Returns:
        - str, formatted action string
        """

class ReActChatFormatter:
    """
    Formatter for ReAct chat messages and conversation structure.

    Formats chat messages to support ReAct reasoning pattern with
    proper structure for thought, action, and observation cycles.

    Parameters:
    - system_prompt: Optional[str], system prompt for ReAct behavior
    - context_separator: str, separator for context sections
    """
    def __init__(
        self,
        system_prompt: Optional[str] = None,
        context_separator: str = "\n\n"
    ): ...

    def format(
        self,
        tools: List[BaseTool],
        chat_history: List[ChatMessage],
        current_reasoning: Optional[str] = None
    ) -> List[ChatMessage]:
        """
        Format messages for ReAct interaction.

        Parameters:
        - tools: List[BaseTool], available tools
        - chat_history: List[ChatMessage], conversation history
        - current_reasoning: Optional[str], current reasoning trace

        Returns:
        - List[ChatMessage], formatted messages for ReAct
        """
```
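To show what a parser like `ReActOutputParser.parse` has to do, here is a minimal regex-based sketch over the conventional `Thought:`/`Action:`/`Action Input:` line format. The function `parse_react_output` is an illustration, not the library's actual parser, and real ReAct output formats vary by prompt.

```python
import re
from typing import Any, Dict

def parse_react_output(output: str) -> Dict[str, Any]:
    """Extract thought, action, and action input from ReAct-style text."""
    thought = re.search(r"Thought:\s*(.*)", output)
    action = re.search(r"Action:\s*(.*)", output)
    action_input = re.search(r"Action Input:\s*(.*)", output)
    return {
        "thought": thought.group(1).strip() if thought else None,
        "action": action.group(1).strip() if action else None,
        "action_input": action_input.group(1).strip() if action_input else None,
    }

step = parse_react_output(
    "Thought: I need to multiply the numbers.\n"
    "Action: multiply\n"
    'Action Input: {"a": 3, "b": 4}'
)
print(step["action"])  # multiply
```

Production parsers also have to handle final answers, malformed output, and multi-line fields, which a line-oriented regex glosses over.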

### Agent Response Types

Response structures for agent interactions with comprehensive metadata and source tracking.

```python { .api }
class AgentChatResponse:
    """
    Response from agent chat interaction with reasoning trace and metadata.

    Parameters:
    - response: str, main response text
    - sources: Optional[List[ToolOutput]], tools and sources used
    - source_nodes: Optional[List[NodeWithScore]], retrieved source nodes
    - reasoning: Optional[str], agent reasoning trace
    - tool_calls: Optional[List[ToolCall]], tool calls made by agent
    """
    def __init__(
        self,
        response: str,
        sources: Optional[List[ToolOutput]] = None,
        source_nodes: Optional[List[NodeWithScore]] = None,
        reasoning: Optional[str] = None,
        tool_calls: Optional[List[ToolCall]] = None,
        **kwargs
    ): ...

    @property
    def response_txt(self) -> str:
        """Get response text."""

    def __str__(self) -> str:
        """String representation of response."""

class StreamingAgentChatResponse:
    """
    Streaming response from agent chat interaction.

    Parameters:
    - response_gen: Iterator[str], response text generator
    - sources: Optional[List[ToolOutput]], tools and sources used
    - source_nodes: Optional[List[NodeWithScore]], retrieved source nodes
    - reasoning: Optional[str], agent reasoning trace
    """
    def __init__(
        self,
        response_gen: Iterator[str],
        sources: Optional[List[ToolOutput]] = None,
        source_nodes: Optional[List[NodeWithScore]] = None,
        reasoning: Optional[str] = None,
        **kwargs
    ): ...

    @property
    def response_gen(self) -> Iterator[str]:
        """Get response generator."""
```

### Agent Event System

Event-driven system for agent communication and workflow coordination.

```python { .api }
class AgentInput:
    """
    Agent input event for workflow communication.

    Parameters:
    - input_data: Any, input data for agent processing
    - metadata: Optional[dict], additional input metadata
    """
    def __init__(
        self,
        input_data: Any,
        metadata: Optional[dict] = None
    ): ...

class AgentOutput:
    """
    Agent output event for workflow results.

    Parameters:
    - output_data: Any, output data from agent processing
    - metadata: Optional[dict], additional output metadata
    - tool_calls: Optional[List[ToolCall]], tool calls made
    """
    def __init__(
        self,
        output_data: Any,
        metadata: Optional[dict] = None,
        tool_calls: Optional[List[ToolCall]] = None
    ): ...

class AgentStream:
    """
    Agent streaming event for continuous output.

    Parameters:
    - stream_data: Iterator[Any], streaming data
    - metadata: Optional[dict], stream metadata
    """
    def __init__(
        self,
        stream_data: Iterator[Any],
        metadata: Optional[dict] = None
    ): ...

class ToolCall:
    """
    Tool call event representing tool usage by agent.

    Parameters:
    - tool_name: str, name of the tool called
    - tool_input: dict, parameters passed to tool
    - tool_id: Optional[str], unique identifier for tool call
    """
    def __init__(
        self,
        tool_name: str,
        tool_input: dict,
        tool_id: Optional[str] = None
    ): ...

class ToolCallResult:
    """
    Result of tool call execution.

    Parameters:
    - tool_call: ToolCall, original tool call
    - result: Any, tool execution result
    - error: Optional[str], error message if tool call failed
    """
    def __init__(
        self,
        tool_call: ToolCall,
        result: Any,
        error: Optional[str] = None
    ): ...
```
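The `ToolCall`/`ToolCallResult` pairing can be demonstrated with small dataclasses: execute a call against a tool registry and capture either the result or the error, so the agent loop never sees a raised exception. The names `ToolCallEvent`, `ToolCallResultEvent`, and `execute` are illustrative, not library API.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, Optional

@dataclass
class ToolCallEvent:
    tool_name: str
    tool_input: dict

@dataclass
class ToolCallResultEvent:
    call: ToolCallEvent
    result: Any = None
    error: Optional[str] = None

def execute(call: ToolCallEvent, registry: Dict[str, Callable]) -> ToolCallResultEvent:
    """Run a tool call, capturing either its result or the error message."""
    try:
        return ToolCallResultEvent(call, result=registry[call.tool_name](**call.tool_input))
    except Exception as exc:
        return ToolCallResultEvent(call, error=str(exc))

registry = {"divide": lambda a, b: a / b}
ok = execute(ToolCallEvent("divide", {"a": 6, "b": 3}), registry)
bad = execute(ToolCallEvent("divide", {"a": 1, "b": 0}), registry)
print(ok.result)   # 2.0
print(bad.error)   # division by zero
```

Returning the error as data (rather than raising) is what lets an agent observe a failed tool call and retry with different input.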

### Base Tool Interface

Foundation interface for all tool implementations with standardized execution methods.

```python { .api }
class BaseTool:
    """
    Base interface for tool implementations.

    Tools provide specific capabilities that agents can use to accomplish tasks,
    from simple functions to complex API integrations.

    Parameters:
    - metadata: ToolMetadata, tool description and parameter schema
    - fn: Optional[Callable], function implementation for the tool
    """
    def __init__(
        self,
        metadata: ToolMetadata,
        fn: Optional[Callable] = None,
        **kwargs
    ): ...

    def call(self, input: Any, **kwargs) -> ToolOutput:
        """
        Execute tool with input parameters.

        Parameters:
        - input: Any, input data for tool execution
        - **kwargs: additional tool parameters

        Returns:
        - ToolOutput, tool execution result
        """

    def __call__(self, input: Any, **kwargs) -> ToolOutput:
        """Callable interface for tool execution."""

    @property
    def metadata(self) -> ToolMetadata:
        """Get tool metadata and description."""

class AsyncBaseTool(BaseTool):
    """
    Base interface for asynchronous tool implementations.
    """
    async def acall(self, input: Any, **kwargs) -> ToolOutput:
        """
        Asynchronously execute tool with input parameters.

        Parameters:
        - input: Any, input data for tool execution
        - **kwargs: additional tool parameters

        Returns:
        - ToolOutput, tool execution result
        """
```

### Function-Based Tools

Tools that wrap Python functions for use by agents with automatic parameter handling.

```python { .api }
class FunctionTool(BaseTool):
    """
    Tool wrapper for Python functions with automatic parameter handling.

    Parameters:
    - fn: Callable, Python function to wrap as tool
    - metadata: Optional[ToolMetadata], tool metadata (auto-generated if None)
    - async_fn: Optional[Callable], async version of function
    """
    def __init__(
        self,
        fn: Callable,
        metadata: Optional[ToolMetadata] = None,
        async_fn: Optional[Callable] = None,
        **kwargs
    ): ...

    @classmethod
    def from_defaults(
        cls,
        fn: Callable,
        name: Optional[str] = None,
        description: Optional[str] = None,
        return_direct: bool = False,
        async_fn: Optional[Callable] = None,
        **kwargs
    ) -> "FunctionTool":
        """
        Create FunctionTool with default metadata generation.

        Parameters:
        - fn: Callable, function to wrap
        - name: Optional[str], tool name (defaults to function name)
        - description: Optional[str], tool description (from docstring)
        - return_direct: bool, whether to return result directly
        - async_fn: Optional[Callable], async version of function

        Returns:
        - FunctionTool, configured function tool
        """
```
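The "auto-generated if None" behavior of `from_defaults` can be approximated with `inspect`: take the name from the function, and build a description from its signature and docstring. `SimpleToolMetadata` and `metadata_from_function` are illustrative stand-ins, assuming roughly this derivation; the library's actual metadata generation may differ.

```python
import inspect
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class SimpleToolMetadata:
    name: str
    description: str

def metadata_from_function(fn: Callable,
                           name: Optional[str] = None,
                           description: Optional[str] = None) -> SimpleToolMetadata:
    """Derive tool metadata from a function's name, signature, and docstring."""
    tool_name = name or fn.__name__
    signature = str(inspect.signature(fn))       # e.g. "(a: int, b: int) -> int"
    doc = inspect.getdoc(fn) or ""
    return SimpleToolMetadata(
        name=tool_name,
        description=description or f"{tool_name}{signature}\n{doc}".strip(),
    )

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

meta = metadata_from_function(multiply)
print(meta.name)  # multiply
print(meta.description)
```

Putting the signature and docstring in the description matters because that text is what the LLM sees when deciding whether and how to call the tool.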

### Query Engine Tools

Tools that integrate query engines for information retrieval and question answering.

```python { .api }
class QueryEngineTool(BaseTool):
    """
    Tool wrapper for query engines to enable agent-based information retrieval.

    Parameters:
    - query_engine: BaseQueryEngine, query engine for information retrieval
    - metadata: ToolMetadata, tool description and usage information
    """
    def __init__(
        self,
        query_engine: BaseQueryEngine,
        metadata: ToolMetadata,
        **kwargs
    ): ...

    @classmethod
    def from_defaults(
        cls,
        query_engine: BaseQueryEngine,
        name: Optional[str] = None,
        description: Optional[str] = None,
        **kwargs
    ) -> "QueryEngineTool":
        """Create QueryEngineTool with default metadata."""
```

### Retriever Tools

Tools that wrap retrievers for agent-based information gathering and context retrieval.

```python { .api }
class RetrieverTool(BaseTool):
    """
    Tool wrapper for retrievers to enable agent-based information gathering.

    Parameters:
    - retriever: BaseRetriever, retriever for information gathering
    - metadata: ToolMetadata, tool description and usage information
    """
    def __init__(
        self,
        retriever: BaseRetriever,
        metadata: ToolMetadata,
        **kwargs
    ): ...

    @classmethod
    def from_defaults(
        cls,
        retriever: BaseRetriever,
        name: Optional[str] = None,
        description: Optional[str] = None,
        **kwargs
    ) -> "RetrieverTool":
        """Create RetrieverTool with default metadata."""
```

### Query Planning Tools

Advanced tools for query planning and multi-step information retrieval strategies.

```python { .api }
class QueryPlanTool(BaseTool):
    """
    Tool for query planning and multi-step information retrieval.

    Query plan tools break down complex queries into manageable steps
    and coordinate multiple retrieval operations.

    Parameters:
    - query_engine: BaseQueryEngine, query engine for execution
    - metadata: ToolMetadata, tool metadata and description
    """
    def __init__(
        self,
        query_engine: BaseQueryEngine,
        metadata: ToolMetadata,
        **kwargs
    ): ...
```

### Tool Metadata & Configuration

Metadata structures for describing tool capabilities, parameters, and usage patterns.

```python { .api }
class ToolMetadata:
    """
    Metadata describing tool capabilities and interface.

    Parameters:
    - name: str, tool name identifier
    - description: str, human-readable tool description
    - fn_schema: Optional[Type[BaseModel]], Pydantic schema for parameters
    - return_direct: bool, whether tool output should be returned directly
    """
    def __init__(
        self,
        name: str,
        description: str,
        fn_schema: Optional[Type[BaseModel]] = None,
        return_direct: bool = False,
        **kwargs
    ): ...

    def to_openai_tool(self) -> dict:
        """Convert to OpenAI tool format."""

    def get_parameters_dict(self) -> dict:
        """Get parameter schema as dictionary."""

class ToolOutput:
    """
    Output from tool execution with content and metadata.

    Parameters:
    - content: str, main output content
    - tool_name: str, name of tool that generated output
    - raw_input: Optional[dict], raw input parameters
    - raw_output: Optional[Any], raw output data
    - is_error: bool, whether output represents an error
    """
    def __init__(
        self,
        content: str,
        tool_name: str,
        raw_input: Optional[dict] = None,
        raw_output: Optional[Any] = None,
        is_error: bool = False,
        **kwargs
    ): ...

    def __str__(self) -> str:
        """String representation of tool output."""
```
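As a rough illustration of what `to_openai_tool` produces, the widely used OpenAI tool shape is a `{"type": "function", "function": {...}}` dict whose `parameters` field is a JSON Schema object. The helper below is a standalone sketch with explicit arguments, not the library method itself, and the exact dict the library emits may differ in detail.

```python
import json

def to_openai_tool(name: str, description: str, parameters: dict) -> dict:
    """Build an OpenAI-style tool definition from a name, description, and JSON Schema."""
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": parameters,  # JSON Schema for the function's arguments
        },
    }

tool = to_openai_tool(
    "multiply",
    "Multiply two integers.",
    {
        "type": "object",
        "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
        "required": ["a", "b"],
    },
)
print(json.dumps(tool, indent=2))
```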

### Tool Selection & Execution

Utilities for tool selection, execution, and result processing in agent workflows.

```python { .api }
class ToolSelection:
    """
    Tool selection with parameters for agent execution.

    Parameters:
    - tool_id: str, identifier of selected tool
    - tool_name: str, name of selected tool
    - tool_kwargs: dict, parameters for tool execution
    """
    def __init__(
        self,
        tool_id: str,
        tool_name: str,
        tool_kwargs: dict,
        **kwargs
    ): ...

def call_tool_with_selection(
    tools: List[BaseTool],
    tool_selection: ToolSelection,
    **kwargs
) -> ToolOutput:
    """
    Execute tool based on selection.

    Parameters:
    - tools: List[BaseTool], available tools
    - tool_selection: ToolSelection, selected tool and parameters
    - **kwargs: additional execution parameters

    Returns:
    - ToolOutput, tool execution result
    """

async def acall_tool_with_selection(
    tools: List[BaseTool],
    tool_selection: ToolSelection,
    **kwargs
) -> ToolOutput:
    """Asynchronously execute tool based on selection."""

def adapt_to_async_tool(tool: BaseTool) -> AsyncBaseTool:
    """
    Convert synchronous tool to asynchronous interface.

    Parameters:
    - tool: BaseTool, synchronous tool to convert

    Returns:
    - AsyncBaseTool, asynchronous tool wrapper
    """

def download_tool(tool_name: str, **kwargs) -> BaseTool:
    """
    Download tool from LlamaHub.

    Parameters:
    - tool_name: str, name of tool to download
    - **kwargs: additional download parameters

    Returns:
    - BaseTool, downloaded and configured tool
    """
```
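The dispatch-by-name pattern behind `call_tool_with_selection`, and the sync-to-async adaptation behind `adapt_to_async_tool`, can both be sketched in plain Python. This is a simplified stand-in (a dict of callables instead of `BaseTool` objects); the `asyncio.to_thread` trick is one common way to wrap a blocking call for an async interface.

```python
import asyncio
from typing import Any, Callable, Dict

def call_with_selection(tools: Dict[str, Callable],
                        tool_name: str, tool_kwargs: dict) -> Any:
    """Look up a tool by name and invoke it with the selected kwargs."""
    if tool_name not in tools:
        raise KeyError(f"unknown tool: {tool_name}")
    return tools[tool_name](**tool_kwargs)

async def acall_with_selection(tools: Dict[str, Callable],
                               tool_name: str, tool_kwargs: dict) -> Any:
    """Run a synchronous tool in a worker thread to expose an async interface."""
    return await asyncio.to_thread(call_with_selection, tools, tool_name, tool_kwargs)

tools = {"add": lambda a, b: a + b}
print(call_with_selection(tools, "add", {"a": 2, "b": 3}))                # 5
print(asyncio.run(acall_with_selection(tools, "add", {"a": 2, "b": 3})))  # 5
```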

## Usage Examples

### Basic ReAct Agent

```python
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.core.llms import MockLLM

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

# Create function tools
multiply_tool = FunctionTool.from_defaults(fn=multiply)
add_tool = FunctionTool.from_defaults(fn=add)

# Initialize ReAct agent
llm = MockLLM()
agent = ReActAgent.from_tools(
    tools=[multiply_tool, add_tool],
    llm=llm,
    verbose=True
)

# Use agent for multi-step reasoning
response = agent.chat("What is (3 * 4) + 5?")
print(f"Agent response: {response.response}")
print(f"Reasoning: {response.reasoning}")
```

### Query Engine Tool Integration

```python
from llama_index.core.tools import QueryEngineTool
from llama_index.core import VectorStoreIndex, Document

# Create knowledge base
documents = [
    Document(text="Machine learning is a subset of artificial intelligence that focuses on algorithms."),
    Document(text="Deep learning uses neural networks with multiple layers for complex pattern recognition."),
    Document(text="Natural language processing enables computers to understand and generate human language.")
]

index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# Create query engine tool
knowledge_tool = QueryEngineTool.from_defaults(
    query_engine=query_engine,
    name="knowledge_search",
    description="Search the knowledge base for information about AI and machine learning topics."
)

# Add to agent
agent = ReActAgent.from_tools(
    tools=[knowledge_tool, multiply_tool, add_tool],
    llm=llm,
    verbose=True
)

# Use agent with knowledge retrieval
response = agent.chat("What is the relationship between machine learning and AI?")
print(f"Knowledge-based response: {response.response}")
```

### Custom Function Tool

```python
import json

def analyze_data(data: str, analysis_type: str = "summary") -> str:
    """
    Analyze JSON data and return insights.

    Args:
        data: JSON string containing data to analyze
        analysis_type: Type of analysis ('summary', 'statistics', 'trends')
    """
    try:
        parsed_data = json.loads(data)

        if analysis_type == "summary":
            return f"Data contains {len(parsed_data)} items with keys: {list(parsed_data.keys()) if isinstance(parsed_data, dict) else 'list format'}"
        elif analysis_type == "statistics":
            if isinstance(parsed_data, list) and all(isinstance(x, (int, float)) for x in parsed_data):
                avg = sum(parsed_data) / len(parsed_data)
                return f"Average: {avg:.2f}, Min: {min(parsed_data)}, Max: {max(parsed_data)}"
            else:
                return "Statistics not available for this data type"
        else:
            return f"Analysis type '{analysis_type}' completed"

    except json.JSONDecodeError:
        return "Invalid JSON data provided"

# Create custom tool
analysis_tool = FunctionTool.from_defaults(
    fn=analyze_data,
    name="data_analyzer",
    description="Analyze JSON data and provide insights including summaries and statistics"
)

# Use in agent
data_agent = ReActAgent.from_tools(
    tools=[analysis_tool],
    llm=llm,
    verbose=True
)

# Analyze data
sample_data = json.dumps([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
response = data_agent.chat(f"Please analyze this data: {sample_data}")
print(response.response)
```

### Multi-Tool Agent Workflow

```python
from typing import Optional

from llama_index.core.tools import RetrieverTool

# Create retriever tool
retriever = index.as_retriever(similarity_top_k=3)
retriever_tool = RetrieverTool.from_defaults(
    retriever=retriever,
    name="document_retriever",
    description="Retrieve relevant documents from the knowledge base"
)

def calculate_score(factors: list, weights: Optional[list] = None) -> float:
    """Calculate weighted score from factors."""
    if weights is None:
        weights = [1.0] * len(factors)

    if len(factors) != len(weights):
        return 0.0

    total = sum(f * w for f, w in zip(factors, weights))
    return total / sum(weights)

score_tool = FunctionTool.from_defaults(fn=calculate_score)

# Comprehensive agent with multiple tools
comprehensive_agent = ReActAgent.from_tools(
    tools=[
        knowledge_tool,
        retriever_tool,
        analysis_tool,
        score_tool,
        multiply_tool,
        add_tool
    ],
    llm=llm,
    max_iterations=15,
    verbose=True
)

# Complex multi-step task
response = comprehensive_agent.chat(
    "First, search for information about machine learning. "
    "Then calculate a relevance score using factors [0.8, 0.9, 0.7] with equal weights. "
    "Finally, multiply the score by 100 to get a percentage."
)

print(f"Multi-step result: {response.response}")
print(f"Tools used: {[source.tool_name for source in response.sources or []]}")
```

### Streaming Agent Interaction

```python
# Stream agent responses for real-time interaction
def stream_agent_response():
    streaming_response = agent.stream_chat("Explain the process of neural network training step by step")

    print("Streaming agent response:")
    for chunk in streaming_response.response_gen:
        print(chunk, end="", flush=True)
    print("\n\nStreaming complete.")

# stream_agent_response()
```

### Agent with Memory

```python
from llama_index.core.memory import ChatMemoryBuffer

# Create agent with memory
memory = ChatMemoryBuffer.from_defaults(token_limit=2000)

memory_agent = ReActAgent.from_tools(
    tools=[knowledge_tool, multiply_tool],
    llm=llm,
    memory=memory,
    verbose=True
)

# Multi-turn conversation with context
print("=== Turn 1 ===")
response1 = memory_agent.chat("What is machine learning?")
print(response1.response)

print("\n=== Turn 2 ===")
response2 = memory_agent.chat("Can you give me a specific example?")
print(response2.response)

print("\n=== Turn 3 ===")
response3 = memory_agent.chat("How does it relate to what we discussed earlier?")
print(response3.response)
```

### Function Agent Example

```python
from llama_index.core.agent import FunctionAgent

# Function calling agent
function_agent = FunctionAgent.from_tools(
    tools=[multiply_tool, add_tool, analysis_tool],
    llm=llm,  # Should be a function-calling capable LLM
    system_prompt="You are a helpful assistant that uses functions to solve problems step by step.",
    max_function_calls=10,
    verbose=True
)

response = function_agent.chat("Calculate (5 * 3) + (2 * 4), then analyze the result")
print(f"Function agent response: {response.response}")
```

### Tool Error Handling

```python
def risky_operation(value: int) -> str:
    """Operation that might fail."""
    if value < 0:
        raise ValueError("Value must be non-negative")
    return f"Success: processed value {value}"

risky_tool = FunctionTool.from_defaults(fn=risky_operation)

error_handling_agent = ReActAgent.from_tools(
    tools=[risky_tool, multiply_tool],
    llm=llm,
    verbose=True
)

# Test error handling
response = error_handling_agent.chat("Please process the value -5 using the risky operation")
print(f"Error handling result: {response.response}")
```

## Configuration & Types

```python { .api }
# Agent configuration
class AgentType(str, Enum):
    REACT = "react"
    FUNCTION = "function"
    CODE_ACT = "code_act"
    WORKFLOW = "workflow"

# Tool configuration
DEFAULT_MAX_FUNCTION_CALLS = 5
DEFAULT_MAX_ITERATIONS = 10
DEFAULT_VERBOSE = False

# Response streaming
ResponseGen = Iterator[str]  # generator type consumed by StreamingAgentChatResponse

# Memory types
BaseMemory = Any  # base memory interface

# Agent state management
class AgentState:
    """Agent state for workflow coordination."""
    pass
```