# Tools and Toolboxes

A function calling system with automatic schema generation, tool chaining, and error handling. This module enables Large Language Models to execute Python functions, access external systems, and perform complex multi-step operations through a structured tool interface.

## Capabilities

### Tool Creation and Management

Tools wrap Python functions to make them callable by LLMs with automatic schema generation and parameter validation.

```python { .api }
class Tool:
    """Function wrapper for LLM tool calls with schema generation."""

    def __init__(
        self,
        function: Callable,
        name: Optional[str] = None,
        description: Optional[str] = None,
        parameters: Optional[dict] = None
    ):
        """
        Initialize tool.

        Args:
            function: Python function to wrap
            name: Tool name (defaults to function name)
            description: Tool description (from docstring if not provided)
            parameters: JSON schema for parameters (auto-generated if not provided)
        """

    @classmethod
    def function(
        cls,
        function: Callable,
        name: Optional[str] = None,
        description: Optional[str] = None
    ) -> "Tool":
        """
        Create tool from function with automatic schema generation.

        Args:
            function: Python function to wrap
            name: Optional tool name override
            description: Optional description override

        Returns:
            Tool instance
        """

    def hash(self) -> str:
        """Generate stable hash for tool caching."""

    name: str
    description: str
    parameters: dict
    function: Callable
    plugin: Optional[str] = None

def get_tools() -> Dict[str, Union[Tool, Type[Toolbox]]]:
    """Get all registered tools and toolboxes from plugins."""
```
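
To illustrate what automatic schema generation involves, here is a minimal standalone sketch that derives a JSON-schema-style parameter description from a function's signature and type hints. This is an illustration of the mechanism, not the library's actual implementation; the `schema_for` helper and its type mapping are hypothetical.

```python
import inspect

# Illustrative sketch (not llm's actual implementation): derive a
# JSON-schema-like dict for a function's parameters from its type hints.
TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean"}

def schema_for(function):
    """Build a JSON-schema-style dict from a function signature."""
    signature = inspect.signature(function)
    properties = {}
    required = []
    for name, param in signature.parameters.items():
        hint = param.annotation
        properties[name] = {"type": TYPE_MAP.get(hint, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default value => required parameter
    return {"type": "object", "properties": properties, "required": required}

def calculate_area(length: float, width: float) -> float:
    """Calculate the area of a rectangle."""
    return length * width

print(schema_for(calculate_area))
# {'type': 'object',
#  'properties': {'length': {'type': 'number'}, 'width': {'type': 'number'}},
#  'required': ['length', 'width']}
```

Parameters with defaults would simply be omitted from `required`, which is why `Tool.function` can accept ordinary Python functions without any extra annotation.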

### Toolbox Collections

Toolboxes organize related tools into cohesive collections with shared setup and state management.

```python { .api }
class Toolbox:
    """Base class for collections of related tools."""

    def tools(self) -> Iterable[Tool]:
        """Return all tools in this toolbox."""

    def add_tool(
        self,
        tool_or_function: Union[Tool, Callable],
        pass_self: bool = False
    ):
        """
        Add a tool to this toolbox.

        Args:
            tool_or_function: Tool instance or function to wrap
            pass_self: Whether to pass toolbox instance to function
        """

    def prepare(self):
        """Setup method called before toolbox use."""

    async def prepare_async(self):
        """Async setup method called before toolbox use."""

    name: Optional[str] = None
    plugin: Optional[str] = None
```
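
The value of the toolbox pattern is that every tool is a bound method, so all tools share the instance's state. A rough standalone sketch of that idea (plain Python, not the library's `Toolbox` class; `CounterToolbox` is a made-up example):

```python
# Standalone sketch of the toolbox pattern: bound methods carry `self`,
# so every tool in the collection operates on the same shared state.
class CounterToolbox:
    def __init__(self):
        self.count = 0

    def increment(self, amount: int = 1) -> int:
        """Increase the shared counter."""
        self.count += amount
        return self.count

    def reset(self) -> int:
        """Reset the shared counter to zero."""
        self.count = 0
        return self.count

    def tools(self):
        # Each entry closes over the same instance
        return [self.increment, self.reset]

box = CounterToolbox()
inc, reset = box.tools()
inc(2)
inc()
print(box.count)  # 3
```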

### Tool Execution System

The tool execution system handles function calls, parameter validation, and result processing.

```python { .api }
class ToolCall:
    """Represents a tool function call request from an LLM."""

    def __init__(self, function: str, arguments: dict, id: Optional[str] = None):
        """
        Initialize tool call.

        Args:
            function: Tool function name
            arguments: Function arguments as dict
            id: Optional call ID for tracking
        """

    function: str
    arguments: dict
    id: Optional[str]

class ToolResult:
    """Result of executing a tool call."""

    def __init__(
        self,
        call: ToolCall,
        output: Any,
        error: Optional[str] = None,
        attachments: Optional[List[Attachment]] = None
    ):
        """
        Initialize tool result.

        Args:
            call: Original tool call
            output: Function return value
            error: Error message if execution failed
            attachments: Optional result attachments
        """

    call: ToolCall
    output: Any
    error: Optional[str] = None
    attachments: Optional[List[Attachment]] = None

class ToolOutput:
    """Tool return value with optional attachments."""

    def __init__(self, content: Any, attachments: Optional[List[Attachment]] = None):
        """
        Initialize tool output.

        Args:
            content: Return value content
            attachments: Optional file/media attachments
        """

    content: Any
    attachments: Optional[List[Attachment]] = None
```
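
The flow from a call request to a result can be sketched in plain Python. The `Call`, `Result`, and `execute` names below are illustrative stand-ins for the mechanism, not the library's actual classes: look up the named function in a registry, run it with the supplied arguments, and wrap either the return value or the error.

```python
from dataclasses import dataclass
from typing import Any, Optional

# Illustrative stand-ins for the call/result flow (not llm's classes)
@dataclass
class Call:
    function: str
    arguments: dict
    id: Optional[str] = None

@dataclass
class Result:
    call: Call
    output: Any
    error: Optional[str] = None

def execute(call: Call, registry: dict) -> Result:
    """Look up the named function, run it, and wrap the outcome."""
    try:
        function = registry[call.function]
        return Result(call=call, output=function(**call.arguments))
    except Exception as exc:
        # Execution errors become an error string the LLM can read
        return Result(call=call, output=None, error=str(exc))

registry = {"add": lambda a, b: a + b}
ok = execute(Call("add", {"a": 2, "b": 3}), registry)
print(ok.output)  # 5
bad = execute(Call("missing", {}), registry)
print(bad.error is not None)  # True
```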

### Tool Error Handling

Special exception for controlling tool execution flow.

```python { .api }
class CancelToolCall(Exception):
    """Exception to cancel tool execution and return message to LLM."""

    def __init__(self, message: str):
        """
        Initialize cancellation.

        Args:
            message: Message to return to LLM explaining cancellation
        """
        self.message = message
```
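
One plausible way an executor distinguishes cancellation from failure, as a hedged standalone sketch (local stand-in classes, not the library's actual handling):

```python
# Standalone sketch: a cancellation exception is not a failure --
# its message becomes the output returned to the model, while any
# other exception is reported as an error.
class CancelToolCall(Exception):
    def __init__(self, message: str):
        super().__init__(message)
        self.message = message

def run_tool(function, **arguments):
    try:
        return {"output": function(**arguments), "error": None}
    except CancelToolCall as cancel:
        # Cancellation: the explanatory message is the visible output
        return {"output": cancel.message, "error": None}
    except Exception as exc:
        # Ordinary exceptions surface as errors
        return {"output": None, "error": str(exc)}

def guarded(action: str) -> str:
    if action == "delete":
        raise CancelToolCall("Refusing to delete.")
    return f"did {action}"

print(run_tool(guarded, action="delete")["output"])  # Refusing to delete.
print(run_tool(guarded, action="list")["output"])    # did list
```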

### Built-in Tools

The package includes several built-in utility tools.

```python { .api }
def llm_version() -> str:
    """Get installed LLM package version."""

def llm_time() -> dict:
    """
    Get current time information.

    Returns:
        dict: Contains 'utc' and 'local' time strings
    """
```

## Usage Examples

### Simple Function Tool

```python
import llm

def calculate_area(length: float, width: float) -> float:
    """Calculate the area of a rectangle."""
    return length * width

# Create tool from function
area_tool = llm.Tool.function(calculate_area)

# Use with model
model = llm.get_model("gpt-4")
response = model.prompt(
    "What's the area of a rectangle that is 5.5 meters long and 3.2 meters wide?",
    tools=[area_tool]
)
print(response.text())
```

### Tool with Type Hints

```python
import llm
from typing import Any, Dict, List, Optional

def search_database(
    query: str,
    table: str,
    limit: Optional[int] = 10,
    filters: Optional[Dict[str, str]] = None
) -> List[Dict[str, Any]]:
    """
    Search database table with optional filters.

    Args:
        query: Search query string
        table: Database table name
        limit: Maximum number of results
        filters: Optional column filters

    Returns:
        List of matching records
    """
    # Implementation would query actual database
    return [{"id": 1, "name": "Example", "query": query, "table": table}]

# Tool automatically generates schema from type hints
search_tool = llm.Tool.function(search_database)

model = llm.get_model()
response = model.prompt(
    "Find all users named John in the customers table",
    tools=[search_tool]
)
print(response.text())
```

### Tool with Attachments

```python
import llm
import io
from typing import List

import matplotlib.pyplot as plt

def create_chart(data: List[float], title: str = "Chart") -> llm.ToolOutput:
    """Create a chart from data and return as image attachment."""
    plt.figure(figsize=(10, 6))
    plt.plot(data)
    plt.title(title)
    plt.grid(True)

    # Save to bytes
    buffer = io.BytesIO()
    plt.savefig(buffer, format='png')
    buffer.seek(0)
    plt.close()

    # Return with attachment
    attachment = llm.Attachment(content=buffer.getvalue(), type="image/png")
    return llm.ToolOutput(
        content=f"Created chart '{title}' with {len(data)} data points",
        attachments=[attachment]
    )

chart_tool = llm.Tool.function(create_chart)

model = llm.get_model("gpt-4-vision")
response = model.prompt(
    "Create a chart showing the values [1, 4, 2, 8, 5, 7] with title 'Sample Data'",
    tools=[chart_tool]
)
print(response.text())
```

### Error Handling with CancelToolCall

```python
import llm

def dangerous_operation(action: str) -> str:
    """Perform a potentially dangerous system operation."""
    dangerous_actions = ["delete", "format", "shutdown", "rm -rf"]

    if any(dangerous in action.lower() for dangerous in dangerous_actions):
        raise llm.CancelToolCall(
            f"Cannot perform dangerous action: {action}. "
            "Please choose a safer alternative."
        )

    return f"Performed safe action: {action}"

safety_tool = llm.Tool.function(dangerous_operation)

model = llm.get_model()
response = model.prompt(
    "Please delete all files in /important/data",
    tools=[safety_tool]
)
print(response.text())  # Will explain why the action was cancelled
```

### Toolbox Example

```python
import llm
import os
from typing import List

class FileToolbox(llm.Toolbox):
    """Collection of file system tools."""

    def __init__(self, base_path: str = "."):
        self.base_path = base_path

    def prepare(self):
        """Ensure base path exists."""
        os.makedirs(self.base_path, exist_ok=True)

    def tools(self):
        return [
            llm.Tool.function(self.read_file),
            llm.Tool.function(self.write_file),
            llm.Tool.function(self.list_files),
        ]

    def read_file(self, filename: str) -> str:
        """Read contents of a file."""
        path = os.path.join(self.base_path, filename)
        try:
            with open(path, 'r') as f:
                return f.read()
        except FileNotFoundError:
            raise llm.CancelToolCall(f"File not found: {filename}")

    def write_file(self, filename: str, content: str) -> str:
        """Write content to a file."""
        path = os.path.join(self.base_path, filename)
        with open(path, 'w') as f:
            f.write(content)
        return f"Wrote {len(content)} characters to {filename}"

    def list_files(self) -> List[str]:
        """List files in the base directory."""
        return os.listdir(self.base_path)

# Use toolbox
file_tools = FileToolbox("/safe/directory")
tools = list(file_tools.tools())

model = llm.get_model()
response = model.prompt(
    "List the files, then read config.json if it exists",
    tools=tools
)
print(response.text())
```

### Tool Chaining

```python
import llm
import requests

def fetch_url(url: str) -> str:
    """Fetch content from a URL."""
    response = requests.get(url)
    response.raise_for_status()
    return response.text[:1000]  # Truncate for brevity

def analyze_text(text: str) -> dict:
    """Analyze text and return word count, sentence count, etc."""
    words = len(text.split())
    sentences = text.count('.') + text.count('!') + text.count('?')
    return {
        "word_count": words,
        "sentence_count": sentences,
        "character_count": len(text)
    }

fetch_tool = llm.Tool.function(fetch_url)
analyze_tool = llm.Tool.function(analyze_text)

model = llm.get_model("gpt-4")
response = model.prompt(
    "Fetch the content from https://example.com and analyze its text statistics",
    tools=[fetch_tool, analyze_tool]
)
print(response.text())
```
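
Under the hood, chaining amounts to a loop: execute each call the model requests, feed the result back, and repeat until the model stops requesting tools. A standalone sketch of that loop with a scripted stand-in model (the `fake_model` function and tuple-based transcript are made up for illustration, not the library's real loop):

```python
# Standalone sketch of a tool-chaining loop with a scripted fake "model"
def fake_model(history):
    """Stand-in model: requests fetch, then analyze, then answers."""
    if not any(m[0] == "result" and m[1] == "fetch" for m in history):
        return ("call", "fetch", {})
    if not any(m[0] == "result" and m[1] == "analyze" for m in history):
        return ("call", "analyze", {})
    return ("answer", "done", {})

tools = {
    "fetch": lambda: "hello world.",
    "analyze": lambda: {"word_count": 2},
}

history = []
while True:
    kind, name, args = fake_model(history)
    if kind == "answer":
        break
    # Execute the requested tool and append its result to the transcript
    history.append(("result", name, tools[name](**args)))

print([entry[1] for entry in history])  # ['fetch', 'analyze']
```

A real implementation also passes tool results back as structured messages and caps the number of rounds, but the control flow is the same.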

### Async Tool Operations

```python
import llm
import asyncio
import aiohttp

async def async_fetch(url: str) -> str:
    """Asynchronously fetch content from URL."""
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.text()

# Tools work with both sync and async functions
async_tool = llm.Tool.function(async_fetch)

async def main():
    model = llm.get_async_model("gpt-4")
    response = await model.prompt(
        "Fetch the content from https://httpbin.org/json",
        tools=[async_tool]
    )
    text = await response.text()
    print(text)

asyncio.run(main())
```
436
437
### Tool Registration via Plugin
438
439
```python
440
import llm
441
442
@llm.hookimpl
443
def register_tools(register):
444
"""Register tools via plugin system."""
445
446
def weather_tool(location: str) -> str:
447
"""Get weather for a location."""
448
return f"Weather in {location}: Sunny, 72°F"
449
450
def time_tool() -> str:
451
"""Get current time."""
452
from datetime import datetime
453
return datetime.now().strftime("%Y-%m-%d %H:%M:%S")
454
455
# Register individual tools
456
register(weather_tool, name="get_weather")
457
register(time_tool, name="current_time")
458
459
# Register toolbox
460
class UtilityToolbox(llm.Toolbox):
461
def tools(self):
462
return [
463
llm.Tool.function(weather_tool),
464
llm.Tool.function(time_tool)
465
]
466
467
register(UtilityToolbox, name="utilities")
468
469
# Tools are automatically available after plugin loading
470
all_tools = llm.get_tools()
471
print(f"Available tools: {list(all_tools.keys())}")
472
```

This comprehensive tool system enables LLMs to perform complex operations while maintaining safety through controlled execution environments and proper error handling. The combination of individual tools and organized toolboxes provides flexibility for both simple function calling and complex multi-step workflows.