# Task Execution

Run AI tasks through model adapters, with support for configuration, streaming, and output formatting. Adapters provide a unified interface for executing tasks across different AI providers and models.

## Capabilities

### Adapter Creation

Create adapters for executing tasks with specific models and providers.

```python { .api }
from kiln_ai.adapters import adapter_for_task

def adapter_for_task(
    task,
    model_name: str,
    provider: str | None = None,
    config: dict | None = None
):
    """
    Create an adapter for executing a task with a specific model.

    Parameters:
    - task: Task instance to run
    - model_name (str): Name of the model to use (e.g., "gpt_4o", "claude_3_5_sonnet")
    - provider (str | None): Provider name (e.g., "openai", "anthropic", "groq")
    - config (dict | None): Additional configuration options (temperature, max_tokens, etc.)

    Returns:
    BaseAdapter instance configured for the task
    """
```

### Base Adapter Interface

Abstract base class defining the adapter interface.

```python { .api }
class BaseAdapter:
    """
    Abstract adapter interface for model execution.

    Methods:
    - invoke(): Execute the task and return the complete result
    - stream(): Execute the task with a streaming response
    - validate_config(): Validate adapter configuration
    """

    async def invoke(self, input_data: str) -> 'RunOutput':
        """
        Execute the task and return the complete result.

        Parameters:
        - input_data (str): Input for the task

        Returns:
        RunOutput: Execution result with output and metadata
        """

    async def stream(self, input_data: str):
        """
        Execute the task with a streaming response.

        Parameters:
        - input_data (str): Input for the task

        Yields:
        str: Streaming output chunks
        """

    def validate_config(self) -> None:
        """
        Validate adapter configuration.

        Raises:
        ValueError: If configuration is invalid
        """

class AdapterConfig:
    """
    Configuration for adapters.

    Properties:
    - temperature (float | None): Sampling temperature (0-2)
    - max_tokens (int | None): Maximum tokens to generate
    - top_p (float | None): Nucleus sampling parameter (0-1)
    - top_k (int | None): Top-k sampling parameter
    - stop (list[str] | None): Stop sequences
    - seed (int | None): Random seed for reproducibility
    """
```
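
A custom adapter follows the same shape: async `invoke` and `stream` plus a `validate_config` that raises `ValueError`. The sketch below is illustrative only, using local stand-in classes (`EchoAdapter`, `EchoOutput`) rather than the Kiln base classes, and simply echoes its input:

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class EchoOutput:
    """Local stand-in for RunOutput, for illustration only."""
    output: str
    metadata: dict = field(default_factory=dict)

class EchoAdapter:
    """Toy adapter following the BaseAdapter shape: invoke, stream, validate_config."""

    def __init__(self, temperature: float = 0.7):
        self.temperature = temperature

    def validate_config(self) -> None:
        # Mirror the documented contract: raise ValueError on invalid config.
        if not 0 <= self.temperature <= 2:
            raise ValueError(f"temperature out of range: {self.temperature}")

    async def stream(self, input_data: str):
        # Yield the input one word at a time to simulate streaming chunks.
        for word in input_data.split():
            yield word + " "

    async def invoke(self, input_data: str) -> EchoOutput:
        # Collect the full streamed response into a single result.
        chunks = [chunk async for chunk in self.stream(input_data)]
        return EchoOutput(output="".join(chunks), metadata={"chunks": len(chunks)})

adapter = EchoAdapter()
adapter.validate_config()
result = asyncio.run(adapter.invoke("hello streaming world"))
print(result.output.strip())  # hello streaming world
```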

### Run Output

Container for model run results with metadata and usage tracking.

```python { .api }
class RunOutput:
    """
    Container for model run results.

    Properties:
    - output (str): Generated output text
    - metadata (dict): Additional metadata about the run
    - usage (Usage | None): Token usage information
    - raw_response (dict | None): Raw response from the model
    """
```
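
Because `usage` and `raw_response` may be `None`, downstream code should guard before reading them. A minimal sketch using local stand-in classes that mirror the documented properties (the `total_tokens` field on `Usage` is an assumption based on the usage examples below):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Usage:
    """Stand-in usage object; assumed to expose total_tokens."""
    total_tokens: int

@dataclass
class RunOutput:
    """Local mirror of the documented RunOutput properties."""
    output: str
    metadata: dict = field(default_factory=dict)
    usage: Optional[Usage] = None
    raw_response: Optional[dict] = None

def describe(result: RunOutput) -> str:
    # Guard optional fields instead of assuming they are populated.
    tokens = result.usage.total_tokens if result.usage else "unknown"
    return f"{result.output} (tokens: {tokens})"

print(describe(RunOutput(output="Hi")))                   # Hi (tokens: unknown)
print(describe(RunOutput(output="Hi", usage=Usage(42))))  # Hi (tokens: 42)
```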

### LiteLLM Adapter

LiteLLM adapter supporting 100+ models through a unified interface.

```python { .api }
class LiteLlmAdapter(BaseAdapter):
    """
    LiteLLM adapter implementation supporting 100+ models.

    Supports:
    - OpenAI, Anthropic, Google, Groq, Together AI, and many more
    - Streaming and non-streaming modes
    - Tool/function calling
    - Structured JSON output
    """

    async def invoke(self, input_data: str) -> 'RunOutput':
        """
        Execute the task via LiteLLM and return the complete result.

        Parameters:
        - input_data (str): Input for the task

        Returns:
        RunOutput: Execution result
        """

    async def stream(self, input_data: str):
        """
        Execute the task with a streaming response.

        Parameters:
        - input_data (str): Input for the task

        Yields:
        str: Streaming output chunks
        """

class LiteLlmConfig:
    """
    LiteLLM-specific configuration.

    Properties:
    - model (str): Model identifier
    - provider (str): Provider name
    - api_key (str | None): API key for authentication
    - base_url (str | None): Custom API base URL
    - temperature (float | None): Sampling temperature
    - max_tokens (int | None): Maximum tokens
    - top_p (float | None): Nucleus sampling
    - stop (list[str] | None): Stop sequences
    """
```

### Provider Configuration

Get provider-specific configuration for LiteLLM.

```python { .api }
def litellm_core_provider_config(provider_name: str):
    """
    Get LiteLLM provider configuration.

    Parameters:
    - provider_name (str): Provider identifier

    Returns:
    LiteLlmCoreConfig: Provider configuration with API settings
    """

class LiteLlmCoreConfig:
    """
    LiteLLM core configuration.

    Properties:
    - api_key (str | None): API key
    - api_base (str | None): Base URL for API
    - timeout (int | None): Request timeout in seconds
    """
```
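
Conceptually, this step resolves a provider name to the credentials and endpoint to use. The local sketch below illustrates that idea only; the provider table, environment variable names, and default timeout are hypothetical and not Kiln's actual lookup:

```python
import os
from dataclasses import dataclass
from typing import Optional

@dataclass
class CoreConfig:
    """Local mirror of the documented LiteLlmCoreConfig fields."""
    api_key: Optional[str] = None
    api_base: Optional[str] = None
    timeout: Optional[int] = None

# Hypothetical provider table: env var holding the key, plus an optional base URL.
PROVIDERS = {
    "openai": ("OPENAI_API_KEY", None),
    "groq": ("GROQ_API_KEY", "https://api.groq.com/openai/v1"),
}

def core_config_for(provider_name: str) -> CoreConfig:
    # Unknown providers fail fast rather than returning an empty config.
    if provider_name not in PROVIDERS:
        raise ValueError(f"unknown provider: {provider_name}")
    env_var, base = PROVIDERS[provider_name]
    return CoreConfig(api_key=os.environ.get(env_var), api_base=base, timeout=60)

cfg = core_config_for("groq")
print(cfg.api_base)  # https://api.groq.com/openai/v1
```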

### Provider Tools

Utilities for working with model providers and configurations.

```python { .api }
def get_config_value(key: str, default=None):
    """
    Get a configuration value from Kiln config.

    Parameters:
    - key (str): Configuration key
    - default: Default value if the key is not found

    Returns:
    Any: Configuration value
    """

def check_provider_warnings(provider: str, model: str) -> list:
    """
    Check for provider capability warnings.

    Parameters:
    - provider (str): Provider name
    - model (str): Model identifier

    Returns:
    list[ModelProviderWarning]: List of warnings
    """

def builtin_model_from(model_id: str):
    """
    Get built-in model information.

    Parameters:
    - model_id (str): Model identifier

    Returns:
    KilnModel | None: Model definition, or None if not found
    """

def core_provider(provider: str) -> str:
    """
    Get the core provider name from a provider identifier.

    Parameters:
    - provider (str): Provider identifier (may include a custom prefix)

    Returns:
    str: Core provider name
    """

def parse_custom_model_id(model_id: str) -> tuple:
    """
    Parse a custom model identifier.

    Parameters:
    - model_id (str): Custom model ID in the format "provider::model"

    Returns:
    tuple[str, str]: (provider, model) tuple
    """

def kiln_model_provider_from(provider_name: str):
    """
    Get a model provider instance.

    Parameters:
    - provider_name (str): Provider name

    Returns:
    KilnModelProvider: Provider instance
    """

def lite_llm_provider_model(provider: str, model: str) -> str:
    """
    Format a model identifier for LiteLLM.

    Parameters:
    - provider (str): Provider name
    - model (str): Model identifier

    Returns:
    str: LiteLLM-formatted model string
    """

def finetune_from_id(finetune_id: str, parent_task):
    """
    Load a fine-tune by ID.

    Parameters:
    - finetune_id (str): Fine-tune identifier
    - parent_task: Parent task instance

    Returns:
    Finetune: Fine-tune instance
    """

def finetune_provider_model(finetune) -> tuple:
    """
    Get the provider and model for a fine-tune.

    Parameters:
    - finetune: Finetune instance

    Returns:
    tuple[str, str]: (provider, model) tuple
    """

def get_model_and_provider(
    model_name: str | None,
    provider: str | None,
    finetune_id: str | None,
    task
) -> tuple:
    """
    Resolve the model and provider from parameters.

    Parameters:
    - model_name (str | None): Model name
    - provider (str | None): Provider name
    - finetune_id (str | None): Fine-tune ID
    - task: Task instance

    Returns:
    tuple[KilnModel, str]: (model, provider) tuple
    """

def provider_name_from_id(provider_id: str) -> str:
    """
    Extract the provider name from an identifier.

    Parameters:
    - provider_id (str): Provider identifier

    Returns:
    str: Provider name
    """

def lite_llm_core_config_for_provider(provider: str):
    """
    Get the LiteLLM core config for a provider.

    Parameters:
    - provider (str): Provider name

    Returns:
    LiteLlmCoreConfig: Core configuration
    """

class ModelProviderWarning:
    """
    Warning about provider capabilities.

    Properties:
    - message (str): Warning message
    - severity (str): Warning severity level
    """
```
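
The `"provider::model"` convention that `parse_custom_model_id` documents can be illustrated with a plain split. This is a local sketch of the documented format, not the library function itself:

```python
def parse_model_id(model_id: str) -> tuple[str, str]:
    """Split a 'provider::model' identifier into its two parts (illustrative)."""
    # partition() splits at the first '::', so the model part may itself
    # contain further '::' separators (e.g. nested vendor paths).
    provider, sep, model = model_id.partition("::")
    if not sep or not provider or not model:
        raise ValueError(f"expected 'provider::model', got: {model_id!r}")
    return provider, model

print(parse_model_id("openai::gpt-4o"))  # ('openai', 'gpt-4o')
```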

### Chat Formatting

Format messages for chat-based model APIs.

```python { .api }
class ChatFormatter:
    """
    Format messages for chat APIs.

    Methods:
    - format(): Format a single message
    - format_messages(): Format a message list
    """

    def format(self, content: str, role: str = "user") -> dict:
        """
        Format a single message.

        Parameters:
        - content (str): Message content
        - role (str): Message role (user, assistant, system)

        Returns:
        dict: Formatted message
        """

    def format_messages(self, messages: list) -> list:
        """
        Format a message list.

        Parameters:
        - messages (list): List of ChatMessage instances

        Returns:
        list[dict]: Formatted messages
        """

class ChatMessage:
    """
    Single chat message.

    Properties:
    - role (str): Message role (user, assistant, system)
    - content (str): Message content
    """

class ChatStrategy:
    """
    Chat formatting strategy.

    Values:
    - openai: OpenAI chat format
    - anthropic: Anthropic chat format
    - generic: Generic chat format
    """
    openai = "openai"
    anthropic = "anthropic"
    generic = "generic"

def get_chat_formatter(strategy: str) -> 'ChatFormatter':
    """
    Get a formatter instance for a strategy.

    Parameters:
    - strategy (str): Chat strategy name

    Returns:
    ChatFormatter: Formatter instance
    """
```
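
An OpenAI-style formatter produces role/content dictionaries from message objects. The minimal sketch below uses local stand-in classes (`Msg`, `OpenAIStyleFormatter`) to illustrate that output shape; it is not the Kiln implementation:

```python
from dataclasses import dataclass

@dataclass
class Msg:
    """Stand-in for ChatMessage: a role plus content."""
    role: str
    content: str

class OpenAIStyleFormatter:
    """Toy formatter emitting OpenAI-style {'role', 'content'} dicts."""

    VALID_ROLES = {"user", "assistant", "system"}

    def format(self, content: str, role: str = "user") -> dict:
        # Reject roles outside the documented set.
        if role not in self.VALID_ROLES:
            raise ValueError(f"unknown role: {role}")
        return {"role": role, "content": content}

    def format_messages(self, messages: list) -> list:
        return [self.format(m.content, m.role) for m in messages]

fmt = OpenAIStyleFormatter()
msgs = [Msg("system", "You are terse."), Msg("user", "Hi")]
print(fmt.format_messages(msgs))
# [{'role': 'system', 'content': 'You are terse.'}, {'role': 'user', 'content': 'Hi'}]
```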

## Usage Examples

### Basic Task Execution

```python
from kiln_ai.datamodel import Task
from kiln_ai.adapters import adapter_for_task

# Create or load a task
task = Task(
    name="summarizer",
    instruction="Summarize the following text concisely."
)

# Create adapter with a specific model
adapter = adapter_for_task(
    task,
    model_name="gpt_4o",
    provider="openai",
    config={
        "temperature": 0.7,
        "max_tokens": 500
    }
)

# Execute task
input_text = "Long article text here..."
result = await adapter.invoke(input_text)
print(f"Summary: {result.output}")
if result.usage:  # usage may be None depending on the provider
    print(f"Tokens used: {result.usage.total_tokens}")
```

### Streaming Execution

```python
from kiln_ai.datamodel import Task
from kiln_ai.adapters import adapter_for_task

task = Task(
    name="story_generator",
    instruction="Write a creative story about the given topic."
)

adapter = adapter_for_task(task, model_name="claude_3_5_sonnet", provider="anthropic")

# Stream response chunks as they arrive
async for chunk in adapter.stream("space exploration"):
    print(chunk, end="", flush=True)
```

### Multiple Providers

```python
from kiln_ai.datamodel import Task
from kiln_ai.adapters import adapter_for_task

task = Task.load_from_file("path/to/task.kiln")

# Test the same task with different models
models = [
    ("gpt_4o", "openai"),
    ("claude_3_5_sonnet", "anthropic"),
    ("llama_3_1_8b", "groq")
]

input_data = "Test input"

for model_name, provider in models:
    adapter = adapter_for_task(task, model_name=model_name, provider=provider)
    result = await adapter.invoke(input_data)
    print(f"{model_name}: {result.output}")
```

### Custom Configuration

```python
from kiln_ai.datamodel import Task
from kiln_ai.adapters import adapter_for_task

task = Task.load_from_file("path/to/task.kiln")

# Advanced configuration
config = {
    "temperature": 0.9,
    "max_tokens": 2000,
    "top_p": 0.95,
    "stop": ["END", "STOP"],
    "seed": 42  # For reproducibility
}

adapter = adapter_for_task(
    task,
    model_name="gpt_4o",
    provider="openai",
    config=config
)

result = await adapter.invoke("Generate creative content")
```

### Structured Output

```python
from kiln_ai.datamodel import Task
from kiln_ai.adapters import adapter_for_task
import json

# Task with a JSON output schema
task = Task(
    name="data_extractor",
    instruction="Extract structured information from the text.",
    output_json_schema=json.dumps({
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "age": {"type": "integer"},
            "email": {"type": "string"}
        },
        "required": ["name", "email"]
    })
)

adapter = adapter_for_task(task, model_name="gpt_4o", provider="openai")

result = await adapter.invoke("John Doe is 30 years old. Email: john@example.com")
data = json.loads(result.output)
print(f"Name: {data['name']}, Email: {data['email']}")
```

### Error Handling

```python
from kiln_ai.datamodel import Task
from kiln_ai.adapters import adapter_for_task
from kiln_ai.adapters.provider_tools import check_provider_warnings

task = Task.load_from_file("path/to/task.kiln")

# Check for warnings before running
warnings = check_provider_warnings("openai", "gpt_4o")
for warning in warnings:
    print(f"Warning: {warning.message}")

# Create adapter with error handling
try:
    adapter = adapter_for_task(task, model_name="gpt_4o", provider="openai")
    adapter.validate_config()
    result = await adapter.invoke("input text")
except ValueError as e:
    print(f"Configuration error: {e}")
except Exception as e:
    print(f"Execution error: {e}")
```

### Fine-tuned Models

```python
from kiln_ai.datamodel import Task
from kiln_ai.adapters import adapter_for_task

# Load the task the fine-tune belongs to
task = Task.load_from_file("path/to/task.kiln")

# Use a fine-tuned model instead of a named base model
adapter = adapter_for_task(
    task,
    model_name=None,
    provider=None,
    config={"finetune_id": "ft-abc123"}
)

result = await adapter.invoke("input for fine-tuned model")
```