# Prompt Builders

Prompt builders construct appropriate prompts for tasks using different learning approaches. Multiple strategies are available, including simple, few-shot, multi-shot, chain-of-thought, and saved prompts.

## Capabilities

### Prompt Builder Creation

Get prompt builder instances from identifiers.

```python { .api }
from kiln_ai.adapters.prompt_builders import prompt_builder_from_id, chain_of_thought_prompt

def prompt_builder_from_id(prompt_id: str, task):
    """
    Get prompt builder instance from identifier.

    Parameters:
    - prompt_id (str): Prompt builder type identifier (e.g., "simple", "few_shot", "cot")
    - task: Task instance for context

    Returns:
    BasePromptBuilder: Prompt builder instance
    """

def chain_of_thought_prompt(task) -> str:
    """
    Generate chain-of-thought prompt text for a task.

    Parameters:
    - task: Task instance

    Returns:
    str: Generated CoT prompt text
    """
```
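Conceptually, resolving an identifier to a builder is a registry lookup. The sketch below is a hypothetical illustration of that dispatch pattern, not kiln_ai's internal code; `DemoBuilder` and `BUILDER_REGISTRY` are made-up names.

```python
# Hypothetical sketch of id-to-builder dispatch; kiln_ai's internal
# registry and builder classes may differ.
class DemoBuilder:
    """Stand-in for a concrete prompt builder class."""
    def __init__(self, task):
        self.task = task

# Illustrative mapping from identifier to builder class
BUILDER_REGISTRY = {
    "simple": DemoBuilder,
    "cot": DemoBuilder,
}

def builder_from_id(prompt_id: str, task):
    # Look up the class and instantiate it with the task
    try:
        return BUILDER_REGISTRY[prompt_id](task)
    except KeyError:
        raise ValueError(f"Unknown prompt builder id: {prompt_id}")

builder = builder_from_id("simple", task="demo_task")
print(type(builder).__name__)
```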

### Base Prompt Builder

Abstract base class for all prompt builders.

```python { .api }
class BasePromptBuilder:
    """
    Abstract base class for prompt builders.

    Methods:
    - build_prompt(): Construct the complete prompt
    - build_system_message(): Build system message component
    """

    def build_prompt(self, task_input: str) -> str:
        """
        Construct complete prompt for task input.

        Parameters:
        - task_input (str): Input data for the task

        Returns:
        str: Constructed prompt
        """

    def build_system_message(self) -> str:
        """
        Build system message component.

        Returns:
        str: System message text
        """
```

### Simple Prompt Builder

Basic prompt construction with task instructions.

```python { .api }
class SimplePromptBuilder(BasePromptBuilder):
    """
    Simple prompt construction with task instructions and input.

    Builds prompts in format:
    [Task instruction]

    Input: [task input]
    """

    def __init__(self, task):
        """
        Initialize simple prompt builder.

        Parameters:
        - task: Task instance
        """

    def build_prompt(self, task_input: str) -> str:
        """
        Build simple prompt.

        Parameters:
        - task_input (str): Input data

        Returns:
        str: Simple prompt text
        """
```
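The documented format can be sketched in a few lines of plain Python; `build_simple_prompt` below is a hypothetical stand-in to illustrate the output shape, not the library method.

```python
# Minimal sketch of the documented simple-prompt format:
# "[Task instruction]", a blank line, then "Input: [task input]".
def build_simple_prompt(instruction: str, task_input: str) -> str:
    return f"{instruction}\n\nInput: {task_input}"

print(build_simple_prompt("Translate the text to French.", "Hello, how are you?"))
```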

### Short Prompt Builder

Concise prompt construction for efficient context usage.

```python { .api }
class ShortPromptBuilder(BasePromptBuilder):
    """
    Concise prompt construction minimizing token usage.

    Optimized for:
    - Limited context windows
    - Cost reduction
    - Fast inference
    """

    def __init__(self, task):
        """
        Initialize short prompt builder.

        Parameters:
        - task: Task instance
        """
```

### Few-Shot Prompt Builder

Few-shot learning with example demonstrations.

```python { .api }
class FewShotPromptBuilder(BasePromptBuilder):
    """
    Few-shot learning prompts with example demonstrations.

    Includes 3-5 examples from task runs to demonstrate desired behavior.
    Examples are selected from high-quality rated task runs.
    """

    def __init__(self, task):
        """
        Initialize few-shot prompt builder.

        Parameters:
        - task: Task instance with existing runs for examples
        """

    def build_prompt(self, task_input: str) -> str:
        """
        Build few-shot prompt with examples.

        Parameters:
        - task_input (str): Input data

        Returns:
        str: Few-shot prompt with examples
        """
```
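The selection logic described above (keep the highest-rated runs, capped at a handful, and render them ahead of the new input) can be sketched as follows. This is a hypothetical illustration: the run fields used here (`input`, `output`, `rating`) and the rendering format are assumptions, not kiln_ai's data model.

```python
# Hypothetical sketch of few-shot assembly: keep the top-rated runs
# (capped at five) and render them before the new input.
def few_shot_prompt(instruction, runs, task_input, max_examples=5):
    best = sorted(runs, key=lambda r: r["rating"], reverse=True)[:max_examples]
    examples = "\n\n".join(
        f"Input: {r['input']}\nOutput: {r['output']}" for r in best
    )
    return f"{instruction}\n\n{examples}\n\nInput: {task_input}"

runs = [
    {"input": "I love this product!", "output": "positive", "rating": 5},
    {"input": "This is terrible.", "output": "negative", "rating": 4},
    {"input": "It's okay.", "output": "neutral", "rating": 3},
]
print(few_shot_prompt("Classify the sentiment.", runs, "This is amazing!"))
```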

### Multi-Shot Prompt Builder

Multiple example demonstrations for complex tasks.

```python { .api }
class MultiShotPromptBuilder(BasePromptBuilder):
    """
    Multi-shot prompts with many example demonstrations.

    Includes 5+ examples for complex tasks requiring extensive demonstration.
    Uses more context but provides better guidance for difficult tasks.
    """

    def __init__(self, task):
        """
        Initialize multi-shot prompt builder.

        Parameters:
        - task: Task instance with many runs for examples
        """
```

### Chain-of-Thought Prompt Builder

Chain-of-thought reasoning prompts.

```python { .api }
class SimpleChainOfThoughtPromptBuilder(BasePromptBuilder):
    """
    Chain-of-thought reasoning prompts encouraging step-by-step thinking.

    Instructs model to:
    1. Break down the problem
    2. Think through each step
    3. Provide reasoning before final answer
    """

    def __init__(self, task):
        """
        Initialize CoT prompt builder.

        Parameters:
        - task: Task instance
        """

    def build_prompt(self, task_input: str) -> str:
        """
        Build chain-of-thought prompt.

        Parameters:
        - task_input (str): Input data

        Returns:
        str: CoT prompt with reasoning instructions
        """
```
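The three reasoning steps listed above amount to appending a fixed instruction block to the task prompt. The sketch below illustrates the idea; the exact wording kiln_ai uses may differ, and `cot_prompt` is a hypothetical helper.

```python
# Sketch of the three CoT steps appended to the task instruction.
COT_INSTRUCTIONS = (
    "Think before answering:\n"
    "1. Break down the problem.\n"
    "2. Think through each step.\n"
    "3. Provide your reasoning before the final answer."
)

def cot_prompt(instruction: str, task_input: str) -> str:
    return f"{instruction}\n\n{COT_INSTRUCTIONS}\n\nInput: {task_input}"

print(cot_prompt("Solve the math problem step by step.", "What is 25% of 80?"))
```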

### Few-Shot Chain-of-Thought

Combines few-shot learning with chain-of-thought reasoning.

```python { .api }
class FewShotChainOfThoughtPromptBuilder(BasePromptBuilder):
    """
    Few-shot learning with chain-of-thought reasoning.

    Provides examples that include:
    - Input
    - Step-by-step reasoning
    - Final output

    Effective for complex reasoning tasks.
    """

    def __init__(self, task):
        """
        Initialize few-shot CoT prompt builder.

        Parameters:
        - task: Task instance with example runs
        """
```

### Multi-Shot Chain-of-Thought

Multiple examples with chain-of-thought reasoning.

```python { .api }
class MultiShotChainOfThoughtPromptBuilder(BasePromptBuilder):
    """
    Multi-shot prompts with chain-of-thought reasoning.

    Many examples with detailed reasoning steps.
    Best for very complex tasks requiring extensive demonstration.
    """

    def __init__(self, task):
        """
        Initialize multi-shot CoT prompt builder.

        Parameters:
        - task: Task instance with many example runs
        """
```

### Saved Prompt Builder

Use saved/custom prompts from task configuration.

```python { .api }
class SavedPromptBuilder(BasePromptBuilder):
    """
    Use saved/custom prompts from task.

    Loads prompt content from saved prompt configuration,
    allowing fully customized prompt templates.
    """

    def __init__(self, task, prompt_id: str):
        """
        Initialize saved prompt builder.

        Parameters:
        - task: Task instance
        - prompt_id (str): ID of saved prompt to use
        """

    def build_prompt(self, task_input: str) -> str:
        """
        Build prompt from saved template.

        Parameters:
        - task_input (str): Input data

        Returns:
        str: Prompt from saved template
        """
```
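At its core, a saved template is a string with a placeholder that gets filled with the task input. The sketch below assumes an `{input}` placeholder as used in the saved-prompt usage example in this document; the library's placeholder convention may differ.

```python
# Sketch of template substitution with an assumed "{input}" placeholder.
template = "You are a creative storyteller.\n\nTopic: {input}\n\nStory:"
prompt = template.format(input="space exploration")
print(prompt)
```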

### Repairs Prompt Builder

Prompt builder for repairing invalid task outputs.

```python { .api }
class RepairsPromptBuilder(BasePromptBuilder):
    """
    Repair-focused prompts for fixing invalid outputs.

    Used to correct outputs that:
    - Failed schema validation
    - Don't meet requirements
    - Need formatting fixes
    """

    def __init__(self, task, original_input: str, invalid_output: str, error: str):
        """
        Initialize repairs prompt builder.

        Parameters:
        - task: Task instance
        - original_input (str): Original task input
        - invalid_output (str): Invalid output to repair
        - error (str): Error message describing the issue
        """
```
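A repair prompt typically restates the original input, the invalid output, and the validation error so the model can correct itself. The sketch below is a hypothetical composition of the constructor arguments, not kiln_ai's actual template.

```python
# Hypothetical sketch of combining the constructor arguments into a
# repair prompt; the real template wording may differ.
def repair_prompt(instruction, original_input, invalid_output, error):
    return (
        f"{instruction}\n\n"
        f"Input: {original_input}\n"
        f"Previous (invalid) output: {invalid_output}\n"
        f"Validation error: {error}\n\n"
        "Return a corrected output that fixes the error."
    )

print(repair_prompt(
    "Generate valid JSON.",
    "John, 30 years old",
    '{"name": "John", "age": "thirty"}',
    "Field 'age' must be integer, got string",
))
```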

### Task Run Config Prompt Builder

Task run-specific prompt configuration.

```python { .api }
class TaskRunConfigPromptBuilder(BasePromptBuilder):
    """
    Task run-specific prompt builder.

    Uses configuration from specific task run for custom prompt behavior.
    """

    def __init__(self, task, task_run_config: dict):
        """
        Initialize task run config prompt builder.

        Parameters:
        - task: Task instance
        - task_run_config (dict): Configuration for this specific run
        """
```

### Fine-Tune Prompt Builder

Prompts formatted for fine-tuning datasets.

```python { .api }
class FineTunePromptBuilder(BasePromptBuilder):
    """
    Fine-tune formatted prompts.

    Formats prompts specifically for fine-tuning training data,
    ensuring consistency with fine-tuned model expectations.
    """

    def __init__(self, task):
        """
        Initialize fine-tune prompt builder.

        Parameters:
        - task: Task instance
        """
```
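For context, a single fine-tuning training record is often expressed in an OpenAI-style chat format, one JSON object per line (JSONL). This is a common convention, shown here as an assumption; kiln_ai's exact export format may differ.

```python
# Sketch of one chat-format training record (assumed convention, not
# necessarily kiln_ai's export format).
import json

record = {
    "messages": [
        {"role": "system", "content": "Answer the question accurately and concisely."},
        {"role": "user", "content": "What is machine learning?"},
        {"role": "assistant", "content": "A field of AI where models learn patterns from data."},
    ]
}

# One JSON object per line is the usual on-disk (JSONL) layout
print(json.dumps(record))
```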

## Usage Examples

### Using Different Prompt Strategies

```python
from kiln_ai.datamodel import Task
from kiln_ai.adapters.prompt_builders import prompt_builder_from_id

# Create task
task = Task(
    name="question_answerer",
    instruction="Answer the question accurately and concisely."
)

# Try different prompt strategies
strategies = ["simple", "few_shot", "cot", "few_shot_cot"]

for strategy in strategies:
    builder = prompt_builder_from_id(strategy, task)
    prompt = builder.build_prompt("What is machine learning?")
    print(f"\n{strategy.upper()} PROMPT:")
    print(prompt)
```

### Simple Prompt

```python
from kiln_ai.adapters.prompt_builders import SimplePromptBuilder
from kiln_ai.datamodel import Task

task = Task(
    name="translator",
    instruction="Translate the text to French."
)

builder = SimplePromptBuilder(task)
prompt = builder.build_prompt("Hello, how are you?")
print(prompt)
# Output:
# Translate the text to French.
#
# Input: Hello, how are you?
```

### Few-Shot Learning

```python
from kiln_ai.datamodel import Task, TaskRun, TaskOutput
from kiln_ai.adapters.prompt_builders import FewShotPromptBuilder

# Create task with example runs
task = Task(
    name="sentiment_classifier",
    instruction="Classify the sentiment as positive, negative, or neutral."
)

# Add example runs
examples = [
    ("I love this product!", "positive"),
    ("This is terrible.", "negative"),
    ("It's okay.", "neutral")
]

for input_text, output_text in examples:
    run = TaskRun(
        parent=task,
        input=input_text,
        output=TaskOutput(output=output_text)
    )
    run.save_to_file()

# Build few-shot prompt
builder = FewShotPromptBuilder(task)
prompt = builder.build_prompt("This is amazing!")
print(prompt)
# Includes examples from the task runs
```

### Chain-of-Thought Reasoning

```python
from kiln_ai.adapters.prompt_builders import (
    SimpleChainOfThoughtPromptBuilder,
    chain_of_thought_prompt
)
from kiln_ai.datamodel import Task

task = Task(
    name="math_solver",
    instruction="Solve the math problem step by step."
)

# Method 1: Use builder
builder = SimpleChainOfThoughtPromptBuilder(task)
prompt = builder.build_prompt("What is 25% of 80?")
print(prompt)

# Method 2: Use helper function
cot_text = chain_of_thought_prompt(task)
print(f"\nCoT instructions:\n{cot_text}")
```

### Saved Custom Prompts

```python
from kiln_ai.datamodel import Task, Prompt
from kiln_ai.adapters.prompt_builders import SavedPromptBuilder

# Create task
task = Task(
    name="creative_writer",
    instruction="Write creative content."
)
task.save_to_file()

# Create saved prompt
saved_prompt = Prompt(
    parent=task,
    name="story_prompt",
    content="""You are a creative storyteller.

Given a topic, write an engaging short story.

Topic: {input}

Story:"""
)
saved_prompt.save_to_file()

# Use saved prompt
builder = SavedPromptBuilder(task, saved_prompt.id)
prompt = builder.build_prompt("space exploration")
print(prompt)
```

### Combining with Adapters

```python
from kiln_ai.datamodel import Task
from kiln_ai.adapters import adapter_for_task
from kiln_ai.adapters.prompt_builders import prompt_builder_from_id

task = Task(
    name="code_explainer",
    instruction="Explain what the code does."
)

# Use specific prompt strategy with adapter
adapter = adapter_for_task(
    task,
    model_name="gpt_4o",
    provider="openai"
)

# The adapter will use the specified prompt strategy by default,
# but you can also build prompts manually
builder = prompt_builder_from_id("cot", task)
custom_prompt = builder.build_prompt("def fibonacci(n): ...")

# Use with adapter
result = await adapter.invoke("def fibonacci(n): ...")
```

### Repair Prompts

```python
from kiln_ai.adapters.prompt_builders import RepairsPromptBuilder
from kiln_ai.datamodel import Task
import json

task = Task(
    name="json_generator",
    instruction="Generate valid JSON.",
    output_json_schema=json.dumps({
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "age": {"type": "integer"}
        }
    })
)

# Original attempt produced invalid output
original_input = "John, 30 years old"
invalid_output = '{"name": "John", "age": "thirty"}'  # age should be int
error = "Field 'age' must be integer, got string"

# Build repair prompt
builder = RepairsPromptBuilder(task, original_input, invalid_output, error)
repair_prompt = builder.build_prompt(original_input)
print(repair_prompt)
# Includes original input, invalid output, and error details
```

### Multi-Shot for Complex Tasks

```python
from kiln_ai.datamodel import Task, TaskRun, TaskOutput
from kiln_ai.adapters.prompt_builders import MultiShotPromptBuilder

# Complex task requiring many examples
task = Task(
    name="code_reviewer",
    instruction="Review code and provide detailed feedback."
)

# Add many example runs (8+)
for i in range(10):
    run = TaskRun(
        parent=task,
        input=f"# Example code {i}\n...",
        output=TaskOutput(output=f"Review {i}: ...")
    )
    run.save_to_file()

# Build multi-shot prompt with many examples
builder = MultiShotPromptBuilder(task)
prompt = builder.build_prompt("def buggy_function(): ...")
print(f"Prompt includes {len(task.runs())} examples")
```

### Comparing Strategies

```python
from kiln_ai.datamodel import Task
from kiln_ai.adapters import adapter_for_task
from kiln_ai.adapters.prompt_builders import prompt_builder_from_id

async def compare_strategies(task, input_data):
    strategies = ["simple", "few_shot", "cot", "few_shot_cot"]
    results = {}

    for strategy in strategies:
        # Use different prompt strategy
        builder = prompt_builder_from_id(strategy, task)

        # Create adapter (would use the strategy internally)
        adapter = adapter_for_task(task, model_name="gpt_4o", provider="openai")

        # Run task
        result = await adapter.invoke(input_data)
        results[strategy] = result.output

    return results

# Compare outputs
task = Task.load_from_file("path/to/task.kiln")
comparison = await compare_strategies(task, "test input")

for strategy, output in comparison.items():
    print(f"\n{strategy}:")
    print(output)
```
