tessl/pypi-kiln-ai

Kiln AI is a comprehensive platform for building, evaluating, and deploying AI systems with dataset management, model fine-tuning, RAG, and evaluation capabilities.

docs/prompts.md

Prompt Builders

Kiln provides multiple prompt-building strategies, including simple, few-shot, multi-shot, chain-of-thought, and saved prompts. Each prompt builder constructs a prompt appropriate to its task and learning approach.

Capabilities

Prompt Builder Creation

Get prompt builder instances from identifiers.

from kiln_ai.adapters.prompt_builders import prompt_builder_from_id, chain_of_thought_prompt

def prompt_builder_from_id(prompt_id: str, task):
    """
    Get prompt builder instance from identifier.

    Parameters:
    - prompt_id (str): Prompt builder type identifier (e.g., "simple", "few_shot", "cot")
    - task: Task instance for context

    Returns:
    BasePromptBuilder: Prompt builder instance
    """

def chain_of_thought_prompt(task) -> str:
    """
    Generate chain-of-thought prompt text for a task.

    Parameters:
    - task: Task instance

    Returns:
    str: Generated CoT prompt text
    """

Base Prompt Builder

Abstract base class for all prompt builders.

class BasePromptBuilder:
    """
    Abstract base class for prompt builders.

    Methods:
    - build_prompt(): Construct the complete prompt
    - build_system_message(): Build system message component
    """

    def build_prompt(self, task_input: str) -> str:
        """
        Construct complete prompt for task input.

        Parameters:
        - task_input (str): Input data for the task

        Returns:
        str: Constructed prompt
        """

    def build_system_message(self) -> str:
        """
        Build system message component.

        Returns:
        str: System message text
        """

Simple Prompt Builder

Basic prompt construction with task instructions.

class SimplePromptBuilder(BasePromptBuilder):
    """
    Simple prompt construction with task instructions and input.

    Builds prompts in format:
    [Task instruction]

    Input: [task input]
    """

    def __init__(self, task):
        """
        Initialize simple prompt builder.

        Parameters:
        - task: Task instance
        """

    def build_prompt(self, task_input: str) -> str:
        """
        Build simple prompt.

        Parameters:
        - task_input (str): Input data

        Returns:
        str: Simple prompt text
        """

Short Prompt Builder

Concise prompt construction for efficient context usage.

class ShortPromptBuilder(BasePromptBuilder):
    """
    Concise prompt construction minimizing token usage.

    Optimized for:
    - Limited context windows
    - Cost reduction
    - Fast inference
    """

    def __init__(self, task):
        """
        Initialize short prompt builder.

        Parameters:
        - task: Task instance
        """

Few-Shot Prompt Builder

Few-shot learning with example demonstrations.

class FewShotPromptBuilder(BasePromptBuilder):
    """
    Few-shot learning prompts with example demonstrations.

    Includes 3-5 examples from task runs to demonstrate desired behavior.
    Examples are selected from high-quality rated task runs.
    """

    def __init__(self, task):
        """
        Initialize few-shot prompt builder.

        Parameters:
        - task: Task instance with existing runs for examples
        """

    def build_prompt(self, task_input: str) -> str:
        """
        Build few-shot prompt with examples.

        Parameters:
        - task_input (str): Input data

        Returns:
        str: Few-shot prompt with examples
        """

Multi-Shot Prompt Builder

Multiple example demonstrations for complex tasks.

class MultiShotPromptBuilder(BasePromptBuilder):
    """
    Multi-shot prompts with many example demonstrations.

    Includes 5+ examples for complex tasks requiring extensive demonstration.
    Uses more context but provides better guidance for difficult tasks.
    """

    def __init__(self, task):
        """
        Initialize multi-shot prompt builder.

        Parameters:
        - task: Task instance with many runs for examples
        """

Chain-of-Thought Prompt Builder

Chain-of-thought reasoning prompts.

class SimpleChainOfThoughtPromptBuilder(BasePromptBuilder):
    """
    Chain-of-thought reasoning prompts encouraging step-by-step thinking.

    Instructs model to:
    1. Break down the problem
    2. Think through each step
    3. Provide reasoning before final answer
    """

    def __init__(self, task):
        """
        Initialize CoT prompt builder.

        Parameters:
        - task: Task instance
        """

    def build_prompt(self, task_input: str) -> str:
        """
        Build chain-of-thought prompt.

        Parameters:
        - task_input (str): Input data

        Returns:
        str: CoT prompt with reasoning instructions
        """

Few-Shot Chain-of-Thought

Combines few-shot learning with chain-of-thought reasoning.

class FewShotChainOfThoughtPromptBuilder(BasePromptBuilder):
    """
    Few-shot learning with chain-of-thought reasoning.

    Provides examples that include:
    - Input
    - Step-by-step reasoning
    - Final output

    Effective for complex reasoning tasks.
    """

    def __init__(self, task):
        """
        Initialize few-shot CoT prompt builder.

        Parameters:
        - task: Task instance with example runs
        """

Multi-Shot Chain-of-Thought

Multiple examples with chain-of-thought reasoning.

class MultiShotChainOfThoughtPromptBuilder(BasePromptBuilder):
    """
    Multi-shot prompts with chain-of-thought reasoning.

    Many examples with detailed reasoning steps.
    Best for very complex tasks requiring extensive demonstration.
    """

    def __init__(self, task):
        """
        Initialize multi-shot CoT prompt builder.

        Parameters:
        - task: Task instance with many example runs
        """

Saved Prompt Builder

Use saved/custom prompts from task configuration.

class SavedPromptBuilder(BasePromptBuilder):
    """
    Use saved/custom prompts from task.

    Loads prompt content from saved prompt configuration,
    allowing fully customized prompt templates.
    """

    def __init__(self, task, prompt_id: str):
        """
        Initialize saved prompt builder.

        Parameters:
        - task: Task instance
        - prompt_id (str): ID of saved prompt to use
        """

    def build_prompt(self, task_input: str) -> str:
        """
        Build prompt from saved template.

        Parameters:
        - task_input (str): Input data

        Returns:
        str: Prompt from saved template
        """

Repairs Prompt Builder

Prompt builder for repairing invalid task outputs.

class RepairsPromptBuilder(BasePromptBuilder):
    """
    Repair-focused prompts for fixing invalid outputs.

    Used to correct outputs that:
    - Failed schema validation
    - Don't meet requirements
    - Need formatting fixes
    """

    def __init__(self, task, original_input: str, invalid_output: str, error: str):
        """
        Initialize repairs prompt builder.

        Parameters:
        - task: Task instance
        - original_input (str): Original task input
        - invalid_output (str): Invalid output to repair
        - error (str): Error message describing the issue
        """

Task Run Config Prompt Builder

Task run-specific prompt configuration.

class TaskRunConfigPromptBuilder(BasePromptBuilder):
    """
    Task run-specific prompt builder.

    Uses configuration from specific task run for custom prompt behavior.
    """

    def __init__(self, task, task_run_config: dict):
        """
        Initialize task run config prompt builder.

        Parameters:
        - task: Task instance
        - task_run_config (dict): Configuration for this specific run
        """

Fine-Tune Prompt Builder

Prompts formatted for fine-tuning datasets.

class FineTunePromptBuilder(BasePromptBuilder):
    """
    Fine-tune formatted prompts.

    Formats prompts specifically for fine-tuning training data,
    ensuring consistency with fine-tuned model expectations.
    """

    def __init__(self, task):
        """
        Initialize fine-tune prompt builder.

        Parameters:
        - task: Task instance
        """

Usage Examples

Using Different Prompt Strategies

from kiln_ai.datamodel import Task
from kiln_ai.adapters.prompt_builders import prompt_builder_from_id
from kiln_ai.adapters import adapter_for_task

# Create task
task = Task(
    name="question_answerer",
    instruction="Answer the question accurately and concisely."
)

# Try different prompt strategies
strategies = ["simple", "few_shot", "cot", "few_shot_cot"]

for strategy in strategies:
    builder = prompt_builder_from_id(strategy, task)
    prompt = builder.build_prompt("What is machine learning?")
    print(f"\n{strategy.upper()} PROMPT:")
    print(prompt)

Simple Prompt

from kiln_ai.adapters.prompt_builders import SimplePromptBuilder
from kiln_ai.datamodel import Task

task = Task(
    name="translator",
    instruction="Translate the text to French."
)

builder = SimplePromptBuilder(task)
prompt = builder.build_prompt("Hello, how are you?")
print(prompt)
# Output:
# Translate the text to French.
#
# Input: Hello, how are you?

Few-Shot Learning

from kiln_ai.datamodel import Task, TaskRun, TaskOutput
from kiln_ai.adapters.prompt_builders import FewShotPromptBuilder

# Create task with example runs
task = Task(
    name="sentiment_classifier",
    instruction="Classify the sentiment as positive, negative, or neutral."
)

# Add example runs
examples = [
    ("I love this product!", "positive"),
    ("This is terrible.", "negative"),
    ("It's okay.", "neutral")
]

for input_text, output_text in examples:
    run = TaskRun(
        parent=task,
        input=input_text,
        output=TaskOutput(output=output_text)
    )
    run.save_to_file()

# Build few-shot prompt
builder = FewShotPromptBuilder(task)
prompt = builder.build_prompt("This is amazing!")
print(prompt)
# Includes examples from the task runs

Chain-of-Thought Reasoning

from kiln_ai.adapters.prompt_builders import (
    SimpleChainOfThoughtPromptBuilder,
    chain_of_thought_prompt
)
from kiln_ai.datamodel import Task

task = Task(
    name="math_solver",
    instruction="Solve the math problem step by step."
)

# Method 1: Use builder
builder = SimpleChainOfThoughtPromptBuilder(task)
prompt = builder.build_prompt("What is 25% of 80?")
print(prompt)

# Method 2: Use helper function
cot_text = chain_of_thought_prompt(task)
print(f"\nCoT instructions:\n{cot_text}")

Saved Custom Prompts

from kiln_ai.datamodel import Task, Prompt
from kiln_ai.adapters.prompt_builders import SavedPromptBuilder

# Create task
task = Task(
    name="creative_writer",
    instruction="Write creative content."
)
task.save_to_file()

# Create saved prompt
saved_prompt = Prompt(
    parent=task,
    name="story_prompt",
    content="""You are a creative storyteller.

Given a topic, write an engaging short story.

Topic: {input}

Story:"""
)
saved_prompt.save_to_file()

# Use saved prompt
builder = SavedPromptBuilder(task, saved_prompt.id)
prompt = builder.build_prompt("space exploration")
print(prompt)

Combining with Adapters

from kiln_ai.datamodel import Task
from kiln_ai.adapters import adapter_for_task
from kiln_ai.adapters.prompt_builders import prompt_builder_from_id

task = Task(
    name="code_explainer",
    instruction="Explain what the code does."
)

# Use specific prompt strategy with adapter
adapter = adapter_for_task(
    task,
    model_name="gpt_4o",
    provider="openai"
)

# The adapter uses its default prompt strategy, but you can also
# build prompts manually with a specific builder
builder = prompt_builder_from_id("cot", task)
custom_prompt = builder.build_prompt("def fibonacci(n): ...")

# Use with adapter (await requires an async context, e.g. asyncio.run)
result = await adapter.invoke("def fibonacci(n): ...")

Repair Prompts

from kiln_ai.adapters.prompt_builders import RepairsPromptBuilder
from kiln_ai.datamodel import Task
import json

task = Task(
    name="json_generator",
    instruction="Generate valid JSON.",
    output_json_schema=json.dumps({
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "age": {"type": "integer"}
        }
    })
)

# Original attempt produced invalid output
original_input = "John, 30 years old"
invalid_output = '{"name": "John", "age": "thirty"}'  # age should be int
error = "Field 'age' must be integer, got string"

# Build repair prompt
builder = RepairsPromptBuilder(task, original_input, invalid_output, error)
repair_prompt = builder.build_prompt(original_input)
print(repair_prompt)
# Includes original input, invalid output, and error details

Multi-Shot for Complex Tasks

from kiln_ai.datamodel import Task, TaskRun, TaskOutput
from kiln_ai.adapters.prompt_builders import MultiShotPromptBuilder

# Complex task requiring many examples
task = Task(
    name="code_reviewer",
    instruction="Review code and provide detailed feedback."
)

# Add many example runs (10 in this example)
for i in range(10):
    run = TaskRun(
        parent=task,
        input=f"# Example code {i}\n...",
        output=TaskOutput(output=f"Review {i}: ...")
    )
    run.save_to_file()

# Build multi-shot prompt with many examples
builder = MultiShotPromptBuilder(task)
prompt = builder.build_prompt("def buggy_function(): ...")
print(f"Prompt includes {len(task.runs())} examples")

Comparing Strategies

from kiln_ai.datamodel import Task
from kiln_ai.adapters import adapter_for_task
from kiln_ai.adapters.prompt_builders import prompt_builder_from_id

async def compare_strategies(task, input_data):
    strategies = ["simple", "few_shot", "cot", "few_shot_cot"]
    results = {}

    for strategy in strategies:
        # Build the prompt for this strategy (shown for illustration; how the
        # strategy is wired into the adapter depends on its configuration)
        builder = prompt_builder_from_id(strategy, task)

        # Create adapter and run the task
        adapter = adapter_for_task(task, model_name="gpt_4o", provider="openai")
        result = await adapter.invoke(input_data)
        results[strategy] = result.output

    return results

# Compare outputs (run inside an async context, e.g. asyncio.run)
task = Task.load_from_file("path/to/task.kiln")
comparison = await compare_strategies(task, "test input")

for strategy, output in comparison.items():
    print(f"\n{strategy}:")
    print(output)

Install with Tessl CLI

npx tessl i tessl/pypi-kiln-ai
