tessl/npm-openai

The official TypeScript library for the OpenAI API

docs/fine-tuning.md

Fine-Tuning

Create and manage fine-tuning jobs to adapt OpenAI models to your specific use case with your own training data. Fine-tuning supports supervised learning, Direct Preference Optimization (DPO), and reinforcement learning methods.

Capabilities

Fine-Tuning Job Management

Complete lifecycle management for fine-tuning jobs, from creation through monitoring to completion. Control job execution with pause, resume, and cancel operations.

function create(params: FineTuningJobCreateParams): Promise<FineTuningJob>;
function retrieve(jobID: string): Promise<FineTuningJob>;
function list(params?: FineTuningJobListParams): Promise<FineTuningJobsPage>;
function cancel(jobID: string): Promise<FineTuningJob>;
function pause(jobID: string): Promise<FineTuningJob>;
function resume(jobID: string): Promise<FineTuningJob>;

Available at: client.fineTuning.jobs

Job Monitoring and Events

Track job progress through detailed event logs with status updates, metrics, and error information. Events include training progress, validation results, and completion notifications.

function listEvents(jobID: string, params?: JobEventListParams): Promise<FineTuningJobEventsPage>;

Available at: client.fineTuning.jobs.listEvents()

Checkpoint Management

Access intermediate model checkpoints during fine-tuning to evaluate progress and use partially-trained models. Each checkpoint includes training metrics at specific steps.

function list(jobID: string, params?: CheckpointListParams): Promise<FineTuningJobCheckpointsPage>;

Available at: client.fineTuning.jobs.checkpoints.list()

Checkpoint Permissions

Manage sharing permissions for fine-tuned checkpoints, allowing you to grant or revoke access to specific checkpoints for other users or organizations.

// Create permission for a checkpoint
function create(
  fineTunedModelCheckpoint: string,
  body: PermissionCreateParams,
  options?: RequestOptions
): Promise<PermissionCreateResponsesPage>;

// Retrieve permission details
function retrieve(
  fineTunedModelCheckpoint: string,
  query?: PermissionRetrieveParams,
  options?: RequestOptions
): Promise<PermissionRetrieveResponse>;

// Delete/revoke permission
function delete(
  permissionID: string,
  params: PermissionDeleteParams,
  options?: RequestOptions
): Promise<PermissionDeleteResponse>;

Available at: client.fineTuning.checkpoints.permissions

Alpha Features - Grader Validation

Experimental grader tools for validating and testing graders before using them in fine-tuning jobs. These features are in alpha and subject to change.

// Run a grader on test data
function run(body: GraderRunParams): Promise<GraderRunResponse>;

// Validate grader configuration
function validate(body: GraderValidateParams): Promise<GraderValidateResponse>;

interface GraderRunParams {
  grader: StringCheckGrader | TextSimilarityGrader | PythonGrader | ScoreModelGrader | LabelModelGrader | MultiGrader;
  model_sample: string;
  item?: unknown;
}

interface GraderValidateParams {
  grader: StringCheckGrader | TextSimilarityGrader | PythonGrader | ScoreModelGrader | LabelModelGrader | MultiGrader;
}

Available at: client.fineTuning.alpha.graders

Note: These are alpha/experimental features. The API may change in future versions.


Core Types

FineTuningJob { .api }

Represents a fine-tuning job that has been created through the API.

interface FineTuningJob {
  id: string;
  created_at: number;
  finished_at: number | null;
  error: FineTuningJob.Error | null;
  fine_tuned_model: string | null;
  hyperparameters: FineTuningJob.Hyperparameters;
  model: string;
  object: 'fine_tuning.job';
  organization_id: string;
  result_files: Array<string>;
  seed: number;
  status: 'validating_files' | 'queued' | 'running' | 'succeeded' | 'failed' | 'cancelled';
  trained_tokens: number | null;
  training_file: string;
  validation_file: string | null;
  estimated_finish?: number | null;
  integrations?: Array<FineTuningJobIntegration> | null;
  metadata?: Record<string, string> | null;
  method?: FineTuningJob.Method;
}

namespace FineTuningJob {
  interface Error {
    code: string;
    message: string;
    param: string | null;
  }

  interface Hyperparameters {
    batch_size?: 'auto' | number | null;
    learning_rate_multiplier?: 'auto' | number;
    n_epochs?: 'auto' | number;
  }

  interface Method {
    type: 'supervised' | 'dpo' | 'reinforcement';
    dpo?: DpoMethod;
    reinforcement?: ReinforcementMethod;
    supervised?: SupervisedMethod;
  }
}

FineTuningJobEvent { .api }

Event log entry for a fine-tuning job containing status updates and metrics.

interface FineTuningJobEvent {
  id: string;
  created_at: number;
  level: 'info' | 'warn' | 'error';
  message: string;
  object: 'fine_tuning.job.event';
  data?: unknown;
  type?: 'message' | 'metrics';
}

FineTuningJobCheckpoint { .api }

Represents an intermediate model checkpoint during a fine-tuning job, ready for evaluation or use.

interface FineTuningJobCheckpoint {
  id: string;
  created_at: number;
  fine_tuned_model_checkpoint: string;
  fine_tuning_job_id: string;
  metrics: FineTuningJobCheckpoint.Metrics;
  object: 'fine_tuning.job.checkpoint';
  step_number: number;
}

namespace FineTuningJobCheckpoint {
  interface Metrics {
    full_valid_loss?: number;
    full_valid_mean_token_accuracy?: number;
    step?: number;
    train_loss?: number;
    train_mean_token_accuracy?: number;
    valid_loss?: number;
    valid_mean_token_accuracy?: number;
  }
}

Training Method Types

SupervisedMethod { .api }

Standard supervised fine-tuning configuration for training on input-output pairs.

interface SupervisedMethod {
  hyperparameters?: SupervisedHyperparameters;
}

interface SupervisedHyperparameters {
  batch_size?: 'auto' | number;
  learning_rate_multiplier?: 'auto' | number;
  n_epochs?: 'auto' | number;
}

DpoMethod { .api }

Direct Preference Optimization configuration for training with preference pairs (preferred vs. dispreferred responses).

interface DpoMethod {
  hyperparameters?: DpoHyperparameters;
}

interface DpoHyperparameters {
  batch_size?: 'auto' | number;
  beta?: 'auto' | number;
  learning_rate_multiplier?: 'auto' | number;
  n_epochs?: 'auto' | number;
}

ReinforcementMethod { .api }

Reinforcement learning configuration for training with reward scoring.

interface ReinforcementMethod {
  grader: StringCheckGrader | TextSimilarityGrader | PythonGrader | ScoreModelGrader | MultiGrader;
  hyperparameters?: ReinforcementHyperparameters;
}

interface ReinforcementHyperparameters {
  batch_size?: 'auto' | number;
  compute_multiplier?: 'auto' | number;
  eval_interval?: 'auto' | number;
  eval_samples?: 'auto' | number;
  learning_rate_multiplier?: 'auto' | number;
  n_epochs?: 'auto' | number;
  reasoning_effort?: 'default' | 'low' | 'medium' | 'high';
}

Grader Types

Graders are used in reinforcement learning fine-tuning to automatically score model outputs and provide rewards for training.

LabelModelGrader { .api }

Uses a language model to assign labels to evaluation items. Useful for classification-style evaluation where outputs should fall into specific categories.

interface LabelModelGrader {
  input: Array<LabelModelGraderInput>;
  labels: string[];
  model: string;
  name: string;
  passing_labels: string[];
  type: 'label_model';
}

interface LabelModelGraderInput {
  content: string | ResponseInputText | OutputText | InputImage | ResponseInputAudio | Array<unknown>;
  role: 'user' | 'assistant' | 'system' | 'developer';
  type?: 'message';
}

interface OutputText {
  text: string;
  type: 'output_text';
}

interface InputImage {
  image_url: string;
  type: 'input_image';
  detail?: string;
}

Properties:

  • input: Array of message inputs to the grader model, can include template strings
  • labels: Available labels to assign to each evaluation item
  • model: The model to use for evaluation (must support structured outputs)
  • name: Identifier for the grader
  • passing_labels: Labels that indicate a passing result (must be subset of labels)
  • type: Always 'label_model'
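A label-model grader config can be built as a plain object before passing it to a fine-tuning job or the alpha grader endpoints. In this sketch the model name, labels, and template variables are illustrative choices, not prescribed values.

```typescript
// Illustrative label-model grader: classifies an answer as correct/incorrect
const labelGrader = {
  type: 'label_model' as const,
  name: 'correctness-labeler',
  model: 'gpt-4o-mini',
  labels: ['correct', 'incorrect'],
  passing_labels: ['correct'],
  input: [
    {
      role: 'system' as const,
      content: 'Label the answer as correct or incorrect.',
    },
    {
      role: 'user' as const,
      content: 'Question: {{ item.question }}\nAnswer: {{ sample.output_text }}',
    },
  ],
};

// passing_labels must be a subset of labels; easy to check client-side
const isSubset = labelGrader.passing_labels.every(l =>
  labelGrader.labels.includes(l),
);
console.log(isSubset); // true
```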

StringCheckGrader { .api }

Performs string comparison operations between input and reference text.

interface StringCheckGrader {
  input: string;
  name: string;
  operation: 'eq' | 'ne' | 'like' | 'ilike';
  reference: string;
  type: 'string_check';
}

Properties:

  • operation: 'eq' (equals), 'ne' (not equals), 'like' (SQL LIKE), 'ilike' (case-insensitive LIKE)

TextSimilarityGrader { .api }

Grades text based on similarity metrics. Supports various metrics for comparing model output with reference text.

interface TextSimilarityGrader {
  evaluation_metric: 'cosine' | 'fuzzy_match' | 'bleu' | 'gleu' | 'meteor' | 'rouge_1' | 'rouge_2' | 'rouge_3' | 'rouge_4' | 'rouge_5' | 'rouge_l';
  input: string;
  name: string;
  reference: string;
  type: 'text_similarity';
}

PythonGrader { .api }

Executes custom Python code for evaluation. Provides maximum flexibility for complex grading logic.

interface PythonGrader {
  name: string;
  source: string;
  type: 'python';
  image_tag?: string;
}

Properties:

  • source: Python code to execute for grading
  • image_tag: Optional Docker image tag for the Python environment
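A Python grader carries its grading logic as a source string. The `grade(sample, item)` entry-point signature below is an assumption for illustration; consult the current OpenAI grader documentation for the exact contract.

```typescript
// Illustrative Python grader: rewards shorter outputs
const pythonGrader = {
  type: 'python' as const,
  name: 'length-penalty',
  // Python source shipped as a string; the grade() signature is assumed
  source: [
    'def grade(sample, item) -> float:',
    '    output = sample["output_text"]',
    '    return 1.0 if len(output) < 500 else 0.5',
  ].join('\n'),
};

console.log(pythonGrader.source.split('\n').length); // 3
```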

ScoreModelGrader { .api }

Uses a language model to assign numerical scores to outputs. Useful for open-ended evaluation criteria.

interface ScoreModelGrader {
  input: Array<ScoreModelGraderInput>;
  model: string;
  name: string;
  type: 'score_model';
  range?: [number, number];
  sampling_params?: SamplingParams;
}

interface ScoreModelGraderInput {
  content: string | ResponseInputText | OutputText | InputImage | ResponseInputAudio | Array<unknown>;
  role: 'user' | 'assistant' | 'system' | 'developer';
  type?: 'message';
}

interface SamplingParams {
  max_completions_tokens?: number | null;
  reasoning_effort?: 'none' | 'minimal' | 'low' | 'medium' | 'high' | null;
  seed?: number | null;
  temperature?: number | null;
  top_p?: number | null;
}

Properties:

  • range: Score range (defaults to [0, 1])
  • sampling_params: Optional parameters for controlling model behavior
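A score-model grader config with an explicit range and deterministic sampling might look like the sketch below; the model name and prompt are illustrative. Clamping a raw score into the configured range is a simple client-side helper.

```typescript
// Illustrative score-model grader scoring helpfulness on a 0-10 scale
const scoreGrader = {
  type: 'score_model' as const,
  name: 'helpfulness-score',
  model: 'gpt-4o-mini',
  range: [0, 10] as [number, number],
  sampling_params: { temperature: 0, seed: 42 },
  input: [
    {
      role: 'user' as const,
      content: 'Rate the helpfulness of this answer from 0 to 10: {{ sample.output_text }}',
    },
  ],
};

// Clamp a raw score into the configured range
function clampToRange(score: number, [min, max]: [number, number]): number {
  return Math.min(max, Math.max(min, score));
}

console.log(clampToRange(12, scoreGrader.range)); // 10
```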

MultiGrader { .api }

Combines multiple graders using a formula to produce a final score.

interface MultiGrader {
  calculate_output: string;
  graders: Record<string, StringCheckGrader | TextSimilarityGrader | PythonGrader | ScoreModelGrader | LabelModelGrader>;
  name: string;
  type: 'multi';
}

Properties:

  • calculate_output: Formula to calculate final score from grader results
  • graders: The individual graders to combine
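A multi-grader combines named sub-graders, with `calculate_output` referencing them by name. The weighting formula and grader keys below are illustrative; the exact expression syntax accepted by `calculate_output` is an assumption here.

```typescript
// Illustrative multi-grader: weighted blend of two sub-graders
const multiGrader = {
  type: 'multi' as const,
  name: 'combined-score',
  // Formula references sub-graders by their keys (syntax is illustrative)
  calculate_output: '0.7 * similarity + 0.3 * exact',
  graders: {
    exact: {
      type: 'string_check' as const,
      name: 'exact',
      input: '{{ sample.output_text }}',
      operation: 'eq' as const,
      reference: '{{ item.expected }}',
    },
    similarity: {
      type: 'text_similarity' as const,
      name: 'similarity',
      evaluation_metric: 'fuzzy_match' as const,
      input: '{{ sample.output_text }}',
      reference: '{{ item.expected }}',
    },
  },
};

console.log(Object.keys(multiGrader.graders).length); // 2
```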

Pagination Types

type FineTuningJobsPage = CursorPage<FineTuningJob>;
type FineTuningJobEventsPage = CursorPage<FineTuningJobEvent>;
type FineTuningJobCheckpointsPage = CursorPage<FineTuningJobCheckpoint>;

Examples

Creating a Fine-Tuning Job (Supervised)

Train a model using standard supervised learning with input-output pairs.

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// Create a fine-tuning job
const job = await client.fineTuning.jobs.create({
  model: 'gpt-4o-mini',
  training_file: 'file-abc123', // JSONL file with training data
  method: {
    type: 'supervised',
    supervised: {
      hyperparameters: {
        batch_size: 8,
        learning_rate_multiplier: 1.0,
        n_epochs: 3,
      },
    },
  },
  suffix: 'my-fine-tuned-model',
});

console.log(`Job created: ${job.id}`);
console.log(`Status: ${job.status}`);
console.log(`Model: ${job.model}`);

Creating a DPO Fine-Tuning Job

Train using Direct Preference Optimization with preference pairs.

const dpoJob = await client.fineTuning.jobs.create({
  model: 'gpt-4o-mini',
  training_file: 'file-dpo-pairs-123', // JSONL with preference pairs
  method: {
    type: 'dpo',
    dpo: {
      hyperparameters: {
        batch_size: 16,
        beta: 0.1,
        learning_rate_multiplier: 0.5,
        n_epochs: 1,
      },
    },
  },
});

console.log(`DPO Job: ${dpoJob.id}`);

Creating a Reinforcement Learning Fine-Tuning Job

Train using reinforcement learning with reward scoring.

const rlJob = await client.fineTuning.jobs.create({
  model: 'gpt-4o-mini',
  training_file: 'file-rl-data-123',
  method: {
    type: 'reinforcement',
    reinforcement: {
      grader: {
        type: 'string_check', // or 'text_similarity', 'python', 'score_model', 'multi'
        name: 'string-check-grader',
        input: '{{ sample.output }}',
        operation: 'eq',
        reference: 'expected_output',
      },
      hyperparameters: {
        batch_size: 'auto',
        n_epochs: 2,
        learning_rate_multiplier: 0.8,
        eval_interval: 100,
        eval_samples: 50,
      },
    },
  },
});

console.log(`RL Job: ${rlJob.id}`);

Retrieving Job Details

Get complete information about a specific fine-tuning job.

const jobId = 'ft-AF1WoRqd3aJAHsqc9NY7iL8F';
const job = await client.fineTuning.jobs.retrieve(jobId);

console.log(`Job Status: ${job.status}`);
console.log(`Created: ${new Date(job.created_at * 1000).toISOString()}`);
console.log(`Fine-tuned Model: ${job.fine_tuned_model}`);
console.log(`Trained Tokens: ${job.trained_tokens}`);

if (job.error) {
  console.error(`Error: ${job.error.message}`);
}

Monitoring Job Progress with Events

Track job execution through event logs including training metrics.

const jobId = 'ft-AF1WoRqd3aJAHsqc9NY7iL8F';

// Iterate through all events
for await (const event of client.fineTuning.jobs.listEvents(jobId)) {
  console.log(`[${event.level}] ${event.message}`);

  if (event.type === 'metrics') {
    console.log('Metrics:', event.data);
  }
}

// List with pagination parameters
const eventPage = await client.fineTuning.jobs.listEvents(jobId, {
  limit: 10,
});

console.log(`Retrieved ${eventPage.data.length} events`);

Working with Checkpoints

Access intermediate model checkpoints and their metrics.

const jobId = 'ft-AF1WoRqd3aJAHsqc9NY7iL8F';

// Get all checkpoints for a job
for await (const checkpoint of client.fineTuning.jobs.checkpoints.list(jobId)) {
  console.log(`Checkpoint: ${checkpoint.fine_tuned_model_checkpoint}`);
  console.log(`Step: ${checkpoint.step_number}`);
  console.log(`Training Loss: ${checkpoint.metrics.train_loss}`);
  console.log(`Validation Loss: ${checkpoint.metrics.valid_loss}`);
  console.log(`Token Accuracy: ${checkpoint.metrics.valid_mean_token_accuracy}`);
}

// List checkpoints with pagination
const checkpointPage = await client.fineTuning.jobs.checkpoints.list(jobId, {
  limit: 5,
});

const bestCheckpoint = checkpointPage.data.reduce((best, current) => {
  // Treat a missing validation loss as worst-case so it never wins
  return (current.metrics.valid_loss ?? Infinity) < (best.metrics.valid_loss ?? Infinity)
    ? current
    : best;
});

console.log(`Best checkpoint by validation loss: ${bestCheckpoint.id}`);

Listing Fine-Tuning Jobs

Retrieve all fine-tuning jobs in your organization with filtering.

// List all jobs
for await (const job of client.fineTuning.jobs.list()) {
  console.log(`${job.id}: ${job.status} (Model: ${job.model})`);
}

// List with filters
const jobsPage = await client.fineTuning.jobs.list({
  limit: 20,
});

const runningJobs = jobsPage.data.filter(j => j.status === 'running');
console.log(`Active jobs: ${runningJobs.length}`);

// Filter by metadata
const metadataFilteredJobs = await client.fineTuning.jobs.list({
  metadata: {
    'project': 'chatbot-v2',
  },
});

Controlling Job Execution

Pause, resume, and cancel jobs as needed.

const jobId = 'ft-AF1WoRqd3aJAHsqc9NY7iL8F';

// Pause a running job
const pausedJob = await client.fineTuning.jobs.pause(jobId);
console.log(`Job paused: ${pausedJob.status}`);

// Wait a bit...
await new Promise(resolve => setTimeout(resolve, 5000));

// Resume the job
const resumedJob = await client.fineTuning.jobs.resume(jobId);
console.log(`Job resumed: ${resumedJob.status}`);

// Cancel a job (can cancel running, queued, or paused jobs)
const cancelledJob = await client.fineTuning.jobs.cancel(jobId);
console.log(`Job cancelled: ${cancelledJob.status}`); // status: 'cancelled'

Training Data Format

Fine-tuning data must be formatted as JSONL (JSON Lines) files. Different formats are required depending on the training method and model type.

Supervised Training - Chat Format

For chat-based models like GPT-4 and GPT-3.5-turbo with supervised learning:

{"messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Question about biology"}, {"role": "assistant", "content": "The answer is..."}]}
{"messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Question about physics"}, {"role": "assistant", "content": "The answer is..."}]}
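Building these lines programmatically keeps the JSONL well-formed (exactly one JSON object per line). A small helper sketch:

```typescript
// Serialize one chat-format training example as a JSONL line
interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

function toChatTrainingLine(messages: ChatMessage[]): string {
  return JSON.stringify({ messages });
}

const line = toChatTrainingLine([
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Question about biology' },
  { role: 'assistant', content: 'The answer is...' },
]);

console.log(line.startsWith('{"messages":')); // true
```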

Supervised Training - Completions Format

For models using completions format:

{"prompt": "Write a poem about:", "completion": " nature and its beauty"}
{"prompt": "What is the capital of France?", "completion": " Paris"}

DPO Training - Preference Format

For Direct Preference Optimization with preference pairs:

{"input": {"messages": [{"role": "user", "content": "Question"}]}, "preferred_output": [{"role": "assistant", "content": "Better answer"}], "non_preferred_output": [{"role": "assistant", "content": "Worse answer"}]}
{"input": {"messages": [{"role": "user", "content": "Another question"}]}, "preferred_output": [{"role": "assistant", "content": "Preferred response"}], "non_preferred_output": [{"role": "assistant", "content": "Dispreferred response"}]}

Reinforcement Learning Format

For RL training with prompts (rewards are assigned via grader):

{"messages": [{"role": "user", "content": "Write a story about adventure"}]}
{"messages": [{"role": "user", "content": "Explain quantum computing"}]}

Data Preparation Best Practices

import * as fs from 'fs';

// Example: Convert CSV training data to JSONL format
function csvToJsonl(csvFilePath: string): void {
  const lines = fs.readFileSync(csvFilePath, 'utf-8').split('\n');
  const [header, ...rows] = lines;
  const headers = header.split(',');

  const jsonlLines = rows
    .filter(row => row.trim())
    .map(row => {
      const values = row.split(',');
      const obj: any = {};
      headers.forEach((h, i) => {
        obj[h.trim()] = values[i]?.trim();
      });
      return JSON.stringify({
        messages: [
          { role: 'user', content: obj.prompt },
          { role: 'assistant', content: obj.completion },
        ],
      });
    })
    .join('\n');

  fs.writeFileSync('training_data.jsonl', jsonlLines);
}

// Validate JSONL format
function validateJsonl(jsonlPath: string): boolean {
  const lines = fs.readFileSync(jsonlPath, 'utf-8').split('\n');
  return lines
    .filter(line => line.trim())
    .every(line => {
      try {
        JSON.parse(line);
        return true;
      } catch {
        return false;
      }
    });
}

// Upload training file
async function uploadTrainingFile(client: OpenAI, jsonlPath: string): Promise<string> {
  const fileContent = fs.createReadStream(jsonlPath);
  const response = await client.files.create({
    file: fileContent,
    purpose: 'fine-tune',
  });
  return response.id;
}

Hyperparameter Tuning

Fine-tuning outcomes depend heavily on hyperparameter selection. Here's a guide to tuning each parameter.

Batch Size

Controls how many examples are processed before updating model weights.

  • Effect: Larger batch sizes lead to more stable training but slower convergence
  • 'auto' (recommended): OpenAI automatically selects based on dataset
  • Typical Range: 1-256
  • Tradeoff: Larger batches = lower variance, less frequent updates
  • Guidance: Start with 'auto', then experiment with 8, 16, 32 if needed

// Conservative tuning with larger batch size
hyperparameters: {
  batch_size: 32, // More stable, slower
}

// Aggressive tuning with smaller batch size
hyperparameters: {
  batch_size: 8, // Faster convergence, more noise
}

Learning Rate Multiplier

Scales the base learning rate for the fine-tuning process.

  • Effect: Controls the magnitude of weight updates
  • Typical Range: 0.02 to 2.0
  • 'auto' (recommended): Automatically selected based on model
  • Guidance:
    • < 1.0: More conservative, less overfitting risk
    • 1.0: Default, balanced training
    • > 1.0: More aggressive, faster convergence

// Conservative fine-tuning (prefer stability)
hyperparameters: {
  learning_rate_multiplier: 0.5, // Half the default rate
}

// Aggressive fine-tuning (prefer speed)
hyperparameters: {
  learning_rate_multiplier: 2.0, // Double the default rate
}

Number of Epochs

How many complete passes through the training data to perform.

  • Effect: More epochs generally improve performance but risk overfitting
  • Typical Range: 1-10
  • 'auto' (recommended): Automatically selected
  • Guidance:
    • 1 epoch: Fast, may underfit
    • 3-4 epochs: Balanced (recommended)
    • 5+ epochs: Risk of overfitting on small datasets

// Small dataset - few epochs to avoid overfitting
hyperparameters: {
  n_epochs: 1,
}

// Large dataset - more epochs for better convergence
hyperparameters: {
  n_epochs: 4,
}

DPO-Specific: Beta Parameter

The beta value weights the penalty between policy and reference model.

  • Effect: Higher beta penalizes divergence from the reference model more strongly
  • Typical Range: 0.05 to 0.5
  • 'auto' (recommended): Automatically tuned
  • Guidance:
    • Low beta (0.05): Weaker constraint; the model can drift further from the reference
    • High beta (0.3+): Strong constraint; output stays close to the reference model

dpo: {
  hyperparameters: {
    beta: 0.1, // Moderate preference alignment
  },
}

Hyperparameter Tuning Workflow

async function tuneFinetuningModel(
  client: OpenAI,
  trainingFile: string,
  validationFile: string,
): Promise<string> {
  const configurations = [
    {
      name: 'conservative',
      batch_size: 32,
      learning_rate_multiplier: 0.5,
      n_epochs: 2,
    },
    {
      name: 'balanced',
      batch_size: 16,
      learning_rate_multiplier: 1.0,
      n_epochs: 3,
    },
    {
      name: 'aggressive',
      batch_size: 8,
      learning_rate_multiplier: 2.0,
      n_epochs: 4,
    },
  ];

  const results: Array<{ config: string; jobId: string; metrics: any }> = [];

  for (const config of configurations) {
    console.log(`Starting ${config.name} configuration...`);

    const job = await client.fineTuning.jobs.create({
      model: 'gpt-4o-mini',
      training_file: trainingFile,
      validation_file: validationFile,
      method: {
        type: 'supervised',
        supervised: {
          hyperparameters: {
            batch_size: config.batch_size,
            learning_rate_multiplier: config.learning_rate_multiplier,
            n_epochs: config.n_epochs,
          },
        },
      },
      suffix: `tune-${config.name}`,
      metadata: {
        'experiment': 'hyperparameter-tuning',
        'config': config.name,
      },
    });

    results.push({
      config: config.name,
      jobId: job.id,
      metrics: {
        batch_size: config.batch_size,
        learning_rate_multiplier: config.learning_rate_multiplier,
        n_epochs: config.n_epochs,
      },
    });
  }

  // Wait for jobs and compare results
  for (const result of results) {
    let job = await client.fineTuning.jobs.retrieve(result.jobId);

    // Poll until completion
    while (job.status === 'running' || job.status === 'queued') {
      await new Promise(resolve => setTimeout(resolve, 30000)); // Wait 30s
      job = await client.fineTuning.jobs.retrieve(result.jobId);
    }

    if (job.status === 'succeeded') {
      console.log(`${result.config} job succeeded: ${job.fine_tuned_model}`);
      console.log(`  Trained tokens: ${job.trained_tokens}`);
    }
  }

  return results[0].jobId; // Return first result ID
}

Advanced Usage

Using Metadata for Job Organization

Tag and filter jobs with custom metadata for better organization and tracking.

// Create job with metadata
const job = await client.fineTuning.jobs.create({
  model: 'gpt-4o-mini',
  training_file: 'file-123',
  metadata: {
    'project': 'customer-support',
    'version': '1.0',
    'team': 'ai-products',
    'environment': 'production',
  },
});

// Later, filter jobs by metadata
const productionJobs = await client.fineTuning.jobs.list({
  metadata: {
    'environment': 'production',
  },
});

for await (const job of productionJobs) {
  console.log(`${job.id} - ${job.metadata?.project}`);
}

Weights and Biases Integration

Monitor fine-tuning jobs in real-time using Weights and Biases.

const job = await client.fineTuning.jobs.create({
  model: 'gpt-4o-mini',
  training_file: 'file-123',
  integrations: [
    {
      type: 'wandb',
      wandb: {
        project: 'openai-fine-tuning',
        entity: 'my-team',
        name: 'gpt4-mini-v1',
        tags: ['production', 'customer-support'],
      },
    },
  ],
});

console.log(`Monitor at: https://wandb.ai/my-team/openai-fine-tuning`);

Reproducible Training

Use seeds for reproducible fine-tuning results.

const seed = 42;

// Job 1
const job1 = await client.fineTuning.jobs.create({
  model: 'gpt-4o-mini',
  training_file: 'file-123',
  seed: seed,
  suffix: 'run1',
});

// Job 2 with the same seed and parameters should produce the same results
const job2 = await client.fineTuning.jobs.create({
  model: 'gpt-4o-mini',
  training_file: 'file-123',
  seed: seed,
  suffix: 'run2',
});

// Both jobs should produce equivalent models

Validation File Usage

Provide validation data to monitor generalization during training.

const job = await client.fineTuning.jobs.create({
  model: 'gpt-4o-mini',
  training_file: 'file-train-123',
  validation_file: 'file-val-456', // Optional but recommended
  method: {
    type: 'supervised',
    supervised: {
      hyperparameters: {
        batch_size: 16,
        n_epochs: 3,
        learning_rate_multiplier: 1.0,
      },
    },
  },
});

// Monitor training and validation metrics via events.
// Note: event.data is typed as unknown, so inspect it before drilling in.
for await (const event of client.fineTuning.jobs.listEvents(job.id)) {
  if (event.type === 'metrics') {
    console.log('Metrics:', event.data);
  }
}

Checkpoint-Based Model Selection

Use checkpoints to select the best intermediate model rather than the final one.

async function findBestCheckpoint(
  client: OpenAI,
  jobId: string,
): Promise<string> {
  let bestCheckpoint: any = null;
  let bestValidationLoss = Infinity;

  for await (const checkpoint of client.fineTuning.jobs.checkpoints.list(jobId)) {
    const validationLoss = checkpoint.metrics.valid_loss || Infinity;

    if (validationLoss < bestValidationLoss) {
      bestValidationLoss = validationLoss;
      bestCheckpoint = checkpoint;
    }
  }

  if (bestCheckpoint) {
    console.log(
      `Best checkpoint at step ${bestCheckpoint.step_number}: ${bestCheckpoint.fine_tuned_model_checkpoint}`,
    );
    return bestCheckpoint.fine_tuned_model_checkpoint;
  }

  throw new Error('No checkpoints found');
}

// Use the checkpoint model
const bestModel = await findBestCheckpoint(client, 'ft-123');
const completion = await client.chat.completions.create({
  model: bestModel,
  messages: [{ role: 'user', content: 'Hello' }],
});

Long-Running Job Polling

Monitor job completion with exponential backoff polling.

async function pollJobUntilComplete(
  client: OpenAI,
  jobId: string,
  maxWaitMs = 7200000, // 2 hours
): Promise<FineTuningJob> {
  const startTime = Date.now();
  let pollInterval = 5000; // Start at 5 seconds
  const maxPollInterval = 60000; // Cap at 60 seconds

  while (Date.now() - startTime < maxWaitMs) {
    const job = await client.fineTuning.jobs.retrieve(jobId);

    if (job.status === 'succeeded' || job.status === 'failed' || job.status === 'cancelled') {
      return job;
    }

    console.log(`Job ${jobId} status: ${job.status}`);
    if (job.status === 'running' && job.estimated_finish) {
      const remaining = job.estimated_finish * 1000 - Date.now();
      console.log(`Estimated time remaining: ${Math.ceil(remaining / 1000)} seconds`);
    }

    await new Promise(resolve => setTimeout(resolve, pollInterval));

    // Exponential backoff
    pollInterval = Math.min(pollInterval * 1.5, maxPollInterval);
  }

  throw new Error(`Job ${jobId} did not complete within ${maxWaitMs}ms`);
}

// Usage
const completedJob = await pollJobUntilComplete(client, 'ft-123');
console.log(`Job completed with status: ${completedJob.status}`);

Bulk Job Monitoring

Track multiple fine-tuning jobs simultaneously.

async function monitorMultipleJobs(client: OpenAI, jobIds: string[]): Promise<void> {
  const statusMap = new Map<string, string>();
  jobIds.forEach(id => statusMap.set(id, 'unknown'));

  const updateStatus = async () => {
    for (const jobId of jobIds) {
      const job = await client.fineTuning.jobs.retrieve(jobId);
      statusMap.set(jobId, job.status);
    }
  };

  const allComplete = () =>
    Array.from(statusMap.values()).every(
      status =>
        status === 'succeeded' ||
        status === 'failed' ||
        status === 'cancelled',
    );

  while (!allComplete()) {
    await updateStatus();

    console.clear();
    console.log('Fine-Tuning Jobs Status:');
    for (const [id, status] of statusMap) {
      const symbol =
        status === 'succeeded'
          ? '✓'
          : status === 'failed'
            ? '✗'
            : status === 'running'
              ? '→'
              : '-';
      console.log(`${symbol} ${id}: ${status}`);
    }

    if (!allComplete()) {
      await new Promise(resolve => setTimeout(resolve, 30000)); // Check every 30s
    }
  }

  console.log('\nAll jobs completed!');
}

// Usage
await monitorMultipleJobs(client, [
  'ft-123',
  'ft-456',
  'ft-789',
]);

Supported Models

Fine-tuning is available for the following models:

  • gpt-4o-mini (Recommended for most use cases)
  • gpt-3.5-turbo
  • davinci-002
  • babbage-002

Model availability and capabilities may change. Check the OpenAI documentation for the most current list.


Error Handling

import { BadRequestError, NotFoundError } from 'openai';

async function createJobWithErrorHandling(
  client: OpenAI,
  trainingFile: string,
) {
  try {
    const job = await client.fineTuning.jobs.create({
      model: 'gpt-4o-mini',
      training_file: trainingFile,
    });
    return job;
  } catch (error) {
    if (error instanceof BadRequestError) {
      console.error('Invalid request:', error.message);
      // Usually validation errors in training data format
    } else if (error instanceof NotFoundError) {
      console.error('Training file not found:', error.message);
    } else {
      throw error;
    }
  }
}

// Monitor job for errors
const job = await client.fineTuning.jobs.retrieve(jobId);

if (job.status === 'failed' && job.error) {
  console.error(
    `Job failed: ${job.error.code} - ${job.error.message}`,
  );
  console.error(`Failed parameter: ${job.error.param}`);
}

See Also

  • Chat Completions - Use fine-tuned models for chat
  • Files and Uploads - Upload training data
  • Embeddings - Generate vector embeddings

Install with Tessl CLI

npx tessl i tessl/npm-openai
