tessl/npm-openai

The official TypeScript library for the OpenAI API

docs/vector-stores.md

Vector Stores

Core functionality for creating and managing vector stores with the OpenAI API. Vector stores enable semantic search and file management for AI assistants.

Overview

Vector stores are collections of processed files that can be used with the file_search tool in assistants. They support semantic searching, batch file operations, and flexible chunking strategies for text extraction and embedding.

Capabilities

Create Vector Store

Creates a new vector store with optional initial files and configuration.

function create(params: VectorStoreCreateParams): Promise<VectorStore>;

interface VectorStoreCreateParams {
  name?: string;
  description?: string;
  file_ids?: Array<string>;
  chunking_strategy?: FileChunkingStrategyParam;
  expires_after?: {
    anchor: 'last_active_at';
    days: number;
  };
  metadata?: Record<string, string> | null;
}

Example:

import { OpenAI } from 'openai';

const client = new OpenAI();

// Create a basic vector store
const store = await client.vectorStores.create({
  name: 'Product Documentation',
  description: 'API and product docs',
});

// Create with initial files
const storeWithFiles = await client.vectorStores.create({
  name: 'Support Docs',
  file_ids: ['file-id-1', 'file-id-2'],
  chunking_strategy: {
    type: 'static',
    static: {
      max_chunk_size_tokens: 800,
      chunk_overlap_tokens: 400,
    },
  },
});

// Create with auto-expiration
const expiringStore = await client.vectorStores.create({
  name: 'Temporary Data',
  expires_after: {
    anchor: 'last_active_at',
    days: 30,
  },
});

Parameters:

  • name - Display name for the vector store
  • description - Optional description
  • file_ids - Array of file IDs to attach (must be pre-uploaded)
  • chunking_strategy - Text chunking configuration (defaults to auto)
  • expires_after - Automatic expiration policy
  • metadata - Key-value pairs for custom data (max 16 entries)

Status Values:

  • completed - Ready for use
  • in_progress - Files are being processed
  • expired - Vector store has expired


Retrieve Vector Store

Fetches details about a specific vector store.

function retrieve(storeID: string): Promise<VectorStore>;

Example:

const store = await client.vectorStores.retrieve('vs_abc123');

console.log(store.id);
console.log(store.status); // 'completed' | 'in_progress' | 'expired'
console.log(store.file_counts);
// {
//   total: 5,
//   completed: 4,
//   failed: 0,
//   in_progress: 1,
//   cancelled: 0
// }
console.log(store.usage_bytes);
console.log(store.created_at); // Unix timestamp
console.log(store.last_active_at); // Unix timestamp or null

Returns: a VectorStore with the current file processing status and metadata.
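
The file_counts object shown above is a convenient way to gauge how far processing has gotten. A minimal helper sketch (the interface below just mirrors the file_counts shape; the function name is ours, not the SDK's):

```typescript
// Shape of the file_counts object returned on a VectorStore.
interface FileCounts {
  total: number;
  completed: number;
  failed: number;
  in_progress: number;
  cancelled: number;
}

// Fraction of files that have finished processing (completed, failed,
// or cancelled) out of all files attached to the store.
function processingProgress(counts: FileCounts): number {
  if (counts.total === 0) return 1; // empty store: nothing left to process
  return (counts.total - counts.in_progress) / counts.total;
}
```

For the example counts above (total: 5, in_progress: 1), this yields 0.8.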


Update Vector Store

Modifies an existing vector store's name, metadata, or expiration policy.

function update(
  storeID: string,
  params: VectorStoreUpdateParams
): Promise<VectorStore>;

interface VectorStoreUpdateParams {
  name?: string | null;
  expires_after?: {
    anchor: 'last_active_at';
    days: number;
  } | null;
  metadata?: Record<string, string> | null;
}

Example:

const updated = await client.vectorStores.update('vs_abc123', {
  name: 'Updated Store Name',
  metadata: {
    department: 'support',
    version: '2.0',
  },
});

// Update expiration policy
await client.vectorStores.update('vs_abc123', {
  expires_after: {
    anchor: 'last_active_at',
    days: 60,
  },
});

// Clear metadata
await client.vectorStores.update('vs_abc123', {
  metadata: null,
});

List Vector Stores

Retrieves a paginated list of vector stores with optional filtering.

function list(
  params?: VectorStoreListParams
): Promise<VectorStoresPage>;

interface VectorStoreListParams {
  after?: string;  // Cursor for pagination
  before?: string; // Cursor for pagination
  limit?: number;  // Items per page (default 20)
  order?: 'asc' | 'desc'; // Sort by created_at
}

Example:

// List all stores
const page = await client.vectorStores.list();

for (const store of page.data) {
  console.log(`${store.name} (${store.id}): ${store.file_counts.completed}/${store.file_counts.total} files`);
}

// Paginate
if (page.hasNextPage()) {
  const nextPage = await page.getNextPage();
}

// Iterate all pages
for await (const page of (await client.vectorStores.list()).iterPages()) {
  for (const store of page.data) {
    console.log(store.name);
  }
}

// Sort by newest first
const newestStores = await client.vectorStores.list({
  order: 'desc',
  limit: 10,
});

// Iterate all stores across pages
for await (const store of await client.vectorStores.list()) {
  console.log(store.name);
}

Delete Vector Store

Permanently removes a vector store. Attached files remain in the Files API and must be deleted separately.

function delete(storeID: string): Promise<VectorStoreDeleted>;

interface VectorStoreDeleted {
  id: string;
  deleted: boolean;
  object: 'vector_store.deleted';
}

Example:

const result = await client.vectorStores.delete('vs_abc123');
console.log(result.deleted); // true

// Files are preserved - delete separately if needed
await client.files.delete('file-id-1');

Search Vector Store

Searches for relevant content chunks based on query and optional filters.

function search(
  storeID: string,
  params: VectorStoreSearchParams
): Promise<VectorStoreSearchResponsesPage>;

interface VectorStoreSearchParams {
  query: string | Array<string>;
  max_num_results?: number; // 1-50, default 20
  rewrite_query?: boolean;  // Rewrite for semantic search
  filters?: ComparisonFilter | CompoundFilter;
  ranking_options?: {
    ranker?: 'none' | 'auto' | 'default-2024-11-15';
    score_threshold?: number;
  };
}

Example:

// Basic search
const results = await client.vectorStores.search('vs_abc123', {
  query: 'How to authenticate API requests?',
  max_num_results: 5,
});

for (const result of results.data) {
  console.log(`File: ${result.filename}`);
  console.log(`Similarity: ${result.score}`);
  for (const chunk of result.content) {
    console.log(chunk.text);
  }
}

// Search with query rewriting
const rewritten = await client.vectorStores.search('vs_abc123', {
  query: 'auth stuff',
  rewrite_query: true, // Rewrites to better semantic query
});

// Search with filters
const filtered = await client.vectorStores.search('vs_abc123', {
  query: 'pricing',
  filters: {
    key: 'department',
    type: 'eq',
    value: 'sales',
  },
});

// Search multiple queries
const multi = await client.vectorStores.search('vs_abc123', {
  query: ['pricing', 'cost', 'billing'],
  max_num_results: 10,
});

// Advanced ranking options
const ranked = await client.vectorStores.search('vs_abc123', {
  query: 'documentation',
  ranking_options: {
    ranker: 'default-2024-11-15',
    score_threshold: 0.5,
  },
});

Returns: Search results with file references, content chunks, and similarity scores.
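
A common follow-on step is joining the returned chunks into a single context string for prompting. A sketch with local structural types that mirror the response shape above (the helper itself is ours, not part of the SDK):

```typescript
// Local structural type mirroring the search response entries shown above.
interface SearchResult {
  filename: string;
  score: number;
  content: Array<{ type: 'text'; text: string }>;
}

// Join each result's chunks under a filename header, preserving the
// API's relevance ordering, and separate results with blank lines.
function toContext(results: SearchResult[]): string {
  return results
    .map((r) => `[${r.filename}]\n` + r.content.map((c) => c.text).join('\n'))
    .join('\n\n');
}
```

The resulting string can be passed as retrieved context in a chat or responses request.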


Vector Store Files

File management for vector stores with chunking and polling support.

Access via: client.vectorStores.files

Add File to Vector Store

Attaches an existing file to a vector store for processing.

function create(
  storeID: string,
  params: FileCreateParams
): Promise<VectorStoreFile>;

interface FileCreateParams {
  file_id: string;
  attributes?: Record<string, string | number | boolean> | null;
  chunking_strategy?: FileChunkingStrategyParam;
}

Example:

// Add file to vector store
const vsFile = await client.vectorStores.files.create('vs_abc123', {
  file_id: 'file-id-1',
});

console.log(vsFile.status); // 'in_progress', 'completed', 'failed', 'cancelled'
console.log(vsFile.usage_bytes);

// Add with custom chunking
const customChunked = await client.vectorStores.files.create('vs_abc123', {
  file_id: 'file-id-2',
  chunking_strategy: {
    type: 'static',
    static: {
      max_chunk_size_tokens: 1024,
      chunk_overlap_tokens: 200,
    },
  },
  attributes: {
    source: 'support-docs',
    version: '1.0',
  },
});

// Add with auto chunking (default)
await client.vectorStores.files.create('vs_abc123', {
  file_id: 'file-id-3',
  chunking_strategy: {
    type: 'auto',
  },
});

Retrieve File Details

Gets metadata and status for a file in a vector store.

function retrieve(
  vectorStoreId: string,
  fileId: string
): Promise<VectorStoreFile>;

Example:

const file = await client.vectorStores.files.retrieve('vs_abc123', 'file-id-1');

console.log(file.status); // 'in_progress' | 'completed' | 'failed' | 'cancelled'
console.log(file.usage_bytes);
console.log(file.created_at);

if (file.status === 'failed') {
  console.log(file.last_error?.code); // 'server_error' | 'unsupported_file' | 'invalid_file'
  console.log(file.last_error?.message);
}

File Statuses:

  • in_progress - Currently being processed
  • completed - Ready for search and use
  • failed - Processing error (check last_error)
  • cancelled - Processing was cancelled

Update File Metadata

Modifies attributes and metadata for a file.

function update(
  vectorStoreId: string,
  fileId: string,
  params: FileUpdateParams
): Promise<VectorStoreFile>;

interface FileUpdateParams {
  attributes: Record<string, string | number | boolean> | null;
}

Example:

// Update file attributes
const updated = await client.vectorStores.files.update('vs_abc123', 'file-id-1', {
  attributes: {
    category: 'billing',
    reviewed: true,
    priority: 1,
  },
});

// Clear attributes
await client.vectorStores.files.update('vs_abc123', 'file-id-1', {
  attributes: null,
});

List Files in Vector Store

Retrieves a paginated list of files with filtering and sorting.

function list(
  storeID: string,
  params?: FileListParams
): Promise<VectorStoreFilesPage>;

interface FileListParams {
  after?: string;    // Cursor
  before?: string;   // Cursor
  limit?: number;    // Items per page
  filter?: 'in_progress' | 'completed' | 'failed' | 'cancelled';
  order?: 'asc' | 'desc'; // Sort by created_at
}

Example:

// List all files
const page = await client.vectorStores.files.list('vs_abc123');

for (const file of page.data) {
  console.log(`${file.id}: ${file.status} (${file.usage_bytes} bytes)`);
}

// Filter by status
const completed = await client.vectorStores.files.list('vs_abc123', {
  filter: 'completed',
});

const failed = await client.vectorStores.files.list('vs_abc123', {
  filter: 'failed',
});

// Sort and paginate
const sorted = await client.vectorStores.files.list('vs_abc123', {
  order: 'desc',
  limit: 10,
});

// Iterate all pages
for await (const file of await client.vectorStores.files.list('vs_abc123')) {
  console.log(file.id);
}

Remove File from Vector Store

Deletes a file's association with the vector store; the file itself remains in the Files API.

function del(
  vectorStoreId: string,
  fileId: string
): Promise<VectorStoreFileDeleted>;

Example:

const deleted = await client.vectorStores.files.del('vs_abc123', 'file-id-1');

console.log(deleted.deleted); // true

Note: Use client.files.delete() to permanently delete the file itself.


Get File Content

Retrieves parsed text content from a file in chunks.

function content(
  vectorStoreId: string,
  fileId: string
): Promise<FileContentResponsesPage>;

interface FileContentResponse {
  type?: string;  // 'text'
  text?: string;  // Content chunk
}

Example:

// Get file contents as paginated chunks
const content = await client.vectorStores.files.content('vs_abc123', 'file-id-1');

for (const chunk of content.data) {
  console.log(chunk.text);
}

// Iterate all chunks
for await (const chunk of await client.vectorStores.files.content('vs_abc123', 'file-id-1')) {
  console.log(chunk.text);
}

Helper Methods

Upload File Helper

Uploads a raw file to the Files API and adds it to the vector store in one operation.

async function upload(
  storeID: string,
  file: Uploadable,
  options?: RequestOptions
): Promise<VectorStoreFile>;

type Uploadable = File | Blob | Buffer | ReadStream | string;

Example:

import { toFile } from 'openai';
import fs from 'fs';

// From file path
const file = await client.vectorStores.files.upload('vs_abc123',
  await toFile(fs.createReadStream('./docs.pdf'), 'docs.pdf', { type: 'application/pdf' })
);

// From Buffer
const buffer = Buffer.from('PDF content here');
const vsFile = await client.vectorStores.files.upload('vs_abc123',
  await toFile(buffer, 'document.pdf', { type: 'application/pdf' })
);

// From a File object (browser)
const input = document.querySelector('input[type="file"]');
const browserFile = await client.vectorStores.files.upload('vs_abc123', input.files[0]);

Note: The file starts processing asynchronously. Use poll() to wait for completion.


Upload and Poll Helper

Uploads a file and waits for processing to complete.

async function uploadAndPoll(
  storeID: string,
  file: Uploadable,
  options?: RequestOptions & { pollIntervalMs?: number }
): Promise<VectorStoreFile>;

Example:

import { toFile } from 'openai';

// Upload and wait for processing
const completed = await client.vectorStores.files.uploadAndPoll(
  'vs_abc123',
  await toFile(Buffer.from('content'), 'doc.txt'),
  { pollIntervalMs: 2000 } // Check every 2 seconds
);

console.log(completed.status); // 'completed' or 'failed'

if (completed.status === 'failed') {
  console.error(completed.last_error?.message);
}

Create and Poll Helper

Attaches a pre-uploaded file to a vector store and waits for processing to complete (no upload step).

async function createAndPoll(
  storeID: string,
  body: FileCreateParams,
  options?: RequestOptions & { pollIntervalMs?: number }
): Promise<VectorStoreFile>;

Example:

// Attach pre-uploaded file and wait for processing
const completed = await client.vectorStores.files.createAndPoll(
  'vs_abc123',
  {
    file_id: 'file-abc123',
    chunking_strategy: {
      type: 'static',
      static: {
        max_chunk_size_tokens: 1024,
        chunk_overlap_tokens: 300,
      },
    },
  },
  { pollIntervalMs: 3000 }
);

Poll File Status

Manually polls a file until processing completes or fails.

async function poll(
  storeID: string,
  fileID: string,
  options?: RequestOptions & { pollIntervalMs?: number }
): Promise<VectorStoreFile>;

Example:

// Start processing
const file = await client.vectorStores.files.create('vs_abc123', {
  file_id: 'file-id-1',
});

// Poll manually
const completed = await client.vectorStores.files.poll('vs_abc123', file.id, {
  pollIntervalMs: 2000,
});

// Or in a loop with custom logic
while (true) {
  const status = await client.vectorStores.files.retrieve('vs_abc123', file.id);

  if (status.status === 'completed') {
    console.log('Done!');
    break;
  } else if (status.status === 'failed') {
    console.error(status.last_error?.message);
    break;
  }

  await new Promise(resolve => setTimeout(resolve, 5000));
}

Batch File Operations

Process multiple files efficiently with batch operations.

Access via: client.vectorStores.fileBatches

Create Batch

Creates a batch of file operations for a vector store.

function create(
  storeID: string,
  params: FileBatchCreateParams
): Promise<VectorStoreFileBatch>;

interface FileBatchCreateParams {
  file_ids?: Array<string>;
  files?: Array<{
    file_id: string;
    attributes?: Record<string, string | number | boolean> | null;
    chunking_strategy?: FileChunkingStrategyParam;
  }>;
  attributes?: Record<string, string | number | boolean> | null;
  chunking_strategy?: FileChunkingStrategyParam;
}

Example:

// Batch with file IDs (same settings for all)
const batch = await client.vectorStores.fileBatches.create('vs_abc123', {
  file_ids: ['file-1', 'file-2', 'file-3'],
  chunking_strategy: {
    type: 'static',
    static: {
      max_chunk_size_tokens: 800,
      chunk_overlap_tokens: 400,
    },
  },
});

// Batch with per-file configuration
const customBatch = await client.vectorStores.fileBatches.create('vs_abc123', {
  files: [
    {
      file_id: 'file-1',
      chunking_strategy: { type: 'auto' },
    },
    {
      file_id: 'file-2',
      chunking_strategy: {
        type: 'static',
        static: { max_chunk_size_tokens: 512, chunk_overlap_tokens: 200 },
      },
      attributes: { category: 'support' },
    },
  ],
});

console.log(batch.status); // 'in_progress' | 'completed' | 'failed' | 'cancelled'
console.log(batch.file_counts);
// { total: 3, completed: 0, failed: 0, in_progress: 3, cancelled: 0 }

Retrieve Batch

Gets status and details of a batch operation.

function retrieve(
  batchID: string,
  params: FileBatchRetrieveParams
): Promise<VectorStoreFileBatch>;

interface FileBatchRetrieveParams {
  vector_store_id: string;
}

Example:

const batch = await client.vectorStores.fileBatches.retrieve('batch-id-1', {
  vector_store_id: 'vs_abc123',
});

console.log(batch.file_counts);
// { total: 5, completed: 3, failed: 0, in_progress: 2, cancelled: 0 }

Cancel Batch

Stops a batch operation and cancels remaining file processing.

function cancel(
  batchID: string,
  params: FileBatchCancelParams
): Promise<VectorStoreFileBatch>;

interface FileBatchCancelParams {
  vector_store_id: string;
}

Example:

const cancelled = await client.vectorStores.fileBatches.cancel('batch-id-1', {
  vector_store_id: 'vs_abc123',
});

console.log(cancelled.status); // 'cancelled'

List Files in Batch

Retrieves a paginated list of the files processed in a batch.

function listFiles(
  batchID: string,
  params: FileBatchListFilesParams
): Promise<VectorStoreFilesPage>;

interface FileBatchListFilesParams {
  vector_store_id: string;
  after?: string;
  before?: string;
  limit?: number;
  filter?: 'in_progress' | 'completed' | 'failed' | 'cancelled';
  order?: 'asc' | 'desc';
}

Example:

// List all files in batch
const page = await client.vectorStores.fileBatches.listFiles('batch-id-1', {
  vector_store_id: 'vs_abc123',
});

// Filter by status
const failed = await client.vectorStores.fileBatches.listFiles('batch-id-1', {
  vector_store_id: 'vs_abc123',
  filter: 'failed',
});

for (const file of failed.data) {
  console.log(file.last_error?.message);
}

Batch Helper Methods

Create and Poll

Creates a batch and waits for all files to finish processing.

async function createAndPoll(
  storeID: string,
  body: FileBatchCreateParams,
  options?: RequestOptions & { pollIntervalMs?: number }
): Promise<VectorStoreFileBatch>;

Example:

// Create batch and wait for all files
const completed = await client.vectorStores.fileBatches.createAndPoll(
  'vs_abc123',
  {
    file_ids: ['file-1', 'file-2', 'file-3'],
  },
  { pollIntervalMs: 5000 }
);

console.log(completed.file_counts);
// { total: 3, completed: 3, failed: 0, in_progress: 0, cancelled: 0 }

Upload and Poll

Uploads raw files and creates a batch, waiting for processing.

async function uploadAndPoll(
  storeID: string,
  params: { files: Uploadable[]; fileIds?: string[] },
  options?: RequestOptions & {
    pollIntervalMs?: number;
    maxConcurrency?: number;
  }
): Promise<VectorStoreFileBatch>;

Example:

import { toFile } from 'openai';
import fs from 'fs';

// Upload multiple files concurrently and create batch
const batch = await client.vectorStores.fileBatches.uploadAndPoll(
  'vs_abc123',
  {
    files: [
      await toFile(fs.createReadStream('./doc1.pdf'), 'doc1.pdf'),
      await toFile(fs.createReadStream('./doc2.pdf'), 'doc2.pdf'),
      await toFile(fs.createReadStream('./doc3.txt'), 'doc3.txt'),
    ],
    fileIds: ['pre-uploaded-file-id'],
    maxConcurrency: 3, // Upload 3 files at a time
  },
  { pollIntervalMs: 5000 }
);

if (batch.file_counts.failed > 0) {
  const failed = await client.vectorStores.fileBatches.listFiles(batch.id, {
    vector_store_id: 'vs_abc123',
    filter: 'failed',
  });
}

Poll Batch Status

Manually polls a batch until processing completes.

async function poll(
  storeID: string,
  batchID: string,
  options?: RequestOptions & { pollIntervalMs?: number }
): Promise<VectorStoreFileBatch>;

Example:

const batch = await client.vectorStores.fileBatches.create('vs_abc123', {
  file_ids: ['file-1', 'file-2'],
});

// Poll for completion
const completed = await client.vectorStores.fileBatches.poll(
  'vs_abc123',
  batch.id,
  { pollIntervalMs: 3000 }
);

console.log(`Complete: ${completed.file_counts.completed}/${completed.file_counts.total}`);

Type Reference

VectorStore

interface VectorStore {
  id: string;                    // Unique identifier
  object: 'vector_store';
  created_at: number;            // Unix timestamp
  status: 'expired' | 'in_progress' | 'completed';
  usage_bytes: number;           // Total storage used
  last_active_at: number | null; // Unix timestamp or null
  name: string;                  // Display name
  metadata: Record<string, string> | null;

  file_counts: {
    total: number;       // All files
    completed: number;   // Fully processed
    failed: number;      // Processing failed
    in_progress: number; // Being processed
    cancelled: number;   // Cancelled
  };

  expires_after?: {
    anchor: 'last_active_at';
    days: number;
  };

  expires_at?: number | null;    // Unix timestamp when expired
}

VectorStoreFile

interface VectorStoreFile {
  id: string;                    // Unique identifier
  object: 'vector_store.file';
  vector_store_id: string;       // Parent store ID
  status: 'in_progress' | 'completed' | 'failed' | 'cancelled';
  created_at: number;            // Unix timestamp
  usage_bytes: number;           // Storage used

  last_error?: {
    code: 'server_error' | 'unsupported_file' | 'invalid_file';
    message: string;
  } | null;

  attributes?: Record<string, string | number | boolean> | null;
  chunking_strategy?: FileChunkingStrategy;
}

VectorStoreFileBatch

interface VectorStoreFileBatch {
  id: string;                    // Unique identifier
  object: 'vector_store.files_batch';
  vector_store_id: string;       // Parent store ID
  status: 'in_progress' | 'completed' | 'failed' | 'cancelled';
  created_at: number;            // Unix timestamp

  file_counts: {
    total: number;
    completed: number;
    failed: number;
    in_progress: number;
    cancelled: number;
  };
}

VectorStoreSearchResponse

interface VectorStoreSearchResponse {
  file_id: string;           // Source file ID
  filename: string;          // File name
  score: number;             // Similarity score

  content: Array<{
    type: 'text';
    text: string;
  }>;

  attributes?: Record<string, string | number | boolean> | null;
}

Chunking Strategies

Auto Chunking

Default strategy with smart chunk sizing:

interface AutoFileChunkingStrategyParam {
  type: 'auto';
}

Automatically uses 800 tokens per chunk with a 400-token overlap.


Static Chunking

Customize chunk size and overlap:

interface StaticFileChunkingStrategy {
  max_chunk_size_tokens: number; // 100-4096, default 800
  chunk_overlap_tokens: number;  // Default 400, must be <= half of max
}

type StaticFileChunkingStrategyParam = {
  type: 'static';
  static: StaticFileChunkingStrategy;
};

Example:

// Large chunks for long documents
const largeChunks = {
  type: 'static',
  static: {
    max_chunk_size_tokens: 2048,
    chunk_overlap_tokens: 512,
  },
};

// Small chunks for precise retrieval
const smallChunks = {
  type: 'static',
  static: {
    max_chunk_size_tokens: 256,
    chunk_overlap_tokens: 64,
  },
};

Guidelines:

  • max_chunk_size_tokens: Controls semantic unit size. Larger = more context, slower retrieval
  • chunk_overlap_tokens: Prevents losing context at chunk boundaries
  • Overlap should be 25-50% of max size
  • 800 tokens ≈ 600 words for typical English text
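
As a back-of-envelope check of these numbers: each chunk after the first advances by max_chunk_size_tokens minus chunk_overlap_tokens. A rough chunk-count estimate (our own sketch; the service's actual splitter may divide text differently):

```typescript
// Rough estimate of how many chunks a document of totalTokens will
// produce, assuming each chunk after the first advances by
// (max_chunk_size_tokens - chunk_overlap_tokens) tokens.
function estimateChunkCount(
  totalTokens: number,
  maxChunkSizeTokens: number,
  chunkOverlapTokens: number
): number {
  if (totalTokens <= maxChunkSizeTokens) return 1;
  const stride = maxChunkSizeTokens - chunkOverlapTokens;
  return 1 + Math.ceil((totalTokens - maxChunkSizeTokens) / stride);
}

// With the defaults (800/400), a 10,000-token document yields
// roughly 1 + ceil(9200 / 400) = 24 chunks.
```

Halving the overlap roughly halves the chunk count for long documents, which trades storage and search recall against cost.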

Other/Unknown Chunking Strategy

This strategy type is returned for files indexed before the chunking_strategy concept was introduced. It indicates the chunking method is unknown.

interface OtherFileChunkingStrategyObject {
  /** Always 'other' */
  type: 'other';
}

This type appears in the FileChunkingStrategy union (response type) but not in FileChunkingStrategyParam (input type). You cannot create files with type: 'other' - it only appears in responses for legacy files.

type FileChunkingStrategy = StaticFileChunkingStrategyObject | OtherFileChunkingStrategyObject;
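
Code that reads chunking_strategy off a response therefore has to handle both union members. A narrowing sketch (the local type definitions below just mirror the shapes above so the snippet is self-contained):

```typescript
// Local mirrors of the response-side union shown above.
type StaticFileChunkingStrategyObject = {
  type: 'static';
  static: { max_chunk_size_tokens: number; chunk_overlap_tokens: number };
};
type OtherFileChunkingStrategyObject = { type: 'other' };
type FileChunkingStrategy =
  | StaticFileChunkingStrategyObject
  | OtherFileChunkingStrategyObject;

// Discriminated-union narrowing on the `type` field: inside each case,
// TypeScript knows which member it is dealing with.
function describeStrategy(s: FileChunkingStrategy): string {
  switch (s.type) {
    case 'static':
      return `static: ${s.static.max_chunk_size_tokens}/${s.static.chunk_overlap_tokens}`;
    case 'other':
      return 'unknown (legacy file)';
  }
}
```

Because the switch is exhaustive over the union, adding a new strategy type later would surface as a compile-time error here.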

Complete Example: Building a Vector Store

import { OpenAI, toFile } from 'openai';
import fs from 'fs';

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

async function buildSupportDocs() {
  // 1. Create vector store
  const store = await client.vectorStores.create({
    name: 'Support Documentation',
    description: 'Company support and FAQ documentation',
    metadata: {
      department: 'support',
      version: '1.0',
    },
  });

  console.log(`Created store: ${store.id}`);

  // 2. Upload files with batch operation
  const batch = await client.vectorStores.fileBatches.uploadAndPoll(
    store.id,
    {
      files: [
        await toFile(fs.createReadStream('./docs/faq.pdf'), 'faq.pdf'),
        await toFile(fs.createReadStream('./docs/api.pdf'), 'api.pdf'),
        await toFile(fs.createReadStream('./docs/billing.txt'), 'billing.txt'),
      ],
      maxConcurrency: 3,
    },
    { pollIntervalMs: 2000 }
  );

  console.log(`Batch status: ${batch.status}`);
  console.log(`Files: ${batch.file_counts.completed}/${batch.file_counts.total} processed`);

  if (batch.file_counts.failed > 0) {
    const failed = await client.vectorStores.fileBatches.listFiles(batch.id, {
      vector_store_id: store.id,
      filter: 'failed',
    });

    for (const file of failed.data) {
      console.error(`Failed: ${file.id} - ${file.last_error?.message}`);
    }
  }

  // 3. Search the vector store
  const results = await client.vectorStores.search(store.id, {
    query: 'How do I reset my password?',
    max_num_results: 3,
  });

  console.log('Search results:');
  for (const result of results.data) {
    console.log(`- ${result.filename} (score: ${result.score})`);
    for (const chunk of result.content) {
      console.log(`  ${chunk.text.substring(0, 100)}...`);
    }
  }

  // 4. Use with assistant
  const assistant = await client.beta.assistants.create({
    name: 'Support Bot',
    model: 'gpt-4-turbo',
    tools: [{ type: 'file_search' }],
    tool_resources: {
      file_search: {
        vector_store_ids: [store.id],
      },
    },
  });

  console.log(`Created assistant: ${assistant.id}`);

  return { store, assistant, batch };
}

buildSupportDocs().catch(console.error);

Error Handling

import { APIError, NotFoundError, RateLimitError } from 'openai';

try {
  const file = await client.vectorStores.files.retrieve('vs-id', 'file-id');
} catch (error) {
  if (error instanceof NotFoundError) {
    console.error('File not found');
  } else if (error instanceof RateLimitError) {
    console.error('Rate limited - retry in a moment');
  } else if (error instanceof APIError) {
    console.error(`API Error: ${error.status} ${error.message}`);
  }
}

Best Practices

  1. Batch Operations: Use batch operations for multiple files to handle concurrency efficiently
  2. Polling: Wait for files to leave the in_progress status (poll until completed) before searching
  3. Chunking: Adjust chunk size based on your content - smaller for precise retrieval, larger for context
  4. Metadata: Use attributes to organize and filter files
  5. Error Handling: Check last_error after processing completes
  6. Expiration: Set expiration policies for temporary data stores
  7. Search Options: Use rewrite_query for better semantic search and filtering for precision
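
For the rate-limit case in particular, a generic retry-with-backoff wrapper can be layered over any call in this document. The helper below is our own sketch, not part of the SDK:

```typescript
// Retry a transient-failure-prone async call with exponential backoff.
// shouldRetry decides which errors are worth retrying (e.g. rate limits).
async function withRetry<T>(
  fn: () => Promise<T>,
  shouldRetry: (err: unknown) => boolean,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts || !shouldRetry(err)) throw err;
      // Backoff doubles per attempt: 500ms, 1000ms, 2000ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}

// Usage sketch with the SDK's RateLimitError:
// const store = await withRetry(
//   () => client.vectorStores.retrieve('vs_abc123'),
//   (err) => err instanceof RateLimitError
// );
```

Note that the client itself retries some failures automatically; a wrapper like this is only needed for coarser, application-level retry policies.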

Install with Tessl CLI

npx tessl i tessl/npm-openai
