
tessl/npm-ai-sdk--azure

Azure OpenAI provider for the AI SDK - provides language model support for Azure OpenAI API integration within the AI SDK ecosystem

Workspace: tessl
Visibility: Public
Describes: npm package pkg:npm/@ai-sdk/azure@1.3.x

Files: docs/ (configuration.md, embeddings.md, image-generation.md, index.md, language-models.md, transcription.md) and tile.json

To install, run

npx @tessl/cli install tessl/npm-ai-sdk--azure@1.3.0


AI SDK Azure Provider

The Azure OpenAI provider for the AI SDK enables integration with Azure's OpenAI services, providing a unified TypeScript interface for language models, embeddings, image generation, and audio transcription. It abstracts Azure OpenAI API interactions while maintaining compatibility with the broader AI SDK ecosystem.

Package Information

  • Package Name: @ai-sdk/azure
  • Package Type: npm
  • Language: TypeScript
  • Installation: npm install @ai-sdk/azure

Core Imports

import { azure, createAzure } from "@ai-sdk/azure";
import type { AzureOpenAIProvider, AzureOpenAIProviderSettings } from "@ai-sdk/azure";

For CommonJS:

const { azure, createAzure } = require("@ai-sdk/azure");

Basic Usage

import { azure, createAzure } from "@ai-sdk/azure";
import { generateText } from "ai";

// Using the default provider instance
const { text } = await generateText({
  model: azure("gpt-4o"), // your deployment name
  prompt: "Write a vegetarian lasagna recipe for 4 people.",
});

// Creating a custom provider with specific configuration
const customAzure = createAzure({
  resourceName: "my-azure-resource",
  apiKey: "your-api-key",
  apiVersion: "2025-03-01-preview",
});

const model = customAzure("my-deployment");

Architecture

The Azure provider is built around several key components:

  • Provider Factory: createAzure() function creates customizable provider instances with Azure-specific configuration
  • Default Instance: azure provides a pre-configured provider using environment variables for quick setup
  • Model Creation: The provider instance offers methods for creating different types of AI models (chat, completion, embedding, image, transcription); see the sketch after this list
  • Azure Integration: Built-in support for Azure OpenAI resource names, custom base URLs, and API versioning
  • Type Safety: Full TypeScript integration with comprehensive type definitions for all models and settings
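
A minimal sketch of how these pieces fit together. It assumes the AZURE_RESOURCE_NAME and AZURE_API_KEY environment variables are set for the default instance, and the deployment names ("text-embedding-3-small", "whisper-1") are placeholders:

import { azure, createAzure } from "@ai-sdk/azure";

// Default instance: configuration is read from environment variables.
const chatModel = azure("gpt-4o");

// Custom instance: explicit Azure-specific configuration.
const provider = createAzure({
  resourceName: process.env.AZURE_RESOURCE_NAME,
  apiKey: process.env.AZURE_API_KEY,
});

// A single provider instance can create every supported model type.
const embeddingModel = provider.textEmbeddingModel("text-embedding-3-small");
const transcriptionModel = provider.transcription("whisper-1");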

Capabilities

Language Models

Core language model functionality for text generation using Azure OpenAI's chat and completion models. Supports both modern chat-based and legacy completion-based text generation.

function azure(deploymentId: string, settings?: OpenAIChatSettings): LanguageModelV1;

interface AzureOpenAIProvider {
  (deploymentId: string, settings?: OpenAIChatSettings): LanguageModelV1;
  languageModel(deploymentId: string, settings?: OpenAIChatSettings): LanguageModelV1;
  chat(deploymentId: string, settings?: OpenAIChatSettings): LanguageModelV1;
  completion(deploymentId: string, settings?: OpenAICompletionSettings): LanguageModelV1;
  responses(deploymentId: string): LanguageModelV1;
}
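
For example, the chat method can be combined with streamText from the ai package. This is a sketch; the deployment name "gpt-4o-mini" is an assumption:

import { azure } from "@ai-sdk/azure";
import { streamText } from "ai";

const result = streamText({
  model: azure.chat("gpt-4o-mini"), // Azure deployment name
  prompt: "Explain the difference between chat and completion models in one paragraph.",
});

// Stream the generated text to stdout as it arrives.
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}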

See docs/language-models.md for the full language model documentation.

Text Embeddings

Text embedding functionality for semantic search and similarity comparisons using Azure OpenAI embedding models.

// Current method
textEmbeddingModel(deploymentId: string, settings?: OpenAIEmbeddingSettings): EmbeddingModelV1<string>;

// Deprecated methods (use textEmbeddingModel instead)
embedding(deploymentId: string, settings?: OpenAIEmbeddingSettings): EmbeddingModelV1<string>;
textEmbedding(deploymentId: string, settings?: OpenAIEmbeddingSettings): EmbeddingModelV1<string>;
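
A short sketch using the embed helper from the ai package; the embedding deployment name "text-embedding-3-small" is an assumption:

import { azure } from "@ai-sdk/azure";
import { embed } from "ai";

const { embedding } = await embed({
  model: azure.textEmbeddingModel("text-embedding-3-small"), // Azure deployment name
  value: "sunny day at the beach",
});

console.log(embedding.length); // number of dimensions in the embedding vector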

See docs/embeddings.md for the full embeddings documentation.

Image Generation

Image generation functionality using Azure OpenAI's DALL-E models for creating images from text descriptions.

// Current method
imageModel(deploymentId: string, settings?: OpenAIImageSettings): ImageModelV1;

// Deprecated method (use imageModel instead)
image(deploymentId: string, settings?: OpenAIImageSettings): ImageModelV1;
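
A hedged example using the experimental image API in the ai package; the DALL-E deployment name "dall-e-3" is an assumption:

import { azure } from "@ai-sdk/azure";
import { experimental_generateImage as generateImage } from "ai";

const { image } = await generateImage({
  model: azure.imageModel("dall-e-3"), // Azure deployment name
  prompt: "A watercolor painting of a lighthouse at dusk",
  size: "1024x1024",
});

// image.base64 contains the generated image data
console.log(image.base64.slice(0, 32));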

See docs/image-generation.md for the full image generation documentation.

Audio Transcription

Audio transcription functionality for converting speech to text using Azure OpenAI's transcription models.

transcription(deploymentId: string): TranscriptionModelV1;
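
A sketch using the experimental transcription API available in newer releases of the ai package; the Whisper deployment name "whisper-1" and the audio file are assumptions:

import { azure } from "@ai-sdk/azure";
import { experimental_transcribe as transcribe } from "ai";
import { readFile } from "node:fs/promises";

const { text } = await transcribe({
  model: azure.transcription("whisper-1"), // Azure deployment name
  audio: await readFile("meeting.mp3"),    // audio data as a byte buffer
});

console.log(text);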

See docs/transcription.md for the full transcription documentation.

Provider Configuration

Configuration options for creating and customizing Azure OpenAI provider instances.

function createAzure(options?: AzureOpenAIProviderSettings): AzureOpenAIProvider;

interface AzureOpenAIProviderSettings {
  resourceName?: string;
  baseURL?: string;
  apiKey?: string;
  headers?: Record<string, string>;
  fetch?: FetchFunction;
  apiVersion?: string;
}
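
For example, a provider can be configured through a custom base URL instead of a resource name, which is useful behind a proxy or gateway. This is a sketch; the URL, headers, and deployment name are placeholders:

import { createAzure } from "@ai-sdk/azure";

const proxiedAzure = createAzure({
  baseURL: "https://my-gateway.example.com/openai", // placeholder proxy URL, used instead of resourceName
  apiKey: process.env.AZURE_API_KEY,
  apiVersion: "2025-03-01-preview",
  headers: { "x-request-source": "docs-example" }, // extra headers sent with every request
});

const model = proxiedAzure("my-deployment");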

See docs/configuration.md for the full configuration documentation.

Types

interface AzureOpenAIProvider extends ProviderV1 {
  (deploymentId: string, settings?: OpenAIChatSettings): LanguageModelV1;
  languageModel(deploymentId: string, settings?: OpenAIChatSettings): LanguageModelV1;
  chat(deploymentId: string, settings?: OpenAIChatSettings): LanguageModelV1;
  completion(deploymentId: string, settings?: OpenAICompletionSettings): LanguageModelV1;
  responses(deploymentId: string): LanguageModelV1;
  embedding(deploymentId: string, settings?: OpenAIEmbeddingSettings): EmbeddingModelV1<string>;
  textEmbedding(deploymentId: string, settings?: OpenAIEmbeddingSettings): EmbeddingModelV1<string>;
  textEmbeddingModel(deploymentId: string, settings?: OpenAIEmbeddingSettings): EmbeddingModelV1<string>;
  image(deploymentId: string, settings?: OpenAIImageSettings): ImageModelV1;
  imageModel(deploymentId: string, settings?: OpenAIImageSettings): ImageModelV1;
  transcription(deploymentId: string): TranscriptionModelV1;
}

interface AzureOpenAIProviderSettings {
  /** Name of the Azure OpenAI resource. Either this or `baseURL` can be used. */
  resourceName?: string;
  /** Use a different URL prefix for API calls, e.g. to use proxy servers. Either this or `resourceName` can be used. */
  baseURL?: string;
  /** API key for authenticating requests. */
  apiKey?: string;
  /** Custom headers to include in the requests. */
  headers?: Record<string, string>;
  /** Custom fetch implementation. You can use it as a middleware to intercept requests. */
  fetch?: FetchFunction;
  /** Custom API version to use. Defaults to `2025-03-01-preview`. */
  apiVersion?: string;
}

interface OpenAIChatSettings {
  /** Modify the likelihood of specified tokens appearing in the completion. */
  logitBias?: Record<number, number>;
  /** Return the log probabilities of the tokens. Setting to true returns the log probabilities of the generated tokens; setting to a number returns the log probabilities of the top n tokens. */
  logprobs?: boolean | number;
  /** Whether to enable parallel function calling during tool use. Defaults to true. */
  parallelToolCalls?: boolean;
  /** Whether to use structured outputs. Defaults to false. */
  structuredOutputs?: boolean;
  /** Whether to use legacy function calling. Defaults to false. */
  useLegacyFunctionCalling?: boolean;
  /** A unique identifier representing your end-user. */
  user?: string;
  /** Automatically download images and pass the image as data to the model. Defaults to `false`. */
  downloadImages?: boolean;
  /** Simulates streaming by using a normal generate call and returning it as a stream. Defaults to `false`. */
  simulateStreaming?: boolean;
  /** Reasoning effort for reasoning models. Defaults to `medium`. */
  reasoningEffort?: 'low' | 'medium' | 'high';
}
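
As an illustration, chat settings are passed as the optional second argument when creating a language model. The deployment names here are assumptions:

import { azure } from "@ai-sdk/azure";

// Request schema-conforming outputs and tag requests with an end-user identifier.
const strictModel = azure("gpt-4o", {
  structuredOutputs: true,
  user: "user-123",
});

// Reasoning-capable deployments accept a reasoning effort hint.
const reasoningModel = azure.chat("o3-mini", { reasoningEffort: "high" });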

interface OpenAICompletionSettings {
  /** Echo back the prompt in addition to the completion. */
  echo?: boolean;
  /** Modify the likelihood of specified tokens appearing in the completion. */
  logitBias?: Record<number, number>;
  /** Return the log probabilities of the tokens. */
  logprobs?: boolean | number;
  /** The suffix that comes after a completion of inserted text. */
  suffix?: string;
  /** A unique identifier representing your end-user. */
  user?: string;
}
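
These settings apply to the legacy completion API. A brief sketch, assuming an instruct-style deployment named "gpt-35-turbo-instruct":

import { azure } from "@ai-sdk/azure";
import { generateText } from "ai";

const { text } = await generateText({
  model: azure.completion("gpt-35-turbo-instruct", {
    echo: false,     // do not echo the prompt back in the completion
    user: "user-123", // end-user identifier forwarded to Azure
  }),
  prompt: "Write a haiku about autumn.",
});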

interface OpenAIEmbeddingSettings {
  /** Override the maximum number of embeddings per call. */
  maxEmbeddingsPerCall?: number;
  /** Override the parallelism of embedding calls. */
  supportsParallelCalls?: boolean;
  /** The number of dimensions the resulting output embeddings should have. Only supported in text-embedding-3 and later models. */
  dimensions?: number;
  /** A unique identifier representing your end-user. */
  user?: string;
}
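
For instance, the dimensions setting can shrink the returned vectors, and embedMany from the ai package batches multiple values. This is a sketch; the deployment name "text-embedding-3-large" is an assumption:

import { azure } from "@ai-sdk/azure";
import { embedMany } from "ai";

const { embeddings } = await embedMany({
  model: azure.textEmbeddingModel("text-embedding-3-large", {
    dimensions: 256, // request shorter vectors (text-embedding-3 and later models only)
  }),
  values: ["red shirt", "blue jeans", "green hat"],
});

console.log(embeddings.length); // 3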

interface OpenAIImageSettings {
  /** Override the maximum number of images per call (default is dependent on the model). */
  maxImagesPerCall?: number;
}

// AI SDK Base Types
interface LanguageModelV1 {
  /** The language model provider. */
  readonly provider: string;
  /** The model identifier of the language model. */
  readonly modelId: string;
  /** Generate text and call tools for a prompt. */
  doGenerate(options: any): Promise<any>;
  /** Generate a stream of text and tool calls for a prompt. */
  doStream(options: any): Promise<any>;
}

interface EmbeddingModelV1<VALUE> {
  /** The embedding model provider. */
  readonly provider: string;
  /** The embedding model identifier. */
  readonly modelId: string;
  /** The maximum number of values that can be embedded in a single call. */
  readonly maxEmbeddingsPerCall?: number;
  /** Whether the model supports parallel embedding calls. */
  readonly supportsParallelCalls?: boolean;
  /** Generate embeddings for the given values. */
  doEmbed(options: any): Promise<any>;
}

interface ImageModelV1 {
  /** The image model provider. */
  readonly provider: string;
  /** The image model identifier. */
  readonly modelId: string;
  /** Generate images for the given prompt. */
  doGenerate(options: any): Promise<any>;
}

interface TranscriptionModelV1 {
  /** The transcription model provider. */
  readonly provider: string;
  /** The transcription model identifier. */
  readonly modelId: string;
  /** Transcribe audio for the given audio data. */
  doTranscribe(options: any): Promise<any>;
}

interface ProviderV1 {
  /** The provider identifier. */
  readonly providerId: string;
}

type FetchFunction = typeof globalThis.fetch;
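
The fetch option can act as lightweight middleware. A hedged sketch that logs each outgoing request before delegating to the global fetch; the resource name is a placeholder:

import { createAzure } from "@ai-sdk/azure";

const loggingAzure = createAzure({
  resourceName: "my-azure-resource", // placeholder resource name
  apiKey: process.env.AZURE_API_KEY,
  fetch: async (input, init) => {
    console.log("Azure OpenAI request:", input.toString()); // inspect the outgoing URL
    return fetch(input, init);
  },
});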