Chat Models

Text generation and conversation capabilities using Mistral's chat models. Supports streaming, structured outputs, tool calling, and various model configurations.

Capabilities

Chat Model IDs

All supported Mistral chat model identifiers organized by category.

type MistralChatModelId = 
  // Premier models
  | 'ministral-3b-latest'
  | 'ministral-8b-latest'
  | 'mistral-large-latest'
  | 'mistral-medium-latest'
  | 'mistral-medium-2508'
  | 'mistral-medium-2505'
  | 'mistral-small-latest'
  | 'pixtral-large-latest'
  // Reasoning models
  | 'magistral-small-2507'
  | 'magistral-medium-2507'
  | 'magistral-small-2506'
  | 'magistral-medium-2506'
  // Free models
  | 'pixtral-12b-2409'
  // Legacy models
  | 'open-mistral-7b'
  | 'open-mixtral-8x7b'
  | 'open-mixtral-8x22b'
  // Custom models
  | (string & {});
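The trailing `(string & {})` branch keeps editor autocomplete for the known IDs while still accepting arbitrary strings (for example, fine-tuned model names). A minimal sketch of the pattern, using an abbreviated local union rather than the real exported type:

```typescript
// Abbreviated re-declaration for illustration only; in practice the full
// MistralChatModelId type comes from '@ai-sdk/mistral'.
type ChatModelId =
  | 'mistral-large-latest'
  | 'mistral-small-latest'
  | (string & {}); // accepts any string without collapsing the literal union

const known: ChatModelId = 'mistral-large-latest'; // autocompletes
const custom: ChatModelId = 'my-fine-tuned-model'; // hypothetical custom ID, still type-checks

console.log(known, custom);
```

Without the `& {}` intersection, a plain `string` member would swallow the literal members and autocomplete would be lost.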

Language Model Options

Configuration options for Mistral chat models.

interface MistralLanguageModelOptions {
  /**
   * Whether to inject a safety prompt before all conversations.
   * @default false
   */
  safePrompt?: boolean;
  
  /** Maximum number of images in document processing */
  documentImageLimit?: number;
  
  /** Maximum number of pages in document processing */
  documentPageLimit?: number;
  
  /** 
   * Whether to use structured outputs.
   * @default true
   */
  structuredOutputs?: boolean;
  
  /** 
   * Whether to use strict JSON schema validation.
   * @default false
   */
  strictJsonSchema?: boolean;
}

Create Chat Model

Create a chat language model instance for text generation.

// Via provider instance (multiple equivalent methods)
provider(modelId: MistralChatModelId): LanguageModelV2;
provider.languageModel(modelId: MistralChatModelId): LanguageModelV2;
provider.chat(modelId: MistralChatModelId): LanguageModelV2;
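The three entry points are interchangeable. A small stub (not the real SDK) sketches how a callable provider with method aliases behaves:

```typescript
// Stub of the callable-provider pattern; the real provider comes from
// '@ai-sdk/mistral' and returns full LanguageModelV2 instances.
interface ModelStub {
  modelId: string;
  provider: string;
}

const createModel = (modelId: string): ModelStub => ({
  modelId,
  provider: 'mistral.chat',
});

// A callable function with method aliases, mirroring the documented API
const mistralStub = Object.assign(
  (modelId: string) => createModel(modelId),
  { languageModel: createModel, chat: createModel },
);

const a = mistralStub('mistral-small-latest');
const b = mistralStub.languageModel('mistral-small-latest');
const c = mistralStub.chat('mistral-small-latest');
console.log(a.modelId, b.modelId, c.modelId); // all 'mistral-small-latest'
```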

Chat Language Model Implementation

Properties and interface of the MistralChatLanguageModel class.

class MistralChatLanguageModel implements LanguageModelV2 {
  readonly specificationVersion: 'v2';
  readonly modelId: MistralChatModelId;
  readonly provider: string;
  
  // Full LanguageModelV2 interface implementation
  doGenerate(options: LanguageModelV2CallOptions): Promise<LanguageModelV2Result>;
  doStream(options: LanguageModelV2CallOptions): Promise<LanguageModelV2StreamResult>;
}

Usage Examples:

import { mistral } from '@ai-sdk/mistral';
import { generateText, streamText } from 'ai';

// Basic text generation
const { text } = await generateText({
  model: mistral('mistral-large-latest'),
  prompt: 'Explain quantum computing in simple terms.',
});

// Streaming text generation
const { textStream } = await streamText({
  model: mistral('mistral-medium-latest'),
  prompt: 'Write a story about a time-traveling scientist.',
});

for await (const delta of textStream) {
  process.stdout.write(delta);
}

// With provider options (passed per call via providerOptions.mistral)
const result = await generateText({
  model: mistral('mistral-large-latest'),
  providerOptions: {
    mistral: {
      safePrompt: true,
      structuredOutputs: true,
    },
  },
  prompt: 'Generate a JSON object with user information',
});

Model Recommendations

Premier Models:

  • mistral-large-latest: Best performance, most capable for complex tasks
  • mistral-medium-latest: Balanced performance and cost
  • mistral-small-latest: Fast and efficient for simple tasks
  • pixtral-large-latest: Multimodal model supporting images

Reasoning Models:

  • magistral-small-2507 / magistral-medium-2507: Enhanced reasoning capabilities
  • magistral-small-2506 / magistral-medium-2506: Earlier reasoning models

Legacy Models:

  • open-mistral-7b: Open source, good for experimentation
  • open-mixtral-8x7b: Mixture of experts, efficient scaling
  • open-mixtral-8x22b: Larger mixture of experts model
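The premier tiers above can be captured in a small lookup table, e.g. for routing requests by task complexity. The helper and tier names here are illustrative, not part of the SDK:

```typescript
// Hypothetical routing table based on the recommendations above.
type Tier = 'complex' | 'balanced' | 'fast' | 'vision';

const recommendedModel: Record<Tier, string> = {
  complex: 'mistral-large-latest',   // most capable
  balanced: 'mistral-medium-latest', // performance vs. cost
  fast: 'mistral-small-latest',      // simple tasks
  vision: 'pixtral-large-latest',    // image inputs
};

console.log(recommendedModel.fast); // 'mistral-small-latest'
```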

Structured Outputs

Use structured outputs for JSON generation and schema validation.

Usage Examples:

import { mistral } from '@ai-sdk/mistral';
import { generateObject } from 'ai';
import { z } from 'zod';

// Generate structured JSON
const { object } = await generateObject({
  model: mistral('mistral-large-latest'),
  schema: z.object({
    name: z.string(),
    age: z.number(),
    email: z.string().email(),
  }),
  prompt: 'Generate a user profile for John Doe, age 30',
});

console.log(object); // e.g. { name: "John Doe", age: 30, email: "john@example.com" }

Tool Calling

Mistral models support tool calling for function execution.

Usage Examples:

import { mistral } from '@ai-sdk/mistral';
import { generateText, tool } from 'ai';
import { z } from 'zod';

const result = await generateText({
  model: mistral('mistral-large-latest'),
  tools: {
    weather: tool({
      description: 'Get the current weather in a city',
      inputSchema: z.object({
        city: z.string().describe('The city to get weather for'),
      }),
      execute: async ({ city }) => {
        // Your weather API call here
        return `The weather in ${city} is sunny, 22°C`;
      },
    }),
  },
  prompt: 'What is the weather like in Paris?',
});

Safety and Content Filtering

Use the safePrompt option to enable Mistral's built-in safety filtering.

const result = await generateText({
  model: mistral('mistral-large-latest'),
  providerOptions: {
    mistral: {
      safePrompt: true, // enables safety prompt injection
    },
  },
  prompt: 'Your potentially sensitive prompt here',
});

Multimodal Support

Multimodal models such as pixtral-large-latest and pixtral-12b-2409 accept both text and image inputs.

import { mistral } from '@ai-sdk/mistral';
import { generateText } from 'ai';

const result = await generateText({
  model: mistral('pixtral-large-latest'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What do you see in this image?' },
        { type: 'image', image: 'data:image/jpeg;base64,...' },
      ],
    },
  ],
});
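The content array above mixes typed parts. As plain data the shape looks like this (the base64 payload is truncated in the example above and stays truncated here; image parts can also be URLs or binary data):

```typescript
// Message shape only; no SDK call. The `as const` assertions pin the
// discriminant `type` fields to their literal values.
const messages = [
  {
    role: 'user' as const,
    content: [
      { type: 'text' as const, text: 'What do you see in this image?' },
      { type: 'image' as const, image: 'data:image/jpeg;base64,...' },
    ],
  },
];

console.log(messages[0].content.length); // 2
```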