Text generation and conversation capabilities using Mistral's chat models. Supports streaming, structured outputs, tool calling, and various model configurations.
All supported Mistral chat model identifiers organized by category.
```ts
type MistralChatModelId =
  // Premier models
  | 'ministral-3b-latest'
  | 'ministral-8b-latest'
  | 'mistral-large-latest'
  | 'mistral-medium-latest'
  | 'mistral-medium-2508'
  | 'mistral-medium-2505'
  | 'mistral-small-latest'
  | 'pixtral-large-latest'
  // Reasoning models
  | 'magistral-small-2507'
  | 'magistral-medium-2507'
  | 'magistral-small-2506'
  | 'magistral-medium-2506'
  // Free models
  | 'pixtral-12b-2409'
  // Legacy models
  | 'open-mistral-7b'
  | 'open-mixtral-8x7b'
  | 'open-mixtral-8x22b'
  // Custom models
  | (string & {});
```

Configuration options for Mistral chat models.
```ts
interface MistralLanguageModelOptions {
  /**
   * Whether to inject a safety prompt before all conversations.
   * @default false
   */
  safePrompt?: boolean;

  /** Maximum number of images in document processing */
  documentImageLimit?: number;

  /** Maximum number of pages in document processing */
  documentPageLimit?: number;

  /**
   * Whether to use structured outputs.
   * @default true
   */
  structuredOutputs?: boolean;

  /**
   * Whether to use strict JSON schema validation.
   * @default false
   */
  strictJsonSchema?: boolean;
}
```

Create a chat language model instance for text generation.
```ts
// Via provider instance (multiple equivalent methods)
provider(modelId: MistralChatModelId): LanguageModelV2;
provider.languageModel(modelId: MistralChatModelId): LanguageModelV2;
provider.chat(modelId: MistralChatModelId): LanguageModelV2;
```

Properties and interface of the MistralChatLanguageModel class.
```ts
class MistralChatLanguageModel implements LanguageModelV2 {
  readonly specificationVersion: 'v2';
  readonly modelId: MistralChatModelId;
  readonly provider: string;

  // Full LanguageModelV2 interface implementation
  doGenerate(options: LanguageModelV2CallOptions): Promise<LanguageModelV2Result>;
  doStream(options: LanguageModelV2CallOptions): Promise<LanguageModelV2StreamResult>;
}
```

Usage Examples:
```ts
import { mistral } from '@ai-sdk/mistral';
import { generateText, streamText } from 'ai';

// Basic text generation
const { text } = await generateText({
  model: mistral('mistral-large-latest'),
  prompt: 'Explain quantum computing in simple terms.',
});

// Streaming text generation (streamText returns immediately; no await needed)
const { textStream } = streamText({
  model: mistral('mistral-medium-latest'),
  prompt: 'Write a story about a time-traveling scientist.',
});

for await (const delta of textStream) {
  process.stdout.write(delta);
}

// With model options
const result = await generateText({
  model: mistral('mistral-large-latest', {
    safePrompt: true,
    structuredOutputs: true,
  }),
  prompt: 'Generate a JSON object with user information',
});
```

Premier Models:
- mistral-large-latest: Best performance, most capable for complex tasks
- mistral-medium-latest: Balanced performance and cost
- mistral-small-latest: Fast and efficient for simple tasks
- pixtral-large-latest: Multimodal model supporting images

Reasoning Models:
- magistral-small-2507 / magistral-medium-2507: Enhanced reasoning capabilities
- magistral-small-2506 / magistral-medium-2506: Earlier reasoning models

Legacy Models:
- open-mistral-7b: Open source, good for experimentation
- open-mixtral-8x7b: Mixture of experts, efficient scaling
- open-mixtral-8x22b: Larger mixture-of-experts model

Use structured outputs for JSON generation and schema validation.
Usage Examples:
```ts
import { mistral } from '@ai-sdk/mistral';
import { generateObject } from 'ai';
import { z } from 'zod';

// Generate structured JSON
const { object } = await generateObject({
  model: mistral('mistral-large-latest'),
  schema: z.object({
    name: z.string(),
    age: z.number(),
    email: z.string().email(),
  }),
  prompt: 'Generate a user profile for John Doe, age 30',
});

console.log(object); // e.g. { name: "John Doe", age: 30, email: "john@example.com" }
```

Mistral models support tool calling for function execution.
Usage Examples:
```ts
import { mistral } from '@ai-sdk/mistral';
import { generateText } from 'ai';
import { z } from 'zod';

const result = await generateText({
  model: mistral('mistral-large-latest'),
  tools: {
    weather: {
      description: 'Get the current weather in a city',
      parameters: z.object({
        city: z.string().describe('The city to get weather for'),
      }),
      execute: async ({ city }) => {
        // Your weather API call here
        return `The weather in ${city} is sunny, 22°C`;
      },
    },
  },
  prompt: 'What is the weather like in Paris?',
});
```

Use the safePrompt option to enable Mistral's built-in safety filtering.
```ts
import { mistral } from '@ai-sdk/mistral';
import { generateText } from 'ai';

const result = await generateText({
  model: mistral('mistral-large-latest', {
    safePrompt: true, // Enables safety prompt injection
  }),
  prompt: 'Your potentially sensitive prompt here',
});
```

Certain models, such as pixtral-large-latest, support both text and image inputs.
```ts
import { mistral } from '@ai-sdk/mistral';
import { generateText } from 'ai';

const result = await generateText({
  model: mistral('pixtral-large-latest'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What do you see in this image?' },
        { type: 'image', image: 'data:image/jpeg;base64,...' },
      ],
    },
  ],
});
```
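Conceptually, the provider translates the camelCase per-model options above into the snake_case fields of Mistral's chat request body. `safe_prompt` is Mistral's documented API flag; the `document_image_limit` and `document_page_limit` field names here are inferred from the option names and, like the `toRequestFields` helper itself, are illustrative assumptions rather than the provider's actual serialization code. A minimal sketch:

```ts
// Illustrative sketch only: map provider options onto snake_case
// request-body fields. Field names other than `safe_prompt` are
// assumptions for illustration, not the provider's real internals.
interface MistralOptions {
  safePrompt?: boolean;
  documentImageLimit?: number;
  documentPageLimit?: number;
}

function toRequestFields(options: MistralOptions): Record<string, unknown> {
  const fields: Record<string, unknown> = {};
  // Only forward options the caller actually set.
  if (options.safePrompt !== undefined) {
    fields.safe_prompt = options.safePrompt;
  }
  if (options.documentImageLimit !== undefined) {
    fields.document_image_limit = options.documentImageLimit;
  }
  if (options.documentPageLimit !== undefined) {
    fields.document_page_limit = options.documentPageLimit;
  }
  return fields;
}

// Example: unset options are omitted from the request body.
const body = toRequestFields({ safePrompt: true, documentPageLimit: 8 });
// body => { safe_prompt: true, document_page_limit: 8 }
```

The point of the sketch is the contract, not the implementation: options you leave unset never appear in the outgoing request, so Mistral's server-side defaults apply.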