
# Prompt Management

Comprehensive prompt management system with version control, tagging, and multi-provider support. It enables creation, retrieval, and management of prompt templates, with format conversion for popular LLM SDKs.

## Capabilities

### Prompt Retrieval

Retrieve prompts using several selection methods: prompt ID, version ID, name, or name plus tag.

```typescript { .api }
/**
 * Get a prompt from the Phoenix API using various selection methods
 * @param params - Parameters including client and prompt selector
 * @returns Promise resolving to prompt version or null if not found
 */
function getPrompt(params: {
  client?: PhoenixClient;
  prompt: PromptSelector;
}): Promise<PromptVersion | null>;

type PromptSelector =
  | { promptId: string }
  | { versionId: string }
  | { name: string }
  | { name: string; tag: string };

interface PromptVersion {
  id: string;
  prompt_id: string;
  version: number;
  description: string;
  model_name: string;
  model_provider: PromptModelProvider;
  template_type: "CHAT";
  template_format: "MUSTACHE" | "F_STRING" | "JINJA2";
  template: PromptTemplate;
  invocation_parameters: InvocationParameters;
  created_at: string;
}

interface PromptTemplate {
  type: "chat";
  messages: PromptChatMessage[];
}

interface PromptChatMessage {
  role: PromptChatMessageRole;
  content: string;
}

type PromptChatMessageRole = "user" | "system" | "ai" | "tool";

type PromptModelProvider =
  | "OPENAI"
  | "AZURE_OPENAI"
  | "ANTHROPIC"
  | "GOOGLE"
  | "DEEPSEEK"
  | "XAI"
  | "OLLAMA"
  | "AWS";
```

**Usage Examples:**

```typescript
import { getPrompt } from "@arizeai/phoenix-client/prompts";

// Get by prompt ID
const byId = await getPrompt({
  prompt: { promptId: "prompt_123" }
});

// Get by name
const byName = await getPrompt({
  prompt: { name: "customer-support-chat" }
});

// Get by name and tag
const byTag = await getPrompt({
  prompt: { name: "customer-support-chat", tag: "production" }
});

// Get by version ID
const byVersion = await getPrompt({
  prompt: { versionId: "version_456" }
});
```

### Prompt Creation

Create new prompts, or add versions to existing prompts, with support for multiple model providers.

```typescript { .api }
/**
 * Create a prompt and store it in Phoenix
 * Creates a new prompt, or adds a version to an existing prompt with the same name
 * @param params - Prompt creation parameters
 * @returns Promise resolving to created prompt version
 */
function createPrompt(params: {
  client?: PhoenixClient;
  name: string;
  description?: string;
  version: PromptVersionData;
}): Promise<PromptVersion>;

interface PromptVersionData {
  description: string;
  model_provider: PromptModelProvider;
  model_name: string;
  template_type: "CHAT";
  template_format: "MUSTACHE" | "F_STRING" | "JINJA2";
  template: PromptTemplate;
  invocation_parameters: InvocationParameters;
}

type InvocationParameters =
  | { type: "openai"; openai: OpenAIInvocationParameters }
  | { type: "azure_openai"; azure_openai: AzureOpenAIInvocationParameters }
  | { type: "anthropic"; anthropic: AnthropicInvocationParameters }
  | { type: "google"; google: GoogleInvocationParameters }
  | { type: "deepseek"; deepseek: DeepSeekInvocationParameters }
  | { type: "xai"; xai: XAIInvocationParameters }
  | { type: "ollama"; ollama: OllamaInvocationParameters }
  | { type: "aws"; aws: AwsInvocationParameters };

interface OpenAIInvocationParameters {
  temperature?: number;
  max_tokens?: number;
  top_p?: number;
  frequency_penalty?: number;
  presence_penalty?: number;
  response_format?: { type: "text" | "json_object" };
  seed?: number;
  stop?: string | string[];
  tools?: any[];
  tool_choice?: string | object;
}

interface AnthropicInvocationParameters {
  max_tokens: number; // Required for Anthropic
  temperature?: number;
  top_p?: number;
  top_k?: number;
  stop_sequences?: string[];
  system?: string;
  tools?: any[];
  tool_choice?: object;
}
```

**Usage Example:**

```typescript
import { createPrompt, promptVersion } from "@arizeai/phoenix-client/prompts";

const prompt = await createPrompt({
  name: "customer-support-chat",
  description: "Customer support chatbot prompt",
  version: promptVersion({
    modelProvider: "OPENAI",
    modelName: "gpt-4o",
    template: [
      {
        role: "system",
        content: "You are a helpful customer support agent. Be friendly and professional."
      },
      {
        role: "user",
        content: "{{user_message}}"
      }
    ],
    invocationParameters: {
      temperature: 0.7,
      max_tokens: 500
    }
  })
});
```

### Prompt Version Helper

Utility function to construct prompt version data declaratively for different model providers.

```typescript { .api }
/**
 * Helper function to construct prompt version data declaratively
 * @param params - Model provider-specific input parameters
 * @returns Structured prompt version data ready for creation
 */
function promptVersion(params: PromptVersionInput): PromptVersionData;

type PromptVersionInput =
  | OpenAIPromptVersionInput
  | AzureOpenAIPromptVersionInput
  | AnthropicPromptVersionInput
  | GooglePromptVersionInput
  | DeepSeekPromptVersionInput
  | XAIPromptVersionInput
  | OllamaPromptVersionInput
  | AwsPromptVersionInput;

interface OpenAIPromptVersionInput {
  modelProvider: "OPENAI";
  modelName: string;
  template: PromptChatMessage[];
  description?: string;
  templateFormat?: "MUSTACHE" | "F_STRING" | "JINJA2";
  invocationParameters?: OpenAIInvocationParameters;
}

interface AnthropicPromptVersionInput {
  modelProvider: "ANTHROPIC";
  modelName: string;
  template: PromptChatMessage[];
  description?: string;
  templateFormat?: "MUSTACHE" | "F_STRING" | "JINJA2";
  invocationParameters: AnthropicInvocationParameters; // Required
}
```

**Usage Examples:**

```typescript
import { promptVersion } from "@arizeai/phoenix-client/prompts";

// OpenAI prompt version
const openaiVersion = promptVersion({
  modelProvider: "OPENAI",
  modelName: "gpt-4o",
  description: "Production version",
  template: [
    { role: "system", content: "You are an AI assistant." },
    { role: "user", content: "{{question}}" }
  ],
  invocationParameters: {
    temperature: 0.3,
    max_tokens: 1000
  }
});

// Anthropic prompt version
const anthropicVersion = promptVersion({
  modelProvider: "ANTHROPIC",
  modelName: "claude-3-5-sonnet-20241022",
  template: [
    { role: "system", content: "You are Claude, an AI assistant." },
    { role: "user", content: "{{user_input}}" }
  ],
  invocationParameters: {
    max_tokens: 1000, // Required for Anthropic
    temperature: 0.5
  }
});
```

### Prompt Utility Functions

Internal utility functions for prompt processing and retrieval.

```typescript { .api }
/**
 * Internal utility to get a prompt by various selector types
 * @param params - Client and prompt selector
 * @returns Promise resolving to prompt version or null
 */
function getPromptBySelector(params: {
  client: PhoenixClient;
  prompt: PromptSelector;
}): Promise<PromptVersion | null>;

/**
 * Format prompt messages with variable substitution
 * @param params - Template messages and variable values
 * @returns Formatted messages with variables substituted
 */
function formatPromptMessages(params: {
  messages: PromptChatMessage[];
  variables?: Record<string, string | number | boolean>;
  templateFormat?: "MUSTACHE" | "F_STRING" | "JINJA2";
}): PromptChatMessage[];
```
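The MUSTACHE substitution that `formatPromptMessages` performs can be sketched as a standalone function. This is a minimal reimplementation for illustration only, not the library code: `formatMustacheMessages` and its placeholder regex are assumptions about the behavior, and unmatched placeholders are simply left intact.

```typescript
interface PromptChatMessage {
  role: "user" | "system" | "ai" | "tool";
  content: string;
}

// Sketch of MUSTACHE-style substitution (illustrative, not the library code).
function formatMustacheMessages(
  messages: PromptChatMessage[],
  variables: Record<string, string | number | boolean> = {}
): PromptChatMessage[] {
  return messages.map((message) => ({
    ...message,
    // Replace each {{name}} placeholder with its value; leave
    // placeholders with no matching variable unchanged.
    content: message.content.replace(/\{\{\s*(\w+)\s*\}\}/g, (match, name) =>
      name in variables ? String(variables[name]) : match
    ),
  }));
}

const formatted = formatMustacheMessages(
  [
    { role: "system", content: "You are a support agent." },
    { role: "user", content: "{{user_message}}" },
  ],
  { user_message: "Where is my order?" }
);
// formatted[1].content === "Where is my order?"
```

In practice you would pass a retrieved prompt's `template.messages` through `formatPromptMessages` before sending them to a model SDK.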

### Model Provider Support

Complete support for major AI model providers with provider-specific configuration options.

**Supported Providers:**

- **OpenAI**: GPT models with full parameter support
- **Azure OpenAI**: Azure-hosted OpenAI models
- **Anthropic**: Claude models with required `max_tokens`
- **Google**: Gemini and other Google AI models
- **DeepSeek**: DeepSeek model family
- **XAI**: xAI Grok models
- **Ollama**: Local model deployment
- **AWS**: Amazon Bedrock models

**Provider-Specific Features:**

- Parameter validation per provider requirements
- Provider-specific invocation parameter schemas
- Automatic type checking for model names and parameters
- Format conversion utilities for seamless SDK integration
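Because `InvocationParameters` is a discriminated union, TypeScript narrows the provider-specific schema from the `type` tag. A minimal sketch, using a locally redeclared two-provider slice of the union (`describeParams` is a hypothetical helper, not part of the client):

```typescript
// Local two-provider slice of the InvocationParameters union, redeclared
// here so the example is self-contained.
type InvocationParameters =
  | { type: "openai"; openai: { temperature?: number; max_tokens?: number } }
  | { type: "anthropic"; anthropic: { max_tokens: number; temperature?: number } };

// The `type` discriminant tells TypeScript which provider schema applies
// inside each branch.
function describeParams(params: InvocationParameters): string {
  switch (params.type) {
    case "openai":
      // params.openai is in scope here; max_tokens is optional for OpenAI.
      return `openai temperature=${params.openai.temperature ?? "default"}`;
    case "anthropic":
      // max_tokens is required in the Anthropic schema.
      return `anthropic max_tokens=${params.anthropic.max_tokens}`;
  }
}

const openaiDesc = describeParams({ type: "openai", openai: { temperature: 0.3 } });
// "openai temperature=0.3"
const anthropicDesc = describeParams({
  type: "anthropic",
  anthropic: { max_tokens: 1000 },
});
// "anthropic max_tokens=1000"
```

The same narrowing is what lets the compiler reject an Anthropic parameter object that omits `max_tokens` while accepting one for OpenAI that does.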