# Chat Interface

Conversational AI interface with message handling, streaming, and structured output support. It handles system messages, multimodal content, and real-time streaming responses.

## Capabilities

### Chat Requests

Configure conversational interactions with role-based messaging and response controls.

```python { .api }
class ChatRequest:
    model: str
    messages: Sequence[Union[Message, TextMessage]]
    maximum_tokens: Optional[int] = None
    temperature: Optional[float] = None
    top_k: Optional[int] = None
    top_p: Optional[float] = None
    stream_options: Optional[StreamOptions] = None
    steering_concepts: Optional[List[str]] = None
    response_format: Optional[ResponseFormat] = None
    """
    Request configuration for chat completion.

    Attributes:
    - model: Model name to use for chat
    - messages: Conversation history as a sequence of messages
    - maximum_tokens: Maximum tokens to generate in the response
    - temperature: Sampling temperature for response generation
    - top_k: Top-k sampling parameter
    - top_p: Top-p/nucleus sampling parameter
    - stream_options: Configuration for streaming responses
    - steering_concepts: Concept IDs for content steering
    - response_format: Structured output format specification
    """

    def to_json(self) -> Mapping[str, Any]:
        """Serialize request to JSON format."""
```
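
For orientation, a minimal sketch of building a request and inspecting the payload that `to_json` produces (the exact JSON layout is determined by the client library):

```python
from aleph_alpha_client import ChatRequest, TextMessage, Role

# Minimal request: a single user message with a token budget.
request = ChatRequest(
    model="luminous-extended",
    messages=[TextMessage(role=Role.User, content="Hello!")],
    maximum_tokens=50,
)

# to_json() yields the mapping sent to the API; handy for logging/debugging.
payload = request.to_json()
print(payload)
```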

### Message Types

Flexible message structures supporting both simple text and rich multimodal content.

```python { .api }
class Message:
    role: Role
    content: Union[str, List[Union[str, Image]]]
    """
    Chat message with role and multimodal content.

    Attributes:
    - role: Message role (user, assistant, system)
    - content: Text string or mixed list of text and images
    """

    def to_json(self) -> Mapping[str, Any]:
        """Serialize message to JSON format."""

class TextMessage:
    role: Role
    content: str
    """
    Text-only chat message.

    Attributes:
    - role: Message role (user, assistant, system)
    - content: Text content only
    """

    @staticmethod
    def from_json(json: Dict[str, Any]) -> TextMessage:
        """Create TextMessage from JSON data."""

    def to_json(self) -> Mapping[str, Any]:
        """Serialize message to JSON format."""
```
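
To make the distinction concrete, a short sketch: `TextMessage` carries plain text only, while `Message` accepts a mixed list of text and `Image` parts (`chart.png` is a hypothetical local file):

```python
from aleph_alpha_client import Message, TextMessage, Role, Image

# Text-only message: content is a plain string.
text_msg = TextMessage(role=Role.User, content="Summarize our discussion.")

# Multimodal message: content mixes text and images in one list.
image = Image.from_file("chart.png")  # hypothetical local file
rich_msg = Message(role=Role.User, content=["Describe this image:", image])
```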

### Chat Responses

Response structure containing generated assistant messages with completion metadata.

```python { .api }
class ChatResponse:
    finish_reason: FinishReason
    message: TextMessage
    """
    Response from chat completion.

    Attributes:
    - finish_reason: Why generation stopped
    - message: Generated assistant response message
    """

    @staticmethod
    def from_json(json: Dict[str, Any]) -> ChatResponse:
        """Create response from JSON data."""
```
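
For example, `finish_reason` can be checked before the message is used, since `FinishReason.Length` indicates the response may have been truncated (a minimal sketch):

```python
from aleph_alpha_client import Client, ChatRequest, FinishReason, TextMessage, Role

client = Client(token="your-api-token")
request = ChatRequest(
    model="luminous-extended",
    messages=[TextMessage(role=Role.User, content="What is the capital of Spain?")],
    maximum_tokens=20,
)
response = client.chat(request, model="luminous-extended")

# FinishReason.Length means the maximum_tokens budget ran out,
# so the message may be cut off mid-sentence.
if response.finish_reason == FinishReason.Length:
    print("Warning: response was truncated by the token limit.")
print(response.message.content)
```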

### Role System

Enumeration defining the different participant roles in a conversation.

```python { .api }
class Role(str, Enum):
    User = "user"            # User/human messages
    Assistant = "assistant"  # AI assistant responses
    System = "system"        # System instructions/context
```
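
Because `Role` subclasses `str`, its members compare equal to their string values, which is convenient when working with raw JSON:

```python
# str-enum behavior: members are also plain strings.
assert Role.User == "user"
assert Role("assistant") is Role.Assistant
```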

### Completion Control

Enumerations and classes for controlling chat completion behavior.

```python { .api }
class FinishReason(str, Enum):
    Stop = "stop"                     # Natural completion or stop sequence
    Length = "length"                 # Maximum length reached
    ContentFilter = "content_filter"  # Content filtering triggered

class StreamOptions:
    include_usage: bool
    """
    Configuration for streaming responses.

    Attributes:
    - include_usage: Include token usage statistics in the stream
    """

class Usage:
    completion_tokens: int
    prompt_tokens: int
    total_tokens: int
    """
    Token usage statistics.

    Attributes:
    - completion_tokens: Tokens used in the generated response
    - prompt_tokens: Tokens used in the input messages
    - total_tokens: Total tokens used (prompt + completion)
    """
```
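
A small sketch of opting in to usage reporting on a streaming request; for any `Usage` item received, the counts are additive (`total_tokens == prompt_tokens + completion_tokens`):

```python
from aleph_alpha_client import ChatRequest, StreamOptions, TextMessage, Role

request = ChatRequest(
    model="luminous-extended",
    messages=[TextMessage(role=Role.User, content="Hi there")],
    # Ask the stream to also emit a Usage item with token statistics.
    stream_options=StreamOptions(include_usage=True),
)

# For any Usage item `usage` yielded by the stream:
# usage.total_tokens == usage.prompt_tokens + usage.completion_tokens
```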

### Synchronous Chat

Generate chat responses using the synchronous client.

```python { .api }
def chat(self, request: ChatRequest, model: str) -> ChatResponse:
    """
    Generate chat response.

    Parameters:
    - request: Chat configuration with message history
    - model: Model name to use for generation

    Returns:
    ChatResponse with assistant message
    """
```

### Asynchronous Chat with Streaming

Generate chat responses asynchronously with optional real-time streaming.

```python { .api }
async def chat(self, request: ChatRequest, model: str) -> ChatResponse:
    """
    Generate chat response (async).

    Parameters:
    - request: Chat configuration with message history
    - model: Model name to use for generation

    Returns:
    ChatResponse with assistant message
    """

async def chat_with_streaming(
    self,
    request: ChatRequest,
    model: str
) -> AsyncGenerator[Union[ChatStreamChunk, Usage, FinishReason], None]:
    """
    Generate chat response with streaming.

    Parameters:
    - request: Chat configuration with message history
    - model: Model name to use for generation

    Yields:
    Stream chunks, usage stats, and finish reason
    """
```

### Streaming Components

Data structures for handling streaming chat responses.

```python { .api }
class ChatStreamChunk:
    content: str
    role: Optional[Role]
    """
    Streaming chat response chunk.

    Attributes:
    - content: Partial response content
    - role: Role (only present in the first chunk)
    """
```
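
Since each `ChatStreamChunk` carries only a partial `content` (and `role` is set only on the first chunk), a complete reply is assembled by concatenating chunk contents. A minimal sketch:

```python
import asyncio
from aleph_alpha_client import (
    AsyncClient, ChatRequest, ChatStreamChunk, TextMessage, Role
)

async def collect_reply() -> str:
    async with AsyncClient(token="your-api-token") as client:
        request = ChatRequest(
            model="luminous-extended",
            messages=[TextMessage(role=Role.User, content="Hello!")],
        )
        parts = []
        async for item in client.chat_with_streaming(request, "luminous-extended"):
            if isinstance(item, ChatStreamChunk):
                # role appears only on the first chunk; content is a partial delta.
                parts.append(item.content)
        return "".join(parts)

print(asyncio.run(collect_reply()))
```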

211

212

### Usage Examples

213

214

Basic chat interactions and advanced features:

215

216

```python
from aleph_alpha_client import (
    AsyncClient, Client, ChatRequest, Message, TextMessage, Role,
    FinishReason, StreamOptions, ChatStreamChunk, Usage
)

client = Client(token="your-api-token")

# Simple chat conversation
messages = [
    TextMessage(role=Role.System, content="You are a helpful AI assistant."),
    TextMessage(role=Role.User, content="What is the capital of Spain?")
]

request = ChatRequest(
    model="luminous-extended",
    messages=messages,
    maximum_tokens=100,
    temperature=0.7
)

response = client.chat(request, model="luminous-extended")
print(f"Assistant: {response.message.content}")
print(f"Finish reason: {response.finish_reason}")

# Multi-turn conversation
conversation = [
    TextMessage(role=Role.System, content="You are a helpful coding assistant."),
    TextMessage(role=Role.User, content="How do I reverse a string in Python?"),
]

# Get first response
request = ChatRequest(model="luminous-extended", messages=conversation)
response = client.chat(request, model="luminous-extended")

# Add assistant response to conversation
conversation.append(response.message)

# Continue conversation
conversation.append(TextMessage(
    role=Role.User,
    content="Can you show me a more efficient way?"
))

request = ChatRequest(model="luminous-extended", messages=conversation)
response = client.chat(request, model="luminous-extended")
print(f"Assistant: {response.message.content}")

# Multimodal chat with images
from aleph_alpha_client import Image

image = Image.from_file("chart.png")
multimodal_message = Message(
    role=Role.User,
    content=["What trends do you see in this chart?", image]
)

request = ChatRequest(
    model="luminous-extended",
    messages=[multimodal_message],
    maximum_tokens=200
)

response = client.chat(request, model="luminous-extended")
print(response.message.content)

# Streaming chat (async)
import asyncio

async def streaming_chat():
    async with AsyncClient(token="your-api-token") as client:
        messages = [
            TextMessage(role=Role.User, content="Tell me a story about robots.")
        ]

        request = ChatRequest(
            model="luminous-extended",
            messages=messages,
            maximum_tokens=300,
            temperature=0.8,
            stream_options=StreamOptions(include_usage=True)
        )

        print("Assistant: ", end="", flush=True)
        async for item in client.chat_with_streaming(request, "luminous-extended"):
            if isinstance(item, ChatStreamChunk):
                print(item.content, end="", flush=True)
            elif isinstance(item, Usage):
                print(f"\nTokens used: {item.total_tokens}")
            elif isinstance(item, FinishReason):
                print(f"\nFinished: {item}")

asyncio.run(streaming_chat())

# Structured output (if supported)
from aleph_alpha_client import JSONSchema

# Define response schema
schema = JSONSchema(
    name="story_analysis",
    description="Analysis of a story",
    schema={
        "type": "object",
        "properties": {
            "theme": {"type": "string"},
            "characters": {"type": "array", "items": {"type": "string"}},
            "rating": {"type": "integer", "minimum": 1, "maximum": 10}
        },
        "required": ["theme", "characters", "rating"]
    }
)

request = ChatRequest(
    model="luminous-extended",
    messages=[
        TextMessage(role=Role.User, content="Analyze this story: [story text]")
    ],
    response_format=schema
)

response = client.chat(request, model="luminous-extended")
# Response will be structured JSON matching the schema
```