# Aleph Alpha Client

A comprehensive Python client library for interacting with Aleph Alpha's language model APIs. The library provides both synchronous and asynchronous client implementations for accessing various AI capabilities including text completion, embeddings, semantic search, chat interfaces, and task-specific endpoints.

## Package Information

- **Package Name**: aleph-alpha-client
- **Language**: Python
- **Installation**: `pip install aleph-alpha-client`
- **Python Requirements**: >=3.9,<3.14

## Core Imports

```python
from aleph_alpha_client import Client, AsyncClient
```

Common imports for different functionality:

```python
from aleph_alpha_client import (
    Client, AsyncClient,
    CompletionRequest, CompletionResponse,
    ChatRequest, ChatResponse, Message, TextMessage, Role,
    Prompt, Text, Image, Tokens,
    EmbeddingRequest, EmbeddingV2Request, SemanticEmbeddingRequest,
    ExplanationRequest, EvaluationRequest,
    TokenizationRequest, DetokenizationRequest,
    TranslationRequest, TranslationResponse,
    Document, PromptTemplate,
    JSONSchema, ResponseFormat,
    SteeringPairedExample, SteeringConceptCreationRequest,
    load_base64_from_file, load_base64_from_url,
)
```

## Basic Usage

```python
from aleph_alpha_client import Client, CompletionRequest, Prompt

# Initialize client
client = Client(token="your-api-token")

# Simple text completion
request = CompletionRequest(
    prompt=Prompt.from_text("Tell me about artificial intelligence"),
    maximum_tokens=100,
)
response = client.complete(request, model="luminous-extended")
print(response.completions[0].completion)

# Async client usage
import asyncio
from aleph_alpha_client import AsyncClient

async def main():
    async with AsyncClient(token="your-api-token") as client:
        response = await client.complete(request, model="luminous-extended")
        print(response.completions[0].completion)

asyncio.run(main())
```

## Architecture

The library is organized around several key components:

- **Client Classes**: `Client` and `AsyncClient` provide the main interface to Aleph Alpha APIs
- **Request/Response Objects**: Structured data classes for each API endpoint
- **Prompt System**: Flexible multimodal prompt construction with `Prompt`, `Text`, `Image`, and `Tokens`
- **Control Mechanisms**: Fine-grained attention control through `TextControl`, `ImageControl`, and `TokenControl`
- **Exception Handling**: Custom exceptions for quota limits (`QuotaError`) and service availability (`BusyError`)

This design enables comprehensive access to Aleph Alpha's language models while providing strong typing, async support, and advanced features like prompt templating, model explanations, and structured output generation.

## Capabilities

### Client Management

Core client classes for synchronous and asynchronous API access, with connection management, authentication, retry logic, and error handling.

```python { .api }
class Client:
    def __init__(
        self,
        token: str,
        host: str = "https://api.aleph-alpha.com",
        hosting: Optional[str] = None,
        request_timeout_seconds: int = 305,
        total_retries: int = 8,
        nice: bool = False,
        verify_ssl: bool = True
    ): ...

class AsyncClient:
    def __init__(
        self,
        token: str,
        host: str = "https://api.aleph-alpha.com",
        hosting: Optional[str] = None,
        request_timeout_seconds: int = 305,
        total_retries: int = 8,
        nice: bool = False,
        verify_ssl: bool = True
    ): ...

    async def close(self) -> None: ...
```

[Client Management](./client-management.md)
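
The `total_retries` parameter above suggests the clients retry transient failures before giving up. As an illustration of the general pattern only (the real client's retry conditions and backoff schedule may differ), an exponential-backoff loop looks like this:

```python
import time
import random

def with_retries(call, total_retries=8, base_delay=0.5, sleep=time.sleep):
    """Invoke `call` up to `total_retries` times, backing off exponentially.

    `call` is any zero-argument function that raises on transient failure.
    Illustrative only: the real client retries on specific HTTP status codes.
    """
    for attempt in range(total_retries):
        try:
            return call()
        except Exception:
            if attempt == total_retries - 1:
                raise
            # Exponential backoff with jitter: 0.5s, 1s, 2s, ...
            delay = base_delay * (2 ** attempt)
            sleep(delay + random.uniform(0, 0.1))

# Example: a flaky call that succeeds on the third attempt
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = with_retries(flaky, sleep=lambda _: None)  # sleep disabled for demo
```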

### Prompt Construction

Flexible multimodal prompt system supporting text, images, and tokens with advanced attention control mechanisms.

```python { .api }
class Prompt:
    def __init__(self, items: Union[str, Sequence[PromptItem]]): ...

    @staticmethod
    def from_text(text: str, controls: Optional[Sequence[TextControl]] = None) -> Prompt: ...

    @staticmethod
    def from_image(image: Image) -> Prompt: ...

class Text:
    text: str
    controls: Sequence[TextControl]

class Image:
    base_64: str
    cropping: Optional[Cropping]
    controls: Sequence[ImageControl]

    @classmethod
    def from_file(cls, path: Union[str, Path], controls: Optional[Sequence[ImageControl]] = None) -> Image: ...
```

[Prompt Construction](./prompt-construction.md)
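
To show how a multimodal prompt composes, here is a sketch using plain dataclasses as stand-ins for the library's prompt items; these are not the real `Text`/`Image` classes (those are declared above), only an illustration of the item-sequence idea:

```python
from dataclasses import dataclass, field
from typing import List, Union

# Plain-dataclass stand-ins for the library's Text and Image prompt items.
@dataclass
class TextItem:
    text: str

@dataclass
class ImageItem:
    base_64: str

@dataclass
class SimplePrompt:
    items: List[Union[TextItem, ImageItem]] = field(default_factory=list)

# A prompt interleaving text and image items, as Prompt(items=[...]) allows.
prompt = SimplePrompt(items=[
    TextItem("Describe this image:"),
    ImageItem(base_64="aGVsbG8="),  # placeholder base64 payload
    TextItem("Answer in one sentence."),
])

kinds = [type(item).__name__ for item in prompt.items]
```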

### Text Completion

Generate text continuations with extensive sampling controls, multiple completions, and streaming support.

```python { .api }
class CompletionRequest:
    prompt: Prompt
    maximum_tokens: Optional[int] = None
    temperature: float = 0.0
    top_k: int = 0
    top_p: float = 0.0
    presence_penalty: float = 0.0
    frequency_penalty: float = 0.0
    stop_sequences: Optional[List[str]] = None
    n: int = 1

def complete(self, request: CompletionRequest, model: str) -> CompletionResponse: ...
async def complete_with_streaming(self, request: CompletionRequest, model: str) -> AsyncGenerator: ...
```

[Text Completion](./text-completion.md)

### Chat Interface

Conversational AI interface with message handling, streaming, and structured output support.

```python { .api }
class ChatRequest:
    model: str
    messages: Sequence[Union[Message, TextMessage]]
    maximum_tokens: Optional[int] = None
    temperature: Optional[float] = None
    response_format: Optional[ResponseFormat] = None

class Message:
    role: Role
    content: Union[str, List[Union[str, Image]]]

def chat(self, request: ChatRequest, model: str) -> ChatResponse: ...
async def chat_with_streaming(self, request: ChatRequest, model: str) -> AsyncGenerator: ...
```

[Chat Interface](./chat-interface.md)
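
A multi-turn conversation is an ordered sequence of role-tagged messages. A minimal sketch of that shape using plain dicts in place of `Message`/`Role` (the real classes are declared above):

```python
# Plain-dict stand-ins for Message/Role, illustrating conversation structure.
def make_message(role, content):
    assert role in ("system", "user", "assistant")
    return {"role": role, "content": content}

conversation = [
    make_message("system", "You are a concise assistant."),
    make_message("user", "What is the capital of France?"),
    make_message("assistant", "Paris."),
    make_message("user", "And of Germany?"),
]

# The equivalent library call would pass Message objects via
# ChatRequest(model=..., messages=conversation).
roles = [m["role"] for m in conversation]
```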

### Embeddings & Semantic Search

Generate vector embeddings for text and images with multiple representation types and batch processing capabilities.

```python { .api }
class SemanticEmbeddingRequest:
    prompt: Prompt
    representation: SemanticRepresentation
    compress_to_size: Optional[int] = None
    normalize: bool = False

class BatchSemanticEmbeddingRequest:
    prompts: Sequence[Prompt]
    representation: SemanticRepresentation
    compress_to_size: Optional[int] = None
    normalize: bool = False

def semantic_embed(self, request: SemanticEmbeddingRequest, model: str) -> SemanticEmbeddingResponse: ...
def batch_semantic_embed(self, request: BatchSemanticEmbeddingRequest, model: Optional[str] = None) -> BatchSemanticEmbeddingResponse: ...
```

[Embeddings & Semantic Search](./embeddings.md)
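
Once embeddings are in hand (say, one `Query` representation and several `Document` representations from `semantic_embed`), semantic search reduces to ranking by cosine similarity. A self-contained sketch with toy vectors standing in for real API responses:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for embeddings returned by the API (real ones are
# high-dimensional float vectors).
query_embedding = [1.0, 0.0, 1.0]
document_embeddings = {
    "doc_a": [0.9, 0.1, 0.8],    # close to the query direction
    "doc_b": [-1.0, 0.5, -0.7],  # pointing away from it
}

# Rank documents by similarity to the query, best match first.
ranked = sorted(
    document_embeddings,
    key=lambda name: cosine_similarity(query_embedding, document_embeddings[name]),
    reverse=True,
)
```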

### Model Explanations

Generate explanations for model predictions showing which parts of the input influenced the output, with configurable granularity and postprocessing.

```python { .api }
class ExplanationRequest:
    prompt: Prompt
    target: str
    prompt_granularity: Optional[Union[PromptGranularity, str, CustomGranularity]] = None
    target_granularity: Optional[TargetGranularity] = None
    postprocessing: Optional[ExplanationPostprocessing] = None

def explain(self, request: ExplanationRequest, model: str) -> ExplanationResponse: ...
```

[Model Explanations](./explanations.md)

### Tokenization & Text Processing

Convert between text and tokens, with support for different tokenization strategies and detokenization.

```python { .api }
class TokenizationRequest:
    prompt: str
    tokens: bool
    token_ids: bool

class DetokenizationRequest:
    token_ids: Sequence[int]

def tokenize(self, request: TokenizationRequest, model: str) -> TokenizationResponse: ...
def detokenize(self, request: DetokenizationRequest, model: str) -> DetokenizationResponse: ...
```

[Tokenization & Text Processing](./tokenization.md)
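
To make the `tokens`/`token_ids` distinction concrete, here is a toy whitespace tokenizer. It has nothing to do with the model's actual tokenizer; it only illustrates the two parallel views of a tokenized string that `TokenizationRequest` lets you ask for:

```python
def toy_tokenize(text):
    """Split on whitespace and assign ids in order of first appearance.

    Returns (tokens, token_ids) -- the two views TokenizationRequest exposes.
    NOT the model's tokenizer; purely illustrative.
    """
    tokens = text.split()
    vocab = {}
    token_ids = []
    for tok in tokens:
        token_ids.append(vocab.setdefault(tok, len(vocab)))
    return tokens, token_ids

tokens, ids = toy_tokenize("the cat sat on the mat")
```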

### Evaluation & Testing

Evaluate model performance against expected outputs with detailed metrics and analysis.

```python { .api }
class EvaluationRequest:
    prompt: Prompt
    completion_expected: str

def evaluate(self, request: EvaluationRequest, model: str) -> EvaluationResponse: ...
```

[Evaluation & Testing](./evaluation.md)

### Structured Output & Response Formatting

Generate structured JSON output that conforms to specified schemas, with support for both manual JSON schemas and automatic Pydantic model integration.

```python { .api }
class JSONSchema:
    schema: Mapping[str, Any]
    name: str
    description: Optional[str] = None
    strict: Optional[bool] = False

    @classmethod
    def from_pydantic(cls, model_class: Type[BaseModel]) -> JSONSchema: ...

ResponseFormat = Union[JSONSchema, Type[BaseModel]]
```

[Structured Output](./structured-output.md)
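
A schema passed to `JSONSchema` is an ordinary JSON Schema mapping. A sketch of one constraining the model to a simple person record; the wrapping call shown in the comment follows the `JSONSchema` signature declared above:

```python
# A standard JSON Schema mapping of the kind JSONSchema.schema expects.
person_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer", "minimum": 0},
    },
    "required": ["name", "age"],
    "additionalProperties": False,
}

# It would typically be wrapped and passed to a chat request as:
#   JSONSchema(schema=person_schema, name="person", strict=True)
#   ChatRequest(model=..., messages=..., response_format=...)

required_fields = person_schema["required"]
```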

### Translation Services

Translate text between languages using Aleph Alpha's translation models with quality scoring and segment-level analysis.

```python { .api }
class TranslationRequest:
    model: str
    source: str
    target_language: str

class TranslationResponse:
    translation: str
    score: float
    segments: Optional[List[TranslationSegment]]
    num_tokens_prompt_total: int
    num_tokens_generated: int

def translate(self, request: TranslationRequest) -> TranslationResponse: ...
```

[Translation Services](./translation.md)

### Steering & Content Control

Create and use steering concepts to guide model behavior and output style through positive and negative examples.

```python { .api }
class SteeringPairedExample:
    negative: str
    positive: str

class SteeringConceptCreationRequest:
    examples: List[SteeringPairedExample]

def create_steering_concept(self, request: SteeringConceptCreationRequest) -> SteeringConceptCreationResponse: ...
```

[Steering & Content Control](./steering.md)

### Document & Prompt Templates

Advanced prompt construction tools supporting DOCX documents and reusable template generation with Liquid syntax.

```python { .api }
class Document:
    @classmethod
    def from_docx_file(cls, path: str) -> Document: ...

    @classmethod
    def from_text(cls, text: str) -> Document: ...

class PromptTemplate:
    def __init__(self, template_str: str): ...

    def placeholder(self, prompt_item: Union[Image, Tokens]) -> Placeholder: ...

    def to_prompt(self, **kwargs) -> Prompt: ...
```

[Document & Prompt Templates](./document-prompt-template.md)
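
`PromptTemplate` renders Liquid syntax (`{{ variable }}`). As a rough stdlib analogue of the substitution step only, Python's `string.Template` shows the idea; this is not the library's template engine:

```python
from string import Template

# The Liquid equivalent would be:
#   PromptTemplate("Summarize the following text: {{ text }}").to_prompt(text=...)
# string.Template uses $-placeholders instead of {{ }}.
template = Template("Summarize the following text: $text")

rendered = template.substitute(text="Aleph Alpha provides multimodal language models.")
```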

### Utility Functions

Helper functions for loading and encoding media files from various sources for use in multimodal prompts.

```python { .api }
def load_base64_from_file(path_and_filename: str) -> str: ...

def load_base64_from_url(url: str) -> str: ...
```

[Utility Functions](./utilities.md)
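
The loaders above can be approximated with the standard library. A hedged sketch of what `load_base64_from_file` plausibly does (the real function may differ in encoding details and error handling):

```python
import base64
import os
import tempfile

def load_base64_from_file_sketch(path_and_filename: str) -> str:
    """Read a file's bytes and return them base64-encoded as a string.

    Illustrative stand-in for load_base64_from_file, not its actual source.
    """
    with open(path_and_filename, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

# Round-trip demonstration with a temporary file
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello")
    tmp_path = tmp.name

encoded = load_base64_from_file_sketch(tmp_path)
os.unlink(tmp_path)
```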

## Types

```python { .api }
# Core enums
class Role(str, Enum):
    User = "user"
    Assistant = "assistant"
    System = "system"

class SemanticRepresentation(Enum):
    Symmetric = "symmetric"
    Document = "document"
    Query = "query"

class ControlTokenOverlap(Enum):
    Partial = "partial"
    Complete = "complete"

class FinishReason(str, Enum):
    Stop = "stop"
    Length = "length"
    ContentFilter = "content_filter"

# Structured output types
ResponseFormat = Union[JSONSchema, Type[BaseModel]]

# Steering types
class SteeringPairedExample:
    negative: str
    positive: str

class SteeringConceptCreationResponse:
    id: str

# Translation types
class TranslationSegment:
    source: str
    translation: str
    score: float

# Template types
Placeholder = NewType("Placeholder", UUID)

# Exception classes
class QuotaError(Exception): ...
class BusyError(Exception): ...

# Constants
POOLING_OPTIONS: List[str] = ["mean", "max", "last_token", "abs_max"]
```