
# Model Explanations

Generate explanations for model predictions that show which parts of the input influenced the output, with configurable granularity and postprocessing. These interpretability features help you understand how models make decisions across text, image, and multimodal inputs.

## Capabilities

### Explanation Requests

Configure explanation generation with flexible granularity controls and postprocessing options.

```python { .api }
class ExplanationRequest:
    prompt: Prompt
    target: str
    contextual_control_threshold: Optional[float] = None
    control_factor: Optional[float] = None
    control_token_overlap: Optional[ControlTokenOverlap] = None
    control_log_additive: Optional[bool] = None
    prompt_granularity: Optional[Union[PromptGranularity, str, CustomGranularity]] = None
    target_granularity: Optional[TargetGranularity] = None
    postprocessing: Optional[ExplanationPostprocessing] = None
    normalize: Optional[bool] = None
    """
    Request for model explanation generation.

    Attributes:
    - prompt: Input prompt to explain (text, image, or multimodal)
    - target: Target text to generate explanations for
    - contextual_control_threshold: Threshold for attention controls
    - control_factor: Factor for attention control strength
    - control_token_overlap: How to handle partial token overlap
    - control_log_additive: Whether controls are applied additively in log space rather than multiplicatively
    - prompt_granularity: Level of detail for prompt explanations
    - target_granularity: Level of detail for target explanations
    - postprocessing: Score transformation options
    - normalize: Normalize explanation scores
    """

def explain(self, request: ExplanationRequest, model: str) -> ExplanationResponse:
    """
    Generate model explanations.

    Parameters:
    - request: Explanation configuration
    - model: Model name to use for explanations

    Returns:
    ExplanationResponse with detailed explanations
    """
```
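
Only `prompt` and `target` are required; every other field is `Optional` and defaults to `None`, presumably falling back to service-side defaults when omitted. A minimal sketch:

```python
from aleph_alpha_client import ExplanationRequest, Prompt

# Minimal request: prompt and target are the only required fields;
# granularity, postprocessing, and control settings are left unset here.
request = ExplanationRequest(
    prompt=Prompt.from_text("The cat sat on the mat."),
    target="sat",
)
```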

### Explanation Responses

Structured response containing detailed explanations with utility methods for coordinate conversion and text integration.

```python { .api }
class ExplanationResponse:
    model_version: str
    explanations: List[Explanation]
    """
    Response from explanation request.

    Attributes:
    - model_version: Version of model used
    - explanations: List of explanation objects for target segments
    """

    def with_image_prompt_items_in_pixels(self, prompt: Prompt) -> ExplanationResponse:
        """
        Convert image coordinates from normalized to pixel coordinates.

        Parameters:
        - prompt: Original prompt with image dimensions

        Returns:
        ExplanationResponse with pixel coordinates
        """

    def with_text_from_prompt(self, request: ExplanationRequest) -> ExplanationResponse:
        """
        Add text content to explanations for easier interpretation.

        Parameters:
        - request: Original explanation request

        Returns:
        ExplanationResponse with text content added
        """
```
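
A usage sketch for `with_text_from_prompt`, assuming (as the score types below suggest) that enriched text scores are `TextScoreWithRaw` values carrying a `text` attribute:

```python
from aleph_alpha_client import (
    Client, ExplanationRequest, Prompt, PromptGranularity
)

client = Client(token="your-api-token")
request = ExplanationRequest(
    prompt=Prompt.from_text("The quick brown fox jumps over the lazy dog."),
    target="jumps",
    prompt_granularity=PromptGranularity.Word,
)
response = client.explain(request, model="luminous-extended")

# Enrich scores with the text they refer to, so segments can be
# printed without manual start/length slicing.
enriched = response.with_text_from_prompt(request)
for explanation in enriched.explanations:
    for item in explanation.items:
        for score in getattr(item, "scores", []):
            text = getattr(score, "text", None)  # only *WithRaw scores carry text
            if text is not None:
                print(f"{text!r}: {score.score:.3f}")
```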

### Individual Explanations

Detailed explanation for each target segment with support for different prompt item types.

```python { .api }
class Explanation:
    target: str
    items: List[Union[
        TextPromptItemExplanation,
        ImagePromptItemExplanation,
        TokenPromptItemExplanation,
        TargetPromptItemExplanation
    ]]
    """
    Explanation for a target text segment.

    Attributes:
    - target: Target text portion being explained
    - items: Explanations for each prompt item type
    """

    def with_image_prompt_items_in_pixels(self, prompt: Prompt) -> Explanation:
        """Convert image coordinates to pixel coordinates."""

    def with_text_from_prompt(self, prompt: Prompt, target: str) -> Explanation:
        """Add text content for easier interpretation."""
```

### Explanation Item Types

Explanation item types for each kind of prompt content.

```python { .api }
class TextPromptItemExplanation:
    scores: List[Union[TextScore, TextScoreWithRaw]]
    """
    Explanation for text prompt items.

    Attributes:
    - scores: Importance scores for text segments
    """

class ImagePromptItemExplanation:
    scores: List[ImageScore]
    """
    Explanation for image prompt items.

    Attributes:
    - scores: Importance scores for image regions
    """

    def in_pixels(self, prompt_item: PromptItem) -> ImagePromptItemExplanation:
        """Convert coordinates to pixel values."""

class TokenPromptItemExplanation:
    scores: List[TokenScore]
    """
    Explanation for token prompt items.

    Attributes:
    - scores: Importance scores for individual tokens
    """

class TargetPromptItemExplanation:
    scores: List[Union[TargetScore, TargetScoreWithRaw]]
    """
    Explanation for target text segments.

    Attributes:
    - scores: Importance scores for target text parts
    """
```

### Score Types

Detailed scoring structures for different content types with positional information.

```python { .api }
class TextScore:
    start: int
    length: int
    score: float
    """
    Importance score for a text segment.

    Attributes:
    - start: Starting character index
    - length: Length in characters
    - score: Importance score (higher = more important)
    """

class ImageScore:
    left: float
    top: float
    width: float
    height: float
    score: float
    """
    Importance score for an image region.

    Attributes:
    - left: Left coordinate (0-1, normalized)
    - top: Top coordinate (0-1, normalized)
    - width: Width (0-1, normalized)
    - height: Height (0-1, normalized)
    - score: Importance score
    """

class TokenScore:
    score: float
    """
    Importance score for an individual token.

    Attributes:
    - score: Importance score for this token
    """

class TargetScore:
    start: int
    length: int
    score: float
    """
    Importance score for a target text segment.

    Attributes:
    - start: Starting character index in target
    - length: Length in characters
    - score: Importance score
    """
```
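
Normalized `ImageScore` regions map to pixels by scaling with the image's dimensions; `with_image_prompt_items_in_pixels` does this for you, but the arithmetic is worth seeing once. A standalone sketch with illustrative numbers:

```python
# Illustrative only: an 800x600 image and a hand-picked normalized region.
img_width, img_height = 800, 600
left, top, width, height = 0.25, 0.10, 0.50, 0.30  # normalized 0-1 values

pixel_region = (
    round(left * img_width),     # left:   200 px
    round(top * img_height),     # top:     60 px
    round(width * img_width),    # width:  400 px
    round(height * img_height),  # height: 180 px
)
print(pixel_region)  # (200, 60, 400, 180)
```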

### Granularity Controls

Configuration options for explanation detail level and scope.

```python { .api }
class PromptGranularity(Enum):
    Token = "token"          # Token-level explanations
    Word = "word"            # Word-level explanations
    Sentence = "sentence"    # Sentence-level explanations
    Paragraph = "paragraph"  # Paragraph-level explanations

class TargetGranularity(Enum):
    Complete = "complete"  # Explain complete target
    Token = "token"        # Per-token target explanations

class CustomGranularity:
    delimiter: str
    """
    Custom granularity specification.

    Attributes:
    - delimiter: Custom delimiter for text splitting
    """

class ExplanationPostprocessing(Enum):
    Square = "square"      # Square each score
    Absolute = "absolute"  # Absolute value of each score
```
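
Both postprocessing options discard the sign of a score; `Square` additionally widens the gap between strong and weak attributions. A quick plain-Python illustration with made-up scores:

```python
scores = [-0.8, -0.1, 0.05, 0.6]  # made-up raw attribution scores

squared = [s ** 2 for s in scores]   # [0.64, 0.01, 0.0025, 0.36]
absolute = [abs(s) for s in scores]  # [0.8, 0.1, 0.05, 0.6]

# Squaring stretches the range, so the strongest segments stand out more;
# absolute value preserves the original magnitudes.
print(squared)
print(absolute)
```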

### Usage Examples

Comprehensive examples showing different explanation use cases and configurations:

```python
from aleph_alpha_client import (
    Client, ExplanationRequest, ExplanationResponse,
    Prompt, Text, Image,
    PromptGranularity, TargetGranularity, CustomGranularity,
    ExplanationPostprocessing,
    TextPromptItemExplanation, ImagePromptItemExplanation,
    TokenPromptItemExplanation,
)

client = Client(token="your-api-token")

# Basic text explanation
prompt = Prompt.from_text("The cat sat on the mat and looked around.")
request = ExplanationRequest(
    prompt=prompt,
    target="sat on the mat",
    prompt_granularity=PromptGranularity.Word,
    target_granularity=TargetGranularity.Complete
)

response = client.explain(request, model="luminous-extended")

# Process explanations
for explanation in response.explanations:
    print(f"Target: '{explanation.target}'")

    for item in explanation.items:
        if isinstance(item, TextPromptItemExplanation):
            for score in item.scores:
                text_segment = prompt.items[0].text[score.start:score.start + score.length]
                print(f"  '{text_segment}': {score.score:.3f}")

# Enhanced explanation with text content
enhanced_response = response.with_text_from_prompt(request)
print("Enhanced explanation includes text content")

# Multimodal explanation (text + image)
image = Image.from_file("scene.jpg")
multimodal_prompt = Prompt([
    Text.from_text("This image shows a beautiful landscape with"),
    image,
    Text.from_text("mountains in the background.")
])

multimodal_request = ExplanationRequest(
    prompt=multimodal_prompt,
    target="landscape",
    prompt_granularity=PromptGranularity.Word,
    postprocessing=ExplanationPostprocessing.Absolute
)

multimodal_response = client.explain(multimodal_request, model="luminous-extended")

for explanation in multimodal_response.explanations:
    for item in explanation.items:
        if isinstance(item, TextPromptItemExplanation):
            print("Text explanations:")
            for score in item.scores:
                print(f"  Text score: {score.score:.3f}")
        elif isinstance(item, ImagePromptItemExplanation):
            print("Image explanations:")
            for score in item.scores:
                print(f"  Region ({score.left:.2f}, {score.top:.2f}) "
                      f"size ({score.width:.2f}x{score.height:.2f}): {score.score:.3f}")

# Convert image coordinates to pixels
pixel_response = multimodal_response.with_image_prompt_items_in_pixels(multimodal_prompt)
print("Converted to pixel coordinates")

# Fine-grained token-level explanation
token_request = ExplanationRequest(
    prompt=Prompt.from_text("Machine learning revolutionizes data analysis."),
    target="revolutionizes",
    prompt_granularity=PromptGranularity.Token,
    target_granularity=TargetGranularity.Token,
    normalize=True
)

token_response = client.explain(token_request, model="luminous-extended")

for explanation in token_response.explanations:
    print(f"Token-level explanation for: '{explanation.target}'")
    for item in explanation.items:
        if isinstance(item, TokenPromptItemExplanation):
            for i, score in enumerate(item.scores):
                print(f"  Token {i}: {score.score:.3f}")

# Custom granularity with a specific delimiter
custom_request = ExplanationRequest(
    prompt=Prompt.from_text("First clause; second clause; third clause."),
    target="second clause",
    prompt_granularity=CustomGranularity(delimiter=";"),
    postprocessing=ExplanationPostprocessing.Square
)

custom_response = client.explain(custom_request, model="luminous-extended")

# Explanation with attention controls
from aleph_alpha_client import TextControl, ControlTokenOverlap

controlled_text = Text(
    text="Important information is highlighted here.",
    controls=[
        TextControl(
            start=0, length=9,  # "Important"
            factor=2.0,
            token_overlap=ControlTokenOverlap.Complete
        )
    ]
)

controlled_request = ExplanationRequest(
    prompt=Prompt([controlled_text]),
    target="highlighted",
    prompt_granularity=PromptGranularity.Word,
    control_factor=1.5,
    control_log_additive=True
)

controlled_response = client.explain(controlled_request, model="luminous-extended")

# Compare explanations with and without controls
print("Explanation shows impact of attention controls")

# Batch explanation analysis
def analyze_explanations(response: ExplanationResponse, threshold: float = 0.1):
    """Analyze explanations to find the most important segments."""
    important_segments = []

    for explanation in response.explanations:
        for item in explanation.items:
            if isinstance(item, TextPromptItemExplanation):
                for score in item.scores:
                    if abs(score.score) > threshold:
                        important_segments.append({
                            'target': explanation.target,
                            'start': score.start,
                            'length': score.length,
                            'score': score.score
                        })

    return sorted(important_segments, key=lambda x: abs(x['score']), reverse=True)

# Analyze the most important segments
important = analyze_explanations(response, threshold=0.05)
print("Most important segments:")
for segment in important[:5]:  # Top 5
    print(f"  Score {segment['score']:.3f}: chars {segment['start']}-{segment['start'] + segment['length']}")
```