
# Blackbox Explanation


Model-agnostic explanation methods that can provide interpretability for any machine learning model, regardless of its internal structure. These methods work by analyzing the relationship between inputs and outputs without requiring access to model internals.


## Capabilities


### LIME (Local Interpretable Model-agnostic Explanations)


Explains individual predictions by approximating the model locally with an interpretable model. LIME generates perturbations around the instance being explained and fits a local linear model to understand feature contributions.


```python { .api }
class LimeTabular:
    def __init__(
        self,
        model,
        data,
        feature_names=None,
        feature_types=None,
        **kwargs
    ):
        """
        LIME explainer for tabular data.

        Parameters:
            model (callable): Model or prediction function (predict_proba for classification, predict for regression)
            data (array-like): Training data for generating perturbations
            feature_names (list, optional): Names for features
            feature_types (list, optional): Types for features
            **kwargs: Additional arguments passed to underlying LIME explainer
        """

    def explain_local(self, X, y=None, name=None, **kwargs):
        """
        Generate local explanations for instances.

        Parameters:
            X (array-like): Instances to explain
            y (array-like, optional): True labels
            name (str, optional): Name for explanation
            **kwargs: Additional arguments passed to underlying LIME explainer

        Returns:
            Explanation object with local feature contributions
        """
```
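The local-surrogate idea behind LIME can be illustrated without the library. The sketch below is a simplification, not interpret's implementation, and `lime_sketch` with its parameters is a hypothetical helper: it perturbs the instance with Gaussian noise, weights perturbations by proximity, and fits a weighted linear model whose coefficients serve as local feature contributions.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_sketch(predict_fn, x, n_samples=500, scale=0.5, seed=0):
    """Fit a proximity-weighted linear surrogate around instance x."""
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise around x
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    y = predict_fn(Z)
    # Weight samples by an RBF kernel on their distance to x
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale ** 2))
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, y, sample_weight=w)
    return surrogate.coef_  # local feature contributions

# Toy model: f(x) = 3*x0 - 2*x1 is globally linear, so the
# local surrogate should approximately recover its coefficients
coefs = lime_sketch(lambda Z: 3 * Z[:, 0] - 2 * Z[:, 1], np.array([1.0, 1.0]))
```

For a nonlinear model, the coefficients instead approximate the model's local behavior in the neighborhood of `x`, which is exactly what makes the explanation "local."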


### SHAP (SHapley Additive exPlanations)


Unified framework for model explanation based on cooperative game theory. SHAP provides both local and global explanations by computing Shapley values for feature contributions.


```python { .api }
class ShapKernel:
    def __init__(
        self,
        predict_fn,
        data,
        link='identity',
        feature_names=None,
        **kwargs
    ):
        """
        SHAP kernel explainer for any model.

        Parameters:
            predict_fn (callable): Model prediction function
            data (array-like): Background data for computing baselines
            link (str): Link function ('identity', 'logit')
            feature_names (list, optional): Names for features
            **kwargs: Additional arguments for KernelExplainer
        """

    def explain_local(self, X, y=None, name=None, **kwargs):
        """
        Generate SHAP explanations for instances.

        Parameters:
            X (array-like): Instances to explain
            y (array-like, optional): True labels
            name (str, optional): Name for explanation
            **kwargs: Additional arguments for explain method

        Returns:
            Explanation object with SHAP values
        """

    def explain_global(self, name=None):
        """
        Generate global SHAP summary.

        Parameters:
            name (str, optional): Name for explanation

        Returns:
            Global explanation with feature importance rankings
        """
```
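The Shapley values that the kernel explainer approximates can be computed exactly for small feature counts by enumerating every coalition. The sketch below is illustrative only (`shapley_values` is not part of the interpret API): features outside a coalition are replaced with background values, and each feature's value is its weighted average marginal contribution across coalitions.

```python
import itertools
import math
import numpy as np

def shapley_values(predict_fn, x, background):
    """Exact Shapley values by enumerating all feature coalitions."""
    d = len(x)
    phi = np.zeros(d)

    def value(subset):
        # Features in the coalition take x's values; the rest stay at background
        z = background.copy()
        z[list(subset)] = x[list(subset)]
        return predict_fn(z)

    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for S in itertools.combinations(others, k):
                # Shapley weight: |S|! * (d - |S| - 1)! / d!
                weight = math.factorial(k) * math.factorial(d - k - 1) / math.factorial(d)
                phi[i] += weight * (value(S + (i,)) - value(S))
    return phi

# Additive toy model: Shapley values equal each feature's own effect
f = lambda z: 2 * z[0] + 3 * z[1] + z[2]
phi = shapley_values(f, x=np.ones(3), background=np.zeros(3))
```

The enumeration is exponential in the number of features, which is why `ShapKernel` relies on sampled approximation instead; the values it returns satisfy the same additivity property, summing to the difference between the prediction and the background baseline.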


### Partial Dependence


Shows the marginal effect of features on the prediction outcome by averaging out the effects of all other features. Useful for understanding how individual features or feature pairs influence model predictions.


```python { .api }
class PartialDependence:
    def __init__(
        self,
        predict_fn,
        data,
        feature_names=None,
        feature_types=None,
        sampler=None,
        **kwargs
    ):
        """
        Partial dependence explainer.

        Parameters:
            predict_fn (callable): Model prediction function
            data (array-like): Training data
            feature_names (list, optional): Names for features
            feature_types (list, optional): Types for features
            sampler (callable, optional): Custom sampling strategy
            **kwargs: Additional arguments
        """

    def explain_global(self, name=None, features=None, interactions=None, grid_resolution=100, **kwargs):
        """
        Generate partial dependence plots.

        Parameters:
            name (str, optional): Name for explanation
            features (list, optional): Features to analyze
            interactions (list, optional): Feature pairs for interaction plots
            grid_resolution (int): Resolution of feature grid
            **kwargs: Additional arguments

        Returns:
            Global explanation with partial dependence curves
        """
```
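The marginal-averaging idea is simple enough to sketch directly. The helper below (illustrative only; `partial_dependence_sketch` is not part of the interpret API) sweeps one feature across a grid, clamps it to each grid value for every row of the data, and averages the model's predictions:

```python
import numpy as np

def partial_dependence_sketch(predict_fn, X, feature, grid_resolution=20):
    """Average model output over the data while sweeping one feature."""
    grid = np.linspace(X[:, feature].min(), X[:, feature].max(), grid_resolution)
    pd_curve = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature] = v  # clamp the feature for every row
        pd_curve.append(predict_fn(X_mod).mean())
    return grid, np.array(pd_curve)

# Toy model: output is linear in feature 0, so its PD curve is a straight line
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
grid, curve = partial_dependence_sketch(lambda Z: 4 * Z[:, 0] + Z[:, 1] ** 2, X, feature=0)
```

Because the effects of all other features are averaged out, the curve isolates feature 0's marginal contribution; one caveat is that clamping creates unrealistic rows when features are strongly correlated.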


### Sensitivity Analysis


Morris sensitivity analysis for understanding feature importance through one-at-a-time elementary effects. Useful for screening out uninfluential features and flagging features with nonlinear or interaction effects.


```python { .api }
class MorrisSensitivity:
    def __init__(
        self,
        predict_fn,
        data,
        feature_names=None,
        feature_types=None,
        **kwargs
    ):
        """
        Morris sensitivity analysis explainer.

        Parameters:
            predict_fn (callable): Model prediction function
            data (array-like): Training data for bounds
            feature_names (list, optional): Names for features
            feature_types (list, optional): Types for features
            **kwargs: Additional arguments
        """

    def explain_global(self, name=None, num_trajectories=10, grid_jump=0.5, **kwargs):
        """
        Generate Morris sensitivity analysis.

        Parameters:
            name (str, optional): Name for explanation
            num_trajectories (int): Number of Morris trajectories
            grid_jump (float): Size of grid jumps (0-1)
            **kwargs: Additional arguments

        Returns:
            Global explanation with sensitivity indices
        """
```
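The elementary-effects idea can be sketched in a few lines. This is a simplification of the Morris method, not interpret's implementation (`morris_sketch`, the fixed step size, and the random starts are all assumptions of the sketch): from each random start, perturb one feature at a time, record the normalized change in output, and report the mean absolute effect (often called mu*) per feature.

```python
import numpy as np

def morris_sketch(predict_fn, bounds, num_trajectories=20, seed=0):
    """One-at-a-time elementary effects: mean absolute effect per feature."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    d = len(lo)
    delta = 0.1 * (hi - lo)  # step size per feature
    effects = [[] for _ in range(d)]
    for _ in range(num_trajectories):
        x = rng.uniform(lo, hi - delta)  # random start with room for a step
        for i in range(d):
            x_step = x.copy()
            x_step[i] += delta[i]  # perturb one feature at a time
            ee = (predict_fn(x_step) - predict_fn(x)) / delta[i]
            effects[i].append(abs(ee))
    return np.array([np.mean(e) for e in effects])  # mu* per feature

# Toy model: feature 0 matters (slope 5), feature 1 is inert
mu_star = morris_sketch(lambda z: 5 * z[0], bounds=(np.zeros(2), np.ones(2)))
```

A high mean absolute effect marks an influential feature; in the full method, a high spread of effects across starting points additionally signals nonlinearity or interactions.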


## Usage Examples


### Explaining a Random Forest with LIME


```python
from interpret.blackbox import LimeTabular
from interpret import show
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split

# Load data and train model
data = load_wine()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=42
)

rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)

# Create LIME explainer
lime = LimeTabular(
    model=rf.predict_proba,
    data=X_train,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode='classification'
)

# Explain individual predictions
explanation = lime.explain_local(X_test[:5], y_test[:5])
show(explanation)
```


### Global Analysis with Partial Dependence


```python
from interpret.blackbox import PartialDependence
from interpret import show

# Create partial dependence explainer
pdp = PartialDependence(
    predict_fn=rf.predict_proba,
    data=X_train,
    feature_names=data.feature_names
)

# Analyze main effects
pdp_global = pdp.explain_global(
    features=[0, 1, 2],  # First three features
    grid_resolution=50
)
show(pdp_global)

# Analyze interactions
pdp_interactions = pdp.explain_global(
    interactions=[(0, 1), (1, 2)],  # Feature pairs
    grid_resolution=25
)
show(pdp_interactions)
```


### SHAP Analysis Workflow


```python
from interpret.blackbox import ShapKernel
from interpret import show

# Create SHAP explainer
shap_explainer = ShapKernel(
    predict_fn=rf.predict_proba,
    data=X_train[:100],  # Sample background data
    feature_names=data.feature_names
)

# Get local explanations
shap_local = shap_explainer.explain_local(X_test[:10])
show(shap_local)

# Get global summary
shap_global = shap_explainer.explain_global()
show(shap_global)
```


### Sensitivity Analysis


```python
from interpret.blackbox import MorrisSensitivity
from interpret import show

# Create sensitivity analyzer
morris = MorrisSensitivity(
    predict_fn=lambda x: rf.predict_proba(x)[:, 1],  # Probability of class 1
    data=X_train,
    feature_names=data.feature_names
)

# Perform sensitivity analysis
sensitivity = morris.explain_global(
    num_trajectories=20,
    grid_jump=0.5
)
show(sensitivity)
```


### Comparing Explanation Methods


```python
# Compare LIME and SHAP on the same instances
instances = X_test[:3]
true_labels = y_test[:3]

# LIME explanations
lime_exp = lime.explain_local(instances, true_labels, name="LIME")
show(lime_exp)

# SHAP explanations
shap_exp = shap_explainer.explain_local(instances, name="SHAP")
show(shap_exp)

# Global methods
pdp_exp = pdp.explain_global(name="Partial Dependence")
show(pdp_exp)

sensitivity_exp = morris.explain_global(name="Morris Sensitivity")
show(sensitivity_exp)
```