# Interpret

A comprehensive machine learning interpretability library that provides tools for training interpretable models and explaining both glassbox and blackbox machine learning systems. Interpret incorporates state-of-the-art techniques including the Explainable Boosting Machine (EBM), SHAP, LIME, and sensitivity analysis, with built-in interactive visualization capabilities.

## Package Information

- **Package Name**: interpret
- **Language**: Python
- **Installation**: `pip install interpret`

## Core Imports

```python
import interpret
```

Common imports for interpretable models:

```python
from interpret.glassbox import ExplainableBoostingClassifier, ExplainableBoostingRegressor
```

Common imports for explaining blackbox models:

```python
from interpret.blackbox import LimeTabular, ShapKernel, PartialDependence
```

Common imports for visualization:

```python
from interpret import show, preserve
```

## Basic Usage

```python
from interpret.glassbox import ExplainableBoostingClassifier
from interpret.blackbox import LimeTabular
from interpret import show
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

# Create sample data
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train an interpretable model
ebm = ExplainableBoostingClassifier(random_state=42)
ebm.fit(X_train, y_train)

# Get global explanations
global_explanation = ebm.explain_global()
show(global_explanation)

# Get local explanations
local_explanation = ebm.explain_local(X_test[:5], y_test[:5])
show(local_explanation)

# Explain a blackbox model
rf = RandomForestClassifier(random_state=42)
rf.fit(X_train, y_train)

lime = LimeTabular(predict_fn=rf.predict_proba, data=X_train)
lime_explanation = lime.explain_local(X_test[:5], y_test[:5])
show(lime_explanation)
```

## Architecture

Interpret follows a consistent architectural pattern organized around explanation types and provider systems:

- **Explainer Classes**: All explainers inherit from `ExplainerMixin` and implement `fit()` and `explain_*()` methods
- **Explanation Objects**: All explanations inherit from `ExplanationMixin`, providing consistent data access
- **Provider System**: Modular computation and visualization backends for scalability and customization
- **Extension System**: Plugin architecture supporting third-party explainers through entry points

## Capabilities

### Interpretable Models (Glassbox)

Inherently interpretable machine learning models that provide transparency by design, including the Explainable Boosting Machine (EBM), linear models, decision trees, and decision lists.

```python { .api }
class ExplainableBoostingClassifier:
    def __init__(self, **kwargs): ...
    def fit(self, X, y): ...
    def predict(self, X): ...
    def explain_global(self): ...
    def explain_local(self, X, y=None): ...

class ExplainableBoostingRegressor:
    def __init__(self, **kwargs): ...
    def fit(self, X, y): ...
    def predict(self, X): ...
    def explain_global(self): ...
    def explain_local(self, X, y=None): ...
```

[Interpretable Models](./glassbox.md)

### Blackbox Explanation

Model-agnostic explanation methods for any machine learning model, including LIME, SHAP, partial dependence plots, and sensitivity analysis.

```python { .api }
class LimeTabular:
    def __init__(self, predict_fn, data, **kwargs): ...
    def explain_local(self, X, y=None): ...

class ShapKernel:
    def __init__(self, predict_fn, data, **kwargs): ...
    def explain_local(self, X, y=None): ...

class PartialDependence:
    def __init__(self, predict_fn, data, **kwargs): ...
    def explain_global(self): ...
```

[Blackbox Explanation](./blackbox.md)

### Tree-Specific Explanation (Greybox)

Specialized explanation methods optimized for tree-based models, providing efficient and accurate explanations for decision trees, random forests, and gradient boosting models.

```python { .api }
class ShapTree:
    def __init__(self, model, data, **kwargs): ...
    def explain_local(self, X): ...
    def explain_global(self): ...
```

[Tree-Specific Explanation](./greybox.md)

### Data Analysis

Tools for understanding dataset characteristics and feature distributions to inform model selection and feature engineering decisions.

```python { .api }
class ClassHistogram:
    def __init__(self): ...
    def explain_data(self, X, y): ...

class Marginal:
    def __init__(self): ...
    def explain_data(self, X, y=None): ...
```

[Data Analysis](./data.md)

### Performance Evaluation

Comprehensive model performance analysis tools, including ROC curves, precision-recall curves, and regression metrics with interactive visualizations. Performance explainers wrap a model's prediction function at construction and are evaluated on held-out features and labels.

```python { .api }
class ROC:
    def __init__(self, predict_fn, **kwargs): ...
    def explain_perf(self, X, y, name=None): ...

class PR:
    def __init__(self, predict_fn, **kwargs): ...
    def explain_perf(self, X, y, name=None): ...

class RegressionPerf:
    def __init__(self, predict_fn, **kwargs): ...
    def explain_perf(self, X, y, name=None): ...
```

[Performance Evaluation](./performance.md)

### Visualization and Interaction

Interactive visualization system with multiple backends, preservation capabilities, and server management for dashboard applications.

```python { .api }
def show(explanation): ...
def preserve(explanation): ...
def set_visualize_provider(provider): ...
def init_show_server(): ...
def shutdown_show_server(): ...
```

[Visualization](./visualization.md)

### Privacy-Preserving ML

Differentially private machine learning models that provide formal privacy guarantees while maintaining interpretability.

```python { .api }
class DPExplainableBoostingClassifier:
    def __init__(self, epsilon=1.0, **kwargs): ...
    def fit(self, X, y): ...
    def explain_global(self): ...
```

[Privacy-Preserving ML](./privacy.md)

### Utilities and Advanced Features

Utility functions for data preprocessing, feature interaction analysis, synthetic data generation, and development tools.

```python { .api }
def measure_interactions(X, y): ...
def make_synthetic(n_samples): ...

class EBMPreprocessor:
    def __init__(self): ...
    def fit_transform(self, X): ...
```

[Utilities](./utils.md)

## Types

```python { .api }
class ExplainerMixin:
    """Abstract base class for all explainers."""
    def fit(self, X, y): ...
    def explain_global(self): ...
    def explain_local(self, X, y=None): ...

class ExplanationMixin:
    """Abstract base class for all explanations."""
    def data(self): ...
    def visualize(self): ...
```
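The mixin contract can be illustrated with a schematic, entirely hypothetical explainer that follows the same `fit`/`explain_*` and `data`/`visualize` shape without depending on interpret's actual base classes (all names below are invented for illustration):

```python
# Hypothetical sketch of the explainer/explanation contract; the class names
# are illustrative and do not come from the interpret package itself.

class MeanValueExplanation:
    """Explanation carrying per-feature means, mimicking ExplanationMixin."""

    explanation_type = "global"

    def __init__(self, names, scores):
        self._names = names
        self._scores = scores

    def data(self):
        # Consistent data access: a dict of feature names and scores
        return {"names": self._names, "scores": self._scores}

    def visualize(self):
        # Real explanations return a figure object; a string stands in here
        return f"means: {dict(zip(self._names, self._scores))}"


class MeanValueExplainer:
    """Toy explainer following the ExplainerMixin fit/explain_* pattern."""

    def fit(self, X, y=None):
        # Compute the column-wise mean of the training data
        self._means = [sum(col) / len(col) for col in zip(*X)]
        return self

    def explain_global(self):
        names = [f"feature_{i}" for i in range(len(self._means))]
        return MeanValueExplanation(names, self._means)


explainer = MeanValueExplainer().fit([[1.0, 2.0], [3.0, 4.0]])
explanation = explainer.explain_global()
```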