
Blackbox Explanation

Model-agnostic explanation methods provide interpretability for any machine learning model, regardless of its internal structure. They work by analyzing the relationship between the model's inputs and outputs, and therefore require no access to its internals.

Capabilities

LIME (Local Interpretable Model-agnostic Explanations)

Explains individual predictions by approximating the black-box model locally with an interpretable surrogate. LIME generates perturbations around the instance being explained and fits a weighted linear model to them; the linear coefficients serve as local feature contributions.
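
To make the mechanism concrete, here is a minimal sketch of the LIME idea in plain numpy/scikit-learn. It is not LimeTabular's actual internals: the Gaussian sampling, exponential kernel, and ridge surrogate below are simplifying assumptions, and the instance x is assumed to be a numpy array.

import numpy as np
from sklearn.linear_model import Ridge

def lime_sketch(predict_proba, X_train, x, n_samples=1000, kernel_width=0.75):
    # Perturb the instance with Gaussian noise scaled per feature (simplified).
    scale = X_train.std(axis=0) + 1e-12
    Z = x + np.random.randn(n_samples, x.shape[0]) * scale
    # Weight perturbations by proximity to the original instance.
    dist = np.linalg.norm((Z - x) / scale, axis=1)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)
    # Fit a weighted linear surrogate to the black-box output near x.
    y = predict_proba(Z)[:, 1]  # class-1 probability
    surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=weights)
    return surrogate.coef_  # local feature contributions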

class LimeTabular:
    def __init__(
        self,
        model,
        data,
        feature_names=None,
        feature_types=None,
        **kwargs
    ):
        """
        LIME explainer for tabular data.
        
        Parameters:
            model (callable): Model or prediction function (predict_proba for classification, predict for regression)
            data (array-like): Training data for generating perturbations
            feature_names (list, optional): Names for features
            feature_types (list, optional): Types for features
            **kwargs: Additional arguments passed to underlying LIME explainer
        """
    
    def explain_local(self, X, y=None, name=None, **kwargs):
        """
        Generate local explanations for instances.
        
        Parameters:
            X (array-like): Instances to explain
            y (array-like, optional): True labels
            name (str, optional): Name for explanation
            **kwargs: Additional arguments passed to underlying LIME explainer
            
        Returns:
            Explanation object with local feature contributions
        """

SHAP (SHapley Additive exPlanations)

A unified framework for model explanation based on cooperative game theory. SHAP provides both local and global explanations by computing Shapley values, which fairly attribute a prediction among the features.
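
For intuition, the Shapley value of feature i averages its marginal contribution over all subsets of the remaining features. The brute-force sketch below is exponential in the number of features, so it is viable only for tiny examples; KernelExplainer approximates it by sampling. Filling missing features with background means is a deliberate simplification, and predict_fn is assumed to return one scalar per row.

import numpy as np
from itertools import combinations
from math import comb

def shapley_values(predict_fn, x, background):
    """Exact Shapley values for one instance (exponential cost; tiny d only)."""
    d = len(x)
    base = background.mean(axis=0)  # "missing" features take background means

    def value(subset):
        z = base.copy()
        idx = list(subset)
        z[idx] = x[idx]  # "present" features take the instance's values
        return predict_fn(z[None, :])[0]

    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for size in range(d):
            for S in combinations(others, size):
                # Shapley kernel weight: |S|! (d - |S| - 1)! / d!
                weight = 1.0 / (d * comb(d - 1, size))
                phi[i] += weight * (value(S + (i,)) - value(S))
    return phi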

class ShapKernel:
    def __init__(
        self,
        predict_fn,
        data,
        link='identity',
        feature_names=None,
        **kwargs
    ):
        """
        SHAP kernel explainer for any model.
        
        Parameters:
            predict_fn (callable): Model prediction function
            data (array-like): Background data for computing baselines
            link (str): Link function ('identity', 'logit')
            feature_names (list, optional): Names for features
            **kwargs: Additional arguments for KernelExplainer
        """
    
    def explain_local(self, X, y=None, name=None, **kwargs):
        """
        Generate SHAP explanations for instances.
        
        Parameters:
            X (array-like): Instances to explain
            y (array-like, optional): True labels
            name (str, optional): Name for explanation
            **kwargs: Additional arguments for explain method
            
        Returns:
            Explanation object with SHAP values
        """
    
    def explain_global(self, name=None):
        """
        Generate global SHAP summary.
        
        Parameters:
            name (str, optional): Name for explanation
            
        Returns:
            Global explanation with feature importance rankings
        """

Partial Dependence

Shows the marginal effect of features on the prediction outcome by averaging out the effects of all other features. Useful for understanding how individual features or feature pairs influence model predictions.
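
The underlying computation is simple to state: fix the feature of interest at each grid value, substitute that value into every row of the data, and average the model's predictions. A minimal sketch, assuming predict_fn returns one scalar per row:

import numpy as np

def partial_dependence_curve(predict_fn, X, feature, grid_resolution=100):
    # Evenly spaced grid over the feature's observed range.
    grid = np.linspace(X[:, feature].min(), X[:, feature].max(), grid_resolution)
    curve = np.empty(grid_resolution)
    for i, v in enumerate(grid):
        Xv = X.copy()
        Xv[:, feature] = v                 # fix the feature at the grid value...
        curve[i] = predict_fn(Xv).mean()   # ...and average over all other features
    return grid, curve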

class PartialDependence:
    def __init__(
        self,
        predict_fn,
        data,
        feature_names=None,
        feature_types=None,
        sampler=None,
        **kwargs
    ):
        """
        Partial dependence explainer.
        
        Parameters:
            predict_fn (callable): Model prediction function
            data (array-like): Training data
            feature_names (list, optional): Names for features
            feature_types (list, optional): Types for features
            sampler (callable, optional): Custom sampling strategy
            **kwargs: Additional arguments
        """
    
    def explain_global(self, name=None, features=None, interactions=None, grid_resolution=100, **kwargs):
        """
        Generate partial dependence plots.
        
        Parameters:
            name (str, optional): Name for explanation
            features (list, optional): Features to analyze
            interactions (list, optional): Feature pairs for interaction plots
            grid_resolution (int): Resolution of feature grid
            **kwargs: Additional arguments
            
        Returns:
            Global explanation with partial dependence curves
        """

Sensitivity Analysis

Morris sensitivity analysis screens features by computing elementary effects: the model's response to one-at-a-time input changes, averaged over random points in the input space. Useful for cheaply identifying the most influential features and flagging non-linear or interaction effects.
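
The core computation can be sketched as below. This is a simplified radial one-at-a-time design rather than the full Morris trajectory construction, and predict_fn is again assumed to return one scalar per row.

import numpy as np

def morris_mu_star(predict_fn, bounds, num_trajectories=10, delta=0.5):
    # bounds: (lo, hi) arrays of per-feature minima and maxima.
    lo, hi = bounds
    d = len(lo)
    effects = np.zeros((num_trajectories, d))
    for t in range(num_trajectories):
        # Random base point, leaving room to step each feature up by delta.
        x = lo + np.random.rand(d) * (hi - lo) * (1.0 - delta)
        base = predict_fn(x[None, :])[0]
        for i in range(d):
            x_step = x.copy()
            x_step[i] += delta * (hi[i] - lo[i])  # one-at-a-time change
            # Elementary effect, in units of the normalized feature range.
            effects[t, i] = abs(predict_fn(x_step[None, :])[0] - base) / delta
    return effects.mean(axis=0)  # mu*: mean absolute elementary effect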

class MorrisSensitivity:
    def __init__(
        self,
        predict_fn,
        data,
        feature_names=None,
        feature_types=None,
        **kwargs
    ):
        """
        Morris sensitivity analysis explainer.
        
        Parameters:
            predict_fn (callable): Model prediction function
            data (array-like): Training data for bounds
            feature_names (list, optional): Names for features
            feature_types (list, optional): Types for features
            **kwargs: Additional arguments
        """
    
    def explain_global(self, name=None, num_trajectories=10, grid_jump=0.5, **kwargs):
        """
        Generate Morris sensitivity analysis.
        
        Parameters:
            name (str, optional): Name for explanation
            num_trajectories (int): Number of Morris trajectories
            grid_jump (float): Size of grid jumps (0-1)
            **kwargs: Additional arguments
            
        Returns:
            Global explanation with sensitivity indices
        """

Usage Examples

Explaining a Random Forest with LIME

from interpret.blackbox import LimeTabular
from interpret import show
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split

# Load data and train model
data = load_wine()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=42
)

rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)

# Create LIME explainer
lime = LimeTabular(
    model=rf.predict_proba,
    data=X_train,
    feature_names=data.feature_names,
    class_names=data.target_names,  # forwarded to the underlying LIME explainer
    mode='classification'
)

# Explain individual predictions
explanation = lime.explain_local(X_test[:5], y_test[:5])
show(explanation)

Global Analysis with Partial Dependence

from interpret.blackbox import PartialDependence
from interpret import show

# Create partial dependence explainer; a scalar prediction function keeps the curves one-dimensional
pdp = PartialDependence(
    predict_fn=lambda x: rf.predict_proba(x)[:, 1],  # probability of class 1
    data=X_train,
    feature_names=data.feature_names
)

# Analyze main effects
pdp_global = pdp.explain_global(
    features=[0, 1, 2],  # First three features
    grid_resolution=50
)
show(pdp_global)

# Analyze interactions
pdp_interactions = pdp.explain_global(
    interactions=[(0, 1), (1, 2)],  # Feature pairs
    grid_resolution=25
)
show(pdp_interactions)

SHAP Analysis Workflow

from interpret.blackbox import ShapKernel
from interpret import show

# Create SHAP explainer
shap_explainer = ShapKernel(
    predict_fn=rf.predict_proba,
    data=X_train[:100],  # Sample background data
    feature_names=data.feature_names
)

# Get local explanations
shap_local = shap_explainer.explain_local(X_test[:10])
show(shap_local)

# Get global summary
shap_global = shap_explainer.explain_global()
show(shap_global)

Sensitivity Analysis

from interpret.blackbox import MorrisSensitivity
from interpret import show

# Create sensitivity analyzer
morris = MorrisSensitivity(
    predict_fn=lambda x: rf.predict_proba(x)[:, 1],  # Probability of class 1
    data=X_train,
    feature_names=data.feature_names
)

# Perform sensitivity analysis
sensitivity = morris.explain_global(
    num_trajectories=20,
    grid_jump=0.5
)
show(sensitivity)

Comparing Explanation Methods

# Compare LIME and SHAP on same instances
instances = X_test[:3]
true_labels = y_test[:3]

# LIME explanations
lime_exp = lime.explain_local(instances, true_labels, name="LIME")
show(lime_exp)

# SHAP explanations  
shap_exp = shap_explainer.explain_local(instances, name="SHAP")
show(shap_exp)

# Global methods
pdp_exp = pdp.explain_global(name="Partial Dependence")
show(pdp_exp)

sensitivity_exp = morris.explain_global(name="Morris Sensitivity")
show(sensitivity_exp)

Install with Tessl CLI

npx tessl i tessl/pypi-interpret
