tessl/pypi-fairlearn

A Python package to assess and improve fairness of machine learning models

docs/postprocessing.md

Postprocessing

Postprocessing techniques that adjust trained model outputs to satisfy fairness constraints without retraining. These methods optimize decision thresholds across groups to achieve fairness while working with any pre-trained classifier.
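To make the core idea concrete, here is a minimal, library-free sketch (all names are illustrative, not fairlearn API) showing how per-group thresholds change selection rates without retraining or modifying the underlying model's scores:

```python
def selection_rate(scores, threshold):
    """Fraction of examples predicted positive at a given threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

# Scores from a hypothetical pre-trained model, grouped by sensitive feature
scores_by_group = {
    "a": [0.9, 0.8, 0.6, 0.4, 0.2],
    "b": [0.7, 0.5, 0.3, 0.2, 0.1],
}

# A single global threshold selects the groups at different rates
global_t = 0.5
print({g: selection_rate(s, global_t) for g, s in scores_by_group.items()})
# group "a" is selected at rate 0.6, group "b" at 0.4

# Per-group thresholds can equalize selection rates (demographic parity)
group_thresholds = {"a": 0.6, "b": 0.3}
print({g: selection_rate(s, group_thresholds[g])
       for g, s in scores_by_group.items()})
# both groups are now selected at rate 0.6
```

ThresholdOptimizer automates this search over thresholds, subject to the chosen fairness constraint and performance objective.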

Capabilities

ThresholdOptimizer

Optimizes decision thresholds to satisfy fairness constraints by adjusting the classification boundary for different groups. This approach can achieve various fairness criteria without retraining the underlying model.

class ThresholdOptimizer:
    def __init__(self, *, estimator=None, constraints="demographic_parity",
                 objective="accuracy_score", grid_size=1000,
                 flip=False, prefit=False, predict_method="auto"):
        """
        Optimize decision thresholds to satisfy fairness constraints.

        Parameters:
        - estimator: sklearn estimator, pre-trained classifier (optional if prefit=True)
        - constraints: str or Moment, fairness constraint to satisfy
          Options: "demographic_parity", "equalized_odds", "false_positive_rate_parity",
          "false_negative_rate_parity", "true_positive_rate_parity", "true_negative_rate_parity"
        - objective: str, performance objective to optimize under the constraint
          Options: "accuracy_score", "balanced_accuracy_score" (any constraint);
          "selection_rate", "true_positive_rate", "true_negative_rate" (not with "equalized_odds")
        - grid_size: int, number of threshold values to consider
        - flip: bool, whether to allow flipping the decision when it improves the objective
        - prefit: bool, whether the estimator is already fitted
        - predict_method: str, estimator method used to obtain scores
          ("auto", "predict_proba", "decision_function", "predict")
        """
    
    def fit(self, X, y, *, sensitive_features, sample_weight=None, **kwargs):
        """
        Fit the threshold optimizer.
        
        Parameters:
        - X: array-like, feature matrix
        - y: array-like, true target values
        - sensitive_features: array-like, sensitive feature values
        - sample_weight: array-like, optional sample weights
        - **kwargs: additional arguments passed to estimator.fit() if not prefit
        
        Returns:
        self
        """
    
    def predict(self, X, *, sensitive_features, random_state=None):
        """
        Make predictions using optimized thresholds.
        
        Parameters:
        - X: array-like, feature matrix
        - sensitive_features: array-like, sensitive feature values for test data
        - random_state: int, random state for reproducible results
        
        Returns:
        array-like: Binary predictions using optimized thresholds
        """
    
    @property
    def interpolated_thresholder_(self):
        """The fitted threshold interpolation object."""
        
    @property
    def solution_(self):
        """Details of the optimization solution."""

Usage Example

from fairlearn.postprocessing import ThresholdOptimizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Train a base model (assumes X_train, y_train, and the sensitive feature arrays are already defined)
base_model = LogisticRegression()
base_model.fit(X_train, y_train)

# Create threshold optimizer for demographic parity
threshold_optimizer = ThresholdOptimizer(
    estimator=base_model,
    constraints="demographic_parity",
    objective="accuracy_score",
    prefit=True  # Model is already trained
)

# Fit the threshold optimizer
threshold_optimizer.fit(
    X_train, y_train,
    sensitive_features=sensitive_features_train
)

# Make fair predictions
fair_predictions = threshold_optimizer.predict(
    X_test,
    sensitive_features=sensitive_features_test
)

Plotting ThresholdOptimizer Results

Visualize the trade-offs discovered by the threshold optimizer.

def plot_threshold_optimizer(threshold_optimizer, *, ax=None, show_plot=True):
    """
    Plot the trade-off curve from threshold optimization.
    
    Parameters:
    - threshold_optimizer: fitted ThresholdOptimizer object
    - ax: matplotlib axis, optional axis to plot on
    - show_plot: bool, whether to display the plot
    
    Returns:
    None; the plot is displayed when show_plot=True
    """

Plotting Example

from fairlearn.postprocessing import plot_threshold_optimizer
import matplotlib.pyplot as plt

# After fitting, suppress the immediate display so the title can be set first
plot_threshold_optimizer(threshold_optimizer, show_plot=False)
plt.title("Fairness-Accuracy Trade-off")
plt.show()

Constraint Options

Demographic Parity

Ensures equal positive prediction rates across groups.

# Using string constraint
optimizer = ThresholdOptimizer(
    constraints="demographic_parity",
    objective="accuracy_score"
)

# The constraint ensures P(Y_hat=1 | A=a) is equal for all groups a
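As a quick check of this criterion, per-group selection rates can be compared directly. A minimal sketch in plain Python (values and helper name are illustrative, not fairlearn API):

```python
# Predictions and group membership for a toy batch (illustrative values)
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

def group_selection_rates(y_pred, groups):
    """Mean prediction per group, i.e. P(Y_hat=1 | A=a)."""
    rates = {}
    for g in set(groups):
        vals = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(vals) / len(vals)
    return rates

rates = group_selection_rates(y_pred, groups)
# Demographic parity difference: largest gap in selection rates across groups
dp_difference = max(rates.values()) - min(rates.values())
print(rates, dp_difference)  # group "a": 0.75, group "b": 0.25, gap 0.5
```

A demographic-parity-constrained optimizer drives this gap toward zero.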

Equalized Odds

Ensures equal true positive and false positive rates across groups.

optimizer = ThresholdOptimizer(
    constraints="equalized_odds", 
    objective="balanced_accuracy_score"
)

# The constraint ensures both:
# - P(Y_hat=1 | Y=1, A=a) is equal for all groups a
# - P(Y_hat=1 | Y=0, A=a) is equal for all groups a
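Both conditions can be checked by computing per-group true positive and false positive rates. A hedged, library-free sketch (names and data are illustrative):

```python
# True labels, predictions, and group membership (illustrative values)
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

def group_rates(y_true, y_pred, groups):
    """Per-group TPR P(Y_hat=1|Y=1,A=a) and FPR P(Y_hat=1|Y=0,A=a)."""
    out = {}
    for g in set(groups):
        rows = [(t, p) for t, p, gg in zip(y_true, y_pred, groups) if gg == g]
        pos = [p for t, p in rows if t == 1]
        neg = [p for t, p in rows if t == 0]
        out[g] = {"tpr": sum(pos) / len(pos), "fpr": sum(neg) / len(neg)}
    return out

r = group_rates(y_true, y_pred, groups)
print(r)
# group "a": tpr 0.5, fpr 0.5; group "b": tpr 1.0, fpr 0.0
# Equalized odds requires both rates to match across groups
```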

Equal Opportunity

Ensures equal true positive rates across groups. In fairlearn this constraint is specified as "true_positive_rate_parity".

optimizer = ThresholdOptimizer(
    constraints="true_positive_rate_parity",
    objective="accuracy_score"
)

# The constraint ensures P(Y_hat=1 | Y=1, A=a) is equal for all groups a

Objective Functions

Accuracy-based Objectives

# Standard accuracy
ThresholdOptimizer(objective="accuracy_score")

# Balanced accuracy (average of recall for each class)
ThresholdOptimizer(objective="balanced_accuracy_score")

Selection Rate Objective

# Optimize the overall selection rate (not available with "equalized_odds")
ThresholdOptimizer(objective="selection_rate")

Rate-based Objectives

# Optimize the true positive rate or true negative rate
# (not available with "equalized_odds")
ThresholdOptimizer(objective="true_positive_rate")
ThresholdOptimizer(objective="true_negative_rate")

The objective must be one of the predefined strings above; arbitrary callables are not accepted.

Advanced Usage

Working with Probability Predictions

The ThresholdOptimizer works with models that output probabilities:

from sklearn.ensemble import RandomForestClassifier

# Train probabilistic model
rf_model = RandomForestClassifier(n_estimators=100)
rf_model.fit(X_train, y_train)

# Threshold optimizer will use predict_proba internally
optimizer = ThresholdOptimizer(
    estimator=rf_model,
    constraints="demographic_parity",
    prefit=True
)

optimizer.fit(X_train, y_train, sensitive_features=A_train)

Multiple Sensitive Features

Handle multiple sensitive attributes simultaneously:

import pandas as pd

# Sensitive features as a DataFrame with multiple columns
sensitive_features = pd.DataFrame({
    'gender': ['M', 'F', 'M', 'F'],
    'age_group': ['young', 'old', 'young', 'old']
})

# An estimator is still required; base_model is a pre-trained classifier
optimizer = ThresholdOptimizer(
    estimator=base_model,
    constraints="demographic_parity",
    prefit=True
)
optimizer.fit(X_train, y_train, sensitive_features=sensitive_features)

# Predictions will account for all sensitive feature combinations
predictions = optimizer.predict(X_test, sensitive_features=sensitive_features_test)

Controlling Randomization

For deterministic results when using randomized thresholding:

predictions = optimizer.predict(
    X_test, 
    sensitive_features=A_test,
    random_state=42
)
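The randomness comes from interpolated thresholding: to hit a target selection rate that no single cutoff achieves exactly, each example is classified with one of two thresholds chosen at random. A minimal sketch of the mechanism (illustrative only, not fairlearn internals):

```python
import random

def randomized_threshold_predict(scores, t_low, t_high, p_low, rng):
    """Classify each score with threshold t_low with probability p_low,
    otherwise with t_high. Mixing two thresholds reaches selection rates
    that no single deterministic threshold can achieve exactly."""
    return [
        1 if s >= (t_low if rng.random() < p_low else t_high) else 0
        for s in scores
    ]

scores = [0.95, 0.72, 0.55, 0.31, 0.12]

# Fixing the seed makes the randomized decisions reproducible,
# which is what random_state provides in predict()
rng = random.Random(42)
first = randomized_threshold_predict(scores, 0.3, 0.7, 0.5, rng)
rng = random.Random(42)
second = randomized_threshold_predict(scores, 0.3, 0.7, 0.5, rng)
print(first == second)  # True: same seed, same predictions
```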

Accessing Optimization Details

# Get details about the optimization solution
solution = optimizer.solution_
print(f"Objective value: {solution['objective']}")
print(f"Constraint violation: {solution['constraint_violation']}")

# Access the interpolated thresholder
thresholder = optimizer.interpolated_thresholder_
print(f"Thresholds: {thresholder.interpolation_dict}")

Integration with Assessment

Combine with fairness assessment tools to evaluate results:

from fairlearn.metrics import MetricFrame, demographic_parity_difference

# Get predictions from optimized model
optimized_predictions = threshold_optimizer.predict(
    X_test, sensitive_features=A_test
)

# Assess fairness
fairness_frame = MetricFrame(
    metrics={
        'accuracy': lambda y, p: (y == p).mean(),
        'selection_rate': lambda y, p: p.mean()
    },
    y_true=y_test,
    y_pred=optimized_predictions,
    sensitive_features=A_test
)

print("Fairness assessment:")
print(fairness_frame.by_group)
dp_diff = demographic_parity_difference(
    y_test, optimized_predictions, sensitive_features=A_test
)
print(f"Demographic parity difference: {dp_diff}")

Best Practices

Model Selection

  1. Base Model Quality: Start with a well-performing base model
  2. Probability Calibration: Ensure base model produces well-calibrated probabilities
  3. Validation: Use separate validation set for threshold optimization
# Recommended workflow
from sklearn.model_selection import train_test_split
from sklearn.calibration import CalibratedClassifierCV

# Split data into train/validation/test (split sensitive features A alongside)
X_train, X_temp, y_train, y_temp, A_train, A_temp = train_test_split(
    X, y, A, test_size=0.4, random_state=0
)
X_val, X_test, y_val, y_test, A_val, A_test = train_test_split(
    X_temp, y_temp, A_temp, test_size=0.5, random_state=0
)

# Train and calibrate base model
base_model = LogisticRegression()
calibrated_model = CalibratedClassifierCV(base_model, cv=3)
calibrated_model.fit(X_train, y_train)

# Optimize thresholds on validation set
optimizer = ThresholdOptimizer(
    estimator=calibrated_model,
    constraints="demographic_parity", 
    prefit=True
)
optimizer.fit(X_val, y_val, sensitive_features=A_val)

# Final evaluation on test set
final_predictions = optimizer.predict(X_test, sensitive_features=A_test)

Constraint Selection

Choose appropriate constraints based on your fairness requirements:

  • Demographic Parity: When equal representation is important
  • Equal Opportunity: When avoiding discrimination against qualified individuals is key
  • Equalized Odds: When both false positive and false negative rates matter
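One pragmatic way to operationalize this choice is to measure which disparity is largest in the unmitigated model and target it. A hedged sketch (the numbers and metric names are illustrative, not measured values):

```python
# Disparities measured on an unmitigated model (illustrative numbers)
disparities = {
    "selection_rate_gap": 0.22,   # motivates "demographic_parity"
    "tpr_gap": 0.05,              # motivates "true_positive_rate_parity"
    "tpr_and_fpr_gap": 0.09,      # motivates "equalized_odds"
}

# Map each dominant disparity to the constraint that addresses it
constraint_for = {
    "selection_rate_gap": "demographic_parity",
    "tpr_gap": "true_positive_rate_parity",
    "tpr_and_fpr_gap": "equalized_odds",
}

worst = max(disparities, key=disparities.get)
chosen_constraint = constraint_for[worst]
print(chosen_constraint)  # -> "demographic_parity"
```

In practice this heuristic should be weighed against the legal and ethical requirements of the application, not applied mechanically.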

Performance Monitoring

Monitor both fairness and accuracy after threshold optimization:

def evaluate_postprocessed_model(optimizer, X_test, y_test, A_test):
    predictions = optimizer.predict(X_test, sensitive_features=A_test)
    
    # Accuracy metrics
    accuracy = (y_test == predictions).mean()
    
    # Fairness metrics
    dp_diff = demographic_parity_difference(y_test, predictions, sensitive_features=A_test)
    
    return {
        'accuracy': accuracy,
        'demographic_parity_difference': dp_diff,
        'predictions': predictions
    }

Install with Tessl CLI

npx tessl i tessl/pypi-fairlearn
