Model interpretability and explainability using SHAP (SHapley Additive exPlanations). Use this skill when explaining machine learning model predictions, computing feature importance, generating SHAP plots (waterfall, beeswarm, bar, scatter, force, heatmap), debugging models, analyzing model bias or fairness, comparing models, or implementing explainable AI. Works with tree-based models (XGBoost, LightGBM, Random Forest), deep learning (TensorFlow, PyTorch), linear models, and any black-box model.
Score: 79 · Follows best practices: 75% · Impact: 1.29× · Average score across 6 eval scenarios: 80% · Passed · No known issues

## Explainer selection and multi-plot analysis
| Criterion | Base | Optimized |
| --- | --- | --- |
| `TreeExplainer` used | 100% | 100% |
| Global bar plot | 100% | 100% |
| Beeswarm plot used | 100% | 100% |
| Waterfall for individual | 100% | 100% |
| Scatter with color | 100% | 100% |
| `show=False` on saved plots | 100% | 100% |
| High-quality save settings | 30% | 50% |
| `plt.close()` after save | 100% | 100% |
| `uv pip install` used | 0% | 0% |
| Correct plot order | 100% | 100% |
| Feature SHAP summary printed | 100% | 100% |
## Fairness analysis and cohort comparison

| Criterion | Base | Optimized |
| --- | --- | --- |
| `LinearExplainer` for logistic regression | 100% | 100% |
| Background dataset size | 100% | 100% |
| Cohort dict bar plot | 0% | 100% |
| Per-group beeswarm plots | 30% | 100% |
| Protected attribute importance reported | 100% | 100% |
| Age group importance reported | 100% | 100% |
| Proxy feature analysis | 100% | 100% |
| Plots saved programmatically | 80% | 100% |
| `uv pip install` used | 0% | 0% |
## MLflow integration and model debugging

| Criterion | Base | Optimized |
| --- | --- | --- |
| MLflow figure logged | 0% | 100% |
| `shap_{feature}` metric naming | 100% | 100% |
| `TreeExplainer` for gradient boosting | 100% | 100% |
| Feature clustering | 0% | 50% |
| Misclassified samples explained | 100% | 100% |
| Suspicious feature flagged | 100% | 100% |
| `show=False` with `savefig` | 0% | 37% |
| `mlrun_summary.txt` written | 100% | 100% |
| Large dataset sampling | 42% | 100% |
| `uv pip install` used | 0% | 0% |
## Black-box model explanation with KernelExplainer

| Criterion | Base | Optimized |
| --- | --- | --- |
| `KernelExplainer` used | 100% | 100% |
| `shap.sample()` for background | 0% | 100% |
| Background size 50-300 | 100% | 100% |
| Heatmap for multi-sample | 100% | 100% |
| Force plot with matplotlib | 100% | 100% |
| Scatter with color for interaction | 0% | 100% |
| `show=False` on saved plots | 100% | 100% |
| High-quality save settings | 50% | 50% |
| `plt.close()` after saves | 100% | 100% |
| `uv pip install` used | 0% | 0% |
## SHAP-guided feature engineering and model comparison

| Criterion | Base | Optimized |
| --- | --- | --- |
| `model_output` probability | 100% | 100% |
| Scatter for nonlinear detection | 33% | 100% |
| Scatter color for interaction detection | 30% | 100% |
| Comparison bar with dict | 0% | 0% |
| Transformation applied | 100% | 100% |
| Interaction term created | 100% | 100% |
| `TreeExplainer` for XGBoost | 100% | 100% |
| Global before local | 20% | 20% |
| High-quality save settings | 25% | 25% |
| Report written | 100% | 100% |
| `uv pip install` used | 50% | 0% |
## Production explanation service with caching

| Criterion | Base | Optimized |
| --- | --- | --- |
| `joblib` explainer save | 0% | 100% |
| `joblib` explainer load | 0% | 100% |
| `ExplanationService` class | 100% | 100% |
| `predict_with_explanation` method | 0% | 20% |
| `get_top_features` method | 0% | 20% |
| Force plot with matplotlib | 100% | 100% |
| `TreeExplainer` for Random Forest | 100% | 100% |
| `show=False` with `savefig` | 50% | 50% |
| `plt.close()` after saves | 100% | 100% |
| `uv pip install` used | 0% | 0% |
Commit: `086de41`