
# shap

Model interpretability and explainability using SHAP (SHapley Additive exPlanations). Use this skill when explaining machine learning model predictions, computing feature importance, generating SHAP plots (waterfall, beeswarm, bar, scatter, force, heatmap), debugging models, analyzing model bias or fairness, comparing models, or implementing explainable AI. Works with tree-based models (XGBoost, LightGBM, Random Forest), deep learning (TensorFlow, PyTorch), linear models, and any black-box model.

**Overall score: 79** (1.29x)

- Quality: 75% (does it follow best practices?)
- Impact: 80% (1.29x)

Average score across 6 eval scenarios.

Security (by Snyk): Passed, no known issues.

Optimize this skill with Tessl:

    npx tessl skill review --optimize ./scientific-skills/shap/SKILL.md

## Evaluation results

### Customer Churn Explainability Report (87%, +2%)

*Explainer selection and multi-plot analysis*

| Criteria | Without context | With context |
| --- | --- | --- |
| TreeExplainer used | 100% | 100% |
| Global bar plot | 100% | 100% |
| Beeswarm plot used | 100% | 100% |
| Waterfall for individual | 100% | 100% |
| Scatter with color | 100% | 100% |
| show=False on saved plots | 100% | 100% |
| High-quality save settings | 30% | 50% |
| plt.close() after save | 100% | 100% |
| uv pip install used | 0% | 0% |
| Correct plot order | 100% | 100% |
| Feature SHAP summary printed | 100% | 100% |
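Several criteria above ("show=False on saved plots", "High-quality save settings", "plt.close() after save") describe one saving pattern. A matplotlib-only sketch of that hygiene, with the feature names invented for illustration:

```python
# Plot-saving hygiene: suppress interactive display, save at high quality,
# then close the figure so memory is freed in batch runs.
import os
import tempfile

import matplotlib
matplotlib.use("Agg")  # headless backend for scripted runs
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.bar(["age", "income", "tenure"], [0.42, 0.31, 0.12])  # hypothetical values
ax.set_ylabel("mean |SHAP value|")

out = os.path.join(tempfile.mkdtemp(), "importance.png")
# With SHAP's own plots, pass show=False (e.g. shap.plots.beeswarm(sv, show=False))
# so the figure stays open for savefig instead of being displayed and discarded.
plt.savefig(out, dpi=300, bbox_inches="tight")  # high-quality save settings
plt.close(fig)  # essential when saving many plots in a loop
```

The same `savefig(..., dpi=300, bbox_inches="tight")` call works unchanged after any SHAP plot drawn with `show=False`.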

### Loan Approval Model Fairness Audit (90%, +24%)

*Fairness analysis and cohort comparison*

| Criteria | Without context | With context |
| --- | --- | --- |
| LinearExplainer for logistic regression | 100% | 100% |
| Background dataset size | 100% | 100% |
| Cohort dict bar plot | 0% | 100% |
| Per-group beeswarm plots | 30% | 100% |
| Protected attribute importance reported | 100% | 100% |
| Age group importance reported | 100% | 100% |
| Proxy feature analysis | 100% | 100% |
| Plots saved programmatically | 80% | 100% |
| uv pip install used | 0% | 0% |
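The "Cohort dict bar plot" and per-group criteria boil down to splitting SHAP values by a protected attribute and comparing per-group importance. A pure-numpy sketch of that comparison, with group labels and feature names hypothetical:

```python
# Cohort comparison for a fairness audit: mean |SHAP| per feature, per group.
import numpy as np

rng = np.random.default_rng(1)
shap_values = rng.normal(size=(500, 3))   # stand-in for explainer output
group = rng.integers(0, 2, size=500)      # e.g. 0 = "under 40", 1 = "40 and over"
features = ["income", "debt_ratio", "age"]

cohort_importance = {
    g: np.abs(shap_values[group == g]).mean(axis=0) for g in (0, 1)
}
for g, imp in cohort_importance.items():
    print(g, dict(zip(features, imp.round(3))))

# With a shap.Explanation `sv`, the same comparison is one plotting call:
# shap.plots.bar({"under 40": sv[group == 0], "40 and over": sv[group == 1]})
```

Large gaps between cohorts in this table are what the audit's proxy-feature analysis then investigates.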

### SHAP-Powered MLflow Experiment Tracking and Model Validation (81%, +25%)

*MLflow integration and model debugging*

| Criteria | Without context | With context |
| --- | --- | --- |
| MLflow figure logged | 0% | 100% |
| shap_{feature} metric naming | 100% | 100% |
| TreeExplainer for gradient boosting | 100% | 100% |
| Feature clustering | 0% | 50% |
| Misclassified samples explained | 100% | 100% |
| Suspicious feature flagged | 100% | 100% |
| show=False with savefig | 0% | 37% |
| mlrun_summary.txt written | 100% | 100% |
| Large dataset sampling | 42% | 100% |
| uv pip install used | 0% | 0% |
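The "Large dataset sampling" criterion rewards explaining a representative subsample rather than every row, since SHAP computation scales with sample count. A numpy sketch of the idea (sizes arbitrary):

```python
# Subsample rows before computing SHAP values on a large dataset.
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(100_000, 10))  # full dataset: too large to explain row-by-row

# Explain a representative subsample instead of all rows.
idx = rng.choice(len(X), size=1000, replace=False)
X_sample = X[idx]

# shap ships a helper with the same intent: X_sample = shap.sample(X, 1000)
print(X_sample.shape)
```

The sampled matrix is then passed to the explainer in place of `X`; global plots on 1,000 well-sampled rows are usually indistinguishable from the full-data versions.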

### Explaining a Medical Diagnosis Support Model (88%, +20%)

*Black-box model explanation with KernelExplainer*

| Criteria | Without context | With context |
| --- | --- | --- |
| KernelExplainer used | 100% | 100% |
| shap.sample() for background | 0% | 100% |
| Background size 50-300 | 100% | 100% |
| Heatmap for multi-sample | 100% | 100% |
| Force plot with matplotlib | 100% | 100% |
| Scatter with color for interaction | 0% | 100% |
| show=False on saved plots | 100% | 100% |
| High-quality save settings | 50% | 50% |
| plt.close() after saves | 100% | 100% |
| uv pip install used | 0% | 0% |

### Improving a Car Insurance Pricing Model (69%, +13%)

*SHAP-guided feature engineering and model comparison*

| Criteria | Without context | With context |
| --- | --- | --- |
| model_output probability | 100% | 100% |
| Scatter for nonlinear detection | 33% | 100% |
| Scatter color for interaction detection | 30% | 100% |
| Comparison bar with dict | 0% | 0% |
| Transformation applied | 100% | 100% |
| Interaction term created | 100% | 100% |
| TreeExplainer for XGBoost | 100% | 100% |
| Global before local | 20% | 20% |
| High-quality save settings | 25% | 25% |
| Report written | 100% | 100% |
| uv pip install used | 50% | 0% |
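The "Transformation applied" and "Interaction term created" criteria follow from what the scatter plots reveal: a curved dependence suggests a transformation, and color separation suggests an interaction. A pandas sketch with hypothetical column names:

```python
# SHAP-guided feature engineering: encode the nonlinearity and interaction
# that the scatter plots exposed, so a simpler model can exploit them.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "annual_mileage": [8_000, 25_000, 12_000, 40_000],
    "driver_age": [22, 45, 31, 19],
})

# Transformation: log-scale a right-skewed feature flagged by its scatter plot.
df["log_mileage"] = np.log1p(df["annual_mileage"])

# Interaction term: a scatter colored by driver_age suggested mileage risk
# depends on age, so add the product feature explicitly.
df["mileage_x_age"] = df["annual_mileage"] * df["driver_age"]
print(df.columns.tolist())
```

Retraining on the augmented frame and comparing mean absolute SHAP values before and after is the model-comparison step the scenario then scores.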

### Building a Real-Time Loan Decision Explanation Service (70%, +26%)

*Production explanation service with caching*

| Criteria | Without context | With context |
| --- | --- | --- |
| joblib explainer save | 0% | 100% |
| joblib explainer load | 0% | 100% |
| ExplanationService class | 100% | 100% |
| predict_with_explanation method | 0% | 20% |
| get_top_features method | 0% | 20% |
| Force plot with matplotlib | 100% | 100% |
| TreeExplainer for Random Forest | 100% | 100% |
| show=False with savefig | 50% | 50% |
| plt.close() after saves | 100% | 100% |
| uv pip install used | 0% | 0% |
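The caching criteria above ("joblib explainer save/load", the `ExplanationService` class) amount to fitting once at deploy time and loading at startup. A sketch using only joblib and scikit-learn; in production the fitted SHAP explainer would be dumped and loaded the same way as the model here, and `get_top_features` would rank features by absolute SHAP value:

```python
# Persistence sketch for a real-time explanation service (paths hypothetical).
import os
import tempfile

import joblib
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X[:, 0] > 0).astype(int)
model = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

path = os.path.join(tempfile.mkdtemp(), "model.joblib")
joblib.dump(model, path)          # save once at deploy time
loaded_model = joblib.load(path)  # load once at service startup

class ExplanationService:
    def __init__(self, model):
        self.model = model
        # In production: self.explainer = shap.TreeExplainer(model), also
        # persisted with joblib so it is never rebuilt per request.

    def predict_with_explanation(self, x):
        # Would return (prediction, shap values) once the explainer is attached.
        return self.model.predict(x)

service = ExplanationService(loaded_model)
print(service.predict_with_explanation(X[:2]))
```

Rebuilding a `TreeExplainer` per request is cheap for small trees but wasteful at scale, which is why the scenario scores the save/load pair separately.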

- Repository: K-Dense-AI/claude-scientific-skills
- Agent: Claude Code
- Model: Claude Sonnet 4.6
