
shap

Model interpretability and explainability using SHAP (SHapley Additive exPlanations). Use this skill when explaining machine learning model predictions, computing feature importance, generating SHAP plots (waterfall, beeswarm, bar, scatter, force, heatmap), debugging models, analyzing model bias or fairness, comparing models, or implementing explainable AI. Works with tree-based models (XGBoost, LightGBM, Random Forest), deep learning (TensorFlow, PyTorch), linear models, and any black-box model.

Score: 79

Quality: 75%
Does it follow best practices?

Impact: 80% (1.29x)
Average score across 6 eval scenarios

Security (by Snyk): Passed
No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/shap/SKILL.md

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that clearly defines its scope around SHAP-based model interpretability, provides explicit trigger conditions with a 'Use this skill when...' clause, and includes rich natural keywords spanning plot types, frameworks, and use cases. It is highly specific, complete, and distinctive, making it easy for Claude to select appropriately from a large skill set.

Dimension scores:

Specificity (3 / 3): Lists multiple specific concrete actions: explaining predictions, computing feature importance, generating specific SHAP plot types (waterfall, beeswarm, bar, scatter, force, heatmap), debugging models, analyzing bias/fairness, comparing models. Also enumerates specific compatible model types.

Completeness (3 / 3): Clearly answers both 'what' (model interpretability using SHAP, computing feature importance, generating plots, debugging, bias analysis) and 'when' with explicit trigger guidance ('Use this skill when explaining machine learning model predictions, computing feature importance, generating SHAP plots...').

Trigger Term Quality (3 / 3): Excellent coverage of natural terms users would say: 'SHAP', 'feature importance', 'explainability', 'interpretability', 'model bias', 'fairness', specific plot types, specific framework names (XGBoost, LightGBM, TensorFlow, PyTorch), 'explainable AI', 'black-box model'. These are all terms a user would naturally use when needing this skill.

Distinctiveness / Conflict Risk (3 / 3): Highly distinctive with a clear niche around SHAP-based model interpretability. The specific mention of SHAP, particular plot types, and the explainability domain make it very unlikely to conflict with other skills like general ML training or data visualization skills.

Total: 12 / 12

Passed

Implementation: 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill is highly actionable with excellent executable code examples covering many use cases, but it is far too verbose — it explains concepts Claude already knows, repeats information across sections, and includes content inline that should be in reference files. Workflow clarity suffers because most workflows defer to a reference file without providing validation checkpoints in the main skill. The progressive disclosure structure is reasonable in concept but undermined by the bloated main file and missing bundle files.

Suggestions

Cut the 'Overview', 'When to Use This Skill', 'Key Concepts', and 'Additional Resources' sections entirely — Claude already knows what SHAP is and when to use it. This alone would halve the file length.

Move 'Performance Optimization', 'Troubleshooting', 'Integration with Other Tools', and 'Best Practices Summary' into reference files to keep SKILL.md focused on the quick start and core patterns.

Add explicit validation checkpoints to workflows — e.g., after computing SHAP values, verify the additivity property (shap_values sum ≈ prediction - base_value) before proceeding to visualization.

Remove the 'Reference Documentation' section's verbose descriptions of each file's contents — replace with a simple table or bullet list of file names and one-line descriptions.
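The additivity checkpoint suggested above can be sketched without the shap library itself. For a linear model with independent features, the exact SHAP value of feature j is w_j * (x_j - E[x_j]) (the same attribution shap.LinearExplainer produces), so the check reduces to pure NumPy. All names and the synthetic data below are hypothetical, for illustration only:

```python
import numpy as np

# Synthetic linear model f(x) = w·x + b on random data (assumed setup)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
w = np.array([0.5, -1.2, 2.0, 0.3])
b = 1.0
preds = X @ w + b

# For a linear model with independent features, exact SHAP values are
# w_j * (x_j - E[x_j]) and the base value is the mean prediction E[f(X)].
base_value = preds.mean()
shap_values = (X - X.mean(axis=0)) * w

# Additivity checkpoint: base_value + sum of attributions must
# reconstruct each prediction before any visualization step.
reconstructed = base_value + shap_values.sum(axis=1)
assert np.allclose(reconstructed, preds), "SHAP additivity violated"
```

With a real explainer the same assertion compares `explainer.expected_value + shap_values.sum(axis=1)` against the model's raw (margin) output; a failure usually means the wrong output space (probability vs. log-odds) or a mismatched background dataset.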

Dimension scores:

Conciseness (1 / 3): The skill is extremely verbose at ~400+ lines. It explains concepts Claude already knows (what SHAP is, what Shapley values are, and similarly textbook-level ML background), includes an unnecessary 'When to Use This Skill' section listing trigger phrases, repeats information across sections (e.g., explainer types listed multiple times), and includes an overview section that restates the description. The 'Key Concepts' section explains basic SHAP theory Claude already knows. The 'Reference Documentation' section extensively describes what's in each reference file rather than just linking to them.

Actionability (3 / 3): The skill provides fully executable, copy-paste ready Python code throughout. Examples cover setup, computation, visualization, cohort comparison, debugging, MLflow integration, and production API patterns. Code snippets are complete and use real library APIs correctly.

Workflow Clarity (2 / 3): Multiple workflows are listed with clear steps, but most workflows (2-6) defer to references/workflows.md for actual details, leaving only step titles without validation checkpoints or feedback loops. Workflow 1 has code but lacks validation steps. The debugging workflow doesn't include explicit verification that fixes resolved the issue.

Progressive Disclosure (2 / 3): The skill references four well-organized reference files (explainers.md, plots.md, workflows.md, theory.md) with clear descriptions of when to load each. However, no bundle files were provided, so the references may not exist. Additionally, too much content is inline in SKILL.md itself: the Key Concepts, Performance Optimization, Troubleshooting, and Integration sections could live in reference files, making the main file leaner.

Total: 8 / 12

Passed

Passed

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 checks passed

Validation for skill structure:

skill_md_line_count (Warning): SKILL.md is long (565 lines); consider splitting into references/ and linking.

metadata_version (Warning): 'metadata.version' is missing.

Total: 9 / 11

Passed

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)

