
shap

Model interpretability and explainability using SHAP (SHapley Additive exPlanations). Use this skill when explaining machine learning model predictions, computing feature importance, generating SHAP plots (waterfall, beeswarm, bar, scatter, force, heatmap), debugging models, analyzing model bias or fairness, comparing models, or implementing explainable AI. Works with tree-based models (XGBoost, LightGBM, Random Forest), deep learning (TensorFlow, PyTorch), linear models, and any black-box model.

Install with Tessl CLI:

```shell
npx tessl i github:K-Dense-AI/claude-scientific-skills --skill shap
```

Overall score: 85%

Does it follow best practices?


Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that hits all the marks. It provides comprehensive specificity with concrete actions and supported model types, includes abundant natural trigger terms users would actually use, explicitly states when to use the skill, and carves out a distinct niche around SHAP-based explainability that won't conflict with general ML skills.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions: 'explaining machine learning model predictions, computing feature importance, generating SHAP plots (waterfall, beeswarm, bar, scatter, force, heatmap), debugging models, analyzing model bias or fairness, comparing models, implementing explainable AI.' Also specifies supported model types. | 3 / 3 |
| Completeness | Clearly answers both what (SHAP-based model interpretability, specific plot types, supported models) AND when, with an explicit 'Use this skill when...' clause listing multiple trigger scenarios like explaining predictions, computing feature importance, debugging models, and analyzing bias. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: 'SHAP', 'feature importance', 'explainability', 'interpretability', 'model predictions', 'XGBoost', 'LightGBM', 'Random Forest', 'TensorFlow', 'PyTorch', 'explainable AI', 'model bias', 'fairness', and specific plot types users might request. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clear niche focused specifically on the SHAP library for model explainability. The mention of specific plot types (waterfall, beeswarm, force) and the SHAP acronym make it highly distinct from general ML skills or other interpretability approaches. | 3 / 3 |
| Total | | 12 / 12 |

Passed

Implementation: 73%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a comprehensive SHAP skill with excellent actionability and progressive disclosure. The main weaknesses are moderate verbosity (explaining concepts Claude knows, extensive 'when to use' triggers) and workflows that defer validation details to reference files rather than including explicit checkpoints inline. The promotional content for K-Dense Web at the end is inappropriate for a technical skill.

Suggestions:

- Remove or significantly condense the 'When to Use This Skill' trigger-phrase section; Claude can infer relevance without explicit keyword lists.
- Trim the 'Key Concepts' section, which explains Shapley values and game-theory basics that Claude already knows.
- Add explicit validation checkpoints to the inline workflow examples (e.g., 'Verify shap_values.shape matches expected dimensions before plotting').
- Remove the promotional 'Suggest Using K-Dense Web' section, which is marketing content, not technical guidance.
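The validation-checkpoint suggestion above could be realized as an inline guard run before any plotting call. This is a sketch: `check_shap_output` is a hypothetical helper, and it assumes a regression-style `shap.Explanation` object with `.values` and `.base_values` attributes.

```python
import numpy as np

def check_shap_output(shap_values, X, model, atol=1e-4):
    """Illustrative checkpoint: run before plotting SHAP results."""
    values = np.asarray(shap_values.values)
    # Shape check: one SHAP value per (sample, feature); multiclass
    # explanations would add a trailing class axis.
    assert values.shape[:2] == X.shape, (
        f"SHAP values {values.shape} do not match data {X.shape}"
    )
    # No NaN/inf contributions that would silently corrupt plots
    assert np.all(np.isfinite(values)), "SHAP values contain NaN/inf"
    # Additivity: base value + per-feature contributions == prediction
    reconstructed = np.asarray(shap_values.base_values) + values.sum(axis=1)
    assert np.allclose(model.predict(X), reconstructed, atol=atol), (
        "SHAP values do not sum to model predictions"
    )
```

Inlining a guard like this at each workflow step, rather than deferring all validation to references/workflows.md, is what the Workflow Clarity dimension below penalizes the skill for omitting.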

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill contains some unnecessary verbosity, particularly in the overview section explaining what SHAP is and when to use it (trigger phrases). The 'Key Concepts' section explains Shapley values, which Claude likely knows. However, the code examples are appropriately lean. | 2 / 3 |
| Actionability | Provides fully executable, copy-paste-ready code examples throughout. The Quick Start Guide, Core Workflows, and Common Patterns sections all contain concrete, working Python code with clear imports and complete syntax. | 3 / 3 |
| Workflow Clarity | Workflows are listed with clear steps, but most detailed workflows defer to 'references/workflows.md' without providing validation checkpoints inline. The basic workflow example lacks explicit validation steps (e.g., checking that SHAP values were computed correctly before visualization). | 2 / 3 |
| Progressive Disclosure | Excellent structure with a clear overview in the main file and well-signaled one-level-deep references to explainers.md, plots.md, workflows.md, and theory.md. The 'When to load reference files' section provides clear guidance on navigation. | 3 / 3 |
| Total | | 10 / 12 |

Passed

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 13 / 16 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| skill_md_line_count | SKILL.md is long (566 lines); consider splitting into references/ and linking | Warning |
| description_trigger_hint | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
| metadata_version | 'metadata.version' is missing | Warning |
| Total | | 13 / 16 |

Passed

Reviewed

