
model-explainability-tool

Model Explainability Tool - Auto-activating skill for ML Training. Triggers on: model explainability tool, model explainability tool. Part of the ML Training skill category.

Install with Tessl CLI

npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill model-explainability-tool

Overall score: 19%

Does it follow best practices?


Activation: 7%

This description is severely underdeveloped - it's essentially just a title with metadata rather than a functional skill description. It provides no concrete actions, has duplicate/redundant trigger terms, and lacks any 'Use when...' guidance. Claude would struggle to know when to select this skill or what capabilities it provides.

Suggestions

Add specific concrete actions the skill performs, e.g., 'Generates SHAP values, visualizes feature importance, creates partial dependence plots, explains individual predictions'

Add a 'Use when...' clause with natural trigger terms like 'Use when the user asks to explain model predictions, understand feature importance, interpret ML model behavior, or debug model decisions'

Include natural user phrases as triggers: 'explain why the model predicted', 'feature importance', 'model interpretation', 'SHAP', 'LIME', 'black box explanation'

Dimension scores

Specificity (1 / 3): The description only names the tool ('Model Explainability Tool') without describing any concrete actions. There are no verbs indicating what the skill actually does - no mention of specific capabilities like 'generates explanations', 'visualizes feature importance', or 'analyzes model predictions'.

Completeness (1 / 3): The description fails to answer 'what does this do' beyond naming itself, and the 'when' guidance is just a duplicate of the skill name. There is no explicit 'Use when...' clause or meaningful trigger guidance.

Trigger Term Quality (1 / 3): The trigger terms are just the skill name repeated twice ('model explainability tool, model explainability tool'). Missing natural user phrases like 'explain model', 'feature importance', 'SHAP values', 'interpret predictions', 'why did the model predict', etc.

Distinctiveness / Conflict Risk (2 / 3): The term 'Model Explainability' is somewhat specific to a niche domain (ML interpretability), which provides some distinctiveness. However, without concrete actions described, it could still overlap with general ML or data science skills.

Total: 5 / 12 (Passed)

Implementation: 0%

This skill content is essentially a placeholder template with no actual substance. It contains zero actionable information about model explainability - no mention of specific tools (SHAP, LIME, Captum, etc.), no code examples, no workflows for generating explanations, and no guidance on interpreting results. The content would be completely useless for helping anyone with model explainability tasks.

Suggestions

Add concrete code examples for at least one explainability library (e.g., SHAP feature importance, LIME local explanations) with executable Python code
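
For example, a SHAP feature-importance snippet along these lines would satisfy this suggestion. This is a minimal sketch rather than anything taken from the skill itself: it assumes the shap and scikit-learn packages are installed and uses a tree-based regressor on a demo dataset so that TreeExplainer applies.

import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit a model to explain
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Tree-specific SHAP explainer: one contribution value per feature per prediction
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view of feature importance across the dataset
shap.summary_plot(shap_values, X)

# Contributions for a single prediction (local explanation)
print(dict(zip(X.columns, shap_values[0])))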

Include a workflow for generating and validating model explanations: train model -> generate explanations -> validate consistency -> visualize results
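
A workflow along those lines might look like the following sketch. The consistency check shown here (rank-correlating mean absolute SHAP values against scikit-learn's permutation importance) is one plausible validation step chosen for illustration, not something prescribed by the skill or this review.

import numpy as np
import shap
from scipy.stats import spearmanr
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# 1. Train
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# 2. Generate explanations
shap_values = shap.TreeExplainer(model).shap_values(X_test)
shap_importance = np.abs(shap_values).mean(axis=0)

# 3. Validate consistency against an independent importance estimate
perm = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
rho, _ = spearmanr(shap_importance, perm.importances_mean)
print(f"Rank agreement between SHAP and permutation importance: {rho:.2f}")

# 4. Visualize
shap.summary_plot(shap_values, X_test)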

Remove all generic boilerplate ('provides automated assistance', 'follows best practices') and replace with specific techniques: feature importance, partial dependence plots, attention visualization, etc.
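
As an illustration of one such technique, a partial dependence plot can be generated directly with scikit-learn. This is a hedged sketch assuming a fitted tree ensemble on the diabetes demo dataset; the feature names ('bmi', 's5') are chosen only for illustration.

import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Average model response as each selected feature varies, holding the others fixed
PartialDependenceDisplay.from_estimator(model, X, features=["bmi", "s5"])
plt.show()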

Add references to detailed guides for different explainability approaches (global vs local, model-agnostic vs model-specific) with clear navigation

Dimension scores

Conciseness (1 / 3): The content is padded with generic boilerplate that provides no actual information about model explainability tools. Phrases like 'provides automated assistance' and 'follows industry best practices' are meaningless filler that Claude doesn't need.

Actionability (1 / 3): There is zero concrete guidance - no code, no commands, no specific techniques, no tool names (SHAP, LIME, etc.), no examples. The content describes what the skill supposedly does rather than instructing how to do anything.

Workflow Clarity (1 / 3): No workflow is provided whatsoever. The content claims to provide 'step-by-step guidance' but contains no actual steps. There are no processes, sequences, or validation checkpoints for any explainability task.

Progressive Disclosure (1 / 3): The content is a monolithic block of vague marketing-style text with no structure pointing to detailed materials, no references to implementation guides, and no organization of explainability concepts or techniques.

Total: 4 / 12 (Passed)

Validation: 69%

Validation checks: 11 / 16 passed

Validation for skill structure

Criteria and results

description_trigger_hint (Warning): Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...')

allowed_tools_field (Warning): 'allowed-tools' contains unusual tool name(s)

metadata_version (Warning): 'metadata' field is not a dictionary

frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing or moving to metadata

body_steps (Warning): No step-by-step structure detected (no ordered list); consider adding a simple workflow

Total: 11 / 16 passed



Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.