
explaining-machine-learning-models

This skill enables AI assistant to provide interpretability and explainability for machine learning models. It is triggered when the user requests explanations for model predictions, insights into feature importance, or help understanding model behavior... Use when appropriate context detected. Trigger with relevant phrases based on skill purpose.

27

Quality

11%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/ai-ml/model-explainability-tool/skills/explaining-machine-learning-models/SKILL.md
SKILL.md
Quality
Evals
Security

Quality

Discovery

22%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description suffers from vague, abstract language and a completely generic 'Use when' clause that provides no actionable trigger guidance. It identifies the ML interpretability domain but fails to list concrete actions or specific techniques. The boilerplate trigger phrase ('Use when appropriate context detected') adds no value and suggests the description was auto-generated rather than thoughtfully crafted.

Suggestions

Replace the generic 'Use when appropriate context detected' with specific triggers like 'Use when the user asks to explain model predictions, compute SHAP values, visualize feature importance, or understand why a model made a specific decision.'

List concrete actions such as 'Generates SHAP explanations, computes feature importance rankings, creates partial dependence plots, produces LIME local explanations, and visualizes decision boundaries.'

Add natural trigger terms users would say, such as 'SHAP', 'LIME', 'explain prediction', 'why did the model predict', 'feature contributions', 'model transparency', and 'black box model'.
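Putting the suggestions above together, a sharper frontmatter description might look like the following sketch (the phrasing is illustrative, not the skill's actual metadata):

```yaml
---
name: explaining-machine-learning-models
description: >
  Explains machine learning model predictions using SHAP, LIME, and
  built-in feature importance. Generates SHAP summary and force plots,
  computes feature importance rankings, and produces LIME local
  explanations. Use when the user asks to explain model predictions,
  compute SHAP values, visualize feature importance, or understand why
  a model made a specific decision. Trigger terms: "SHAP", "LIME",
  "explain prediction", "feature contributions", "black box model".
---
```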

Dimension / Reasoning / Score

Specificity

The description uses vague language like 'provide interpretability and explainability' and 'insights into feature importance' without listing concrete actions (e.g., generate SHAP plots, compute LIME explanations, produce partial dependence plots). It also uses first/second person framing ('enables AI assistant') which warrants a penalty, but the score is already at the minimum.

1 / 3

Completeness

It weakly addresses 'what' (interpretability/explainability) and has a vague 'when' clause, but the trigger guidance ('Use when appropriate context detected. Trigger with relevant phrases based on skill purpose.') is pure boilerplate, effectively leaving the 'when' unanswered.

1 / 3

Trigger Term Quality

It includes some relevant keywords like 'model predictions', 'feature importance', and 'model behavior' that users might naturally say, but misses common variations like 'SHAP', 'LIME', 'explain predictions', 'why did the model predict', 'feature contributions', or specific model types.

2 / 3

Distinctiveness / Conflict Risk

The ML interpretability/explainability domain is somewhat specific and wouldn't overlap with most other skills, but the vague language ('model behavior', 'understanding') could conflict with general ML or data science skills.

2 / 3

Total: 6 / 12

Passed

Implementation

0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is almost entirely generic boilerplate with no actionable content. It describes the concept of model explainability at a high level but provides zero executable code for SHAP, LIME, or feature importance analysis. Multiple sections ('Prerequisites', 'Instructions', 'Output', 'Error Handling', 'Resources') appear to be template placeholders with no substantive content.

Suggestions

Replace the abstract examples with concrete, executable Python code showing SHAP value calculation (e.g., `shap.TreeExplainer(model).shap_values(X)`) and LIME explanations (e.g., `lime.lime_tabular.LimeTabularExplainer`)

Remove all generic placeholder sections (Prerequisites, Instructions, Output, Error Handling, Resources, Integration) that contain no skill-specific information

Add a decision matrix or concrete criteria for when to use SHAP vs LIME vs built-in feature importance (e.g., 'tree-based models → use shap.TreeExplainer; black-box models → use LIME')

Include specific visualization code examples (e.g., `shap.summary_plot()`, `shap.force_plot()`) instead of the vague 'use visualizations' best practice

Dimension / Reasoning / Score

Conciseness

Extremely verbose with extensive explanation of concepts Claude already knows. Sections like 'How It Works', 'When to Use This Skill', 'Integration', 'Prerequisites', 'Instructions', 'Output', 'Error Handling', and 'Resources' are all generic filler that provide no actionable information. The skill explains what explainability is rather than how to do it.

1 / 3

Actionability

No executable code, no concrete commands, no specific library usage examples. The examples describe what the skill 'will do' in abstract terms rather than providing actual SHAP/LIME code snippets. Sections like 'Instructions' say 'Invoke this skill when trigger conditions are met' which is completely non-actionable.

1 / 3

Workflow Clarity

The 'How It Works' section lists abstract steps like 'Analyze Context' and 'Select Explanation Technique' without any concrete guidance on how to perform these steps. No validation checkpoints, no error recovery loops, no specific commands or decision criteria for choosing between SHAP and LIME.

1 / 3

Progressive Disclosure

Monolithic wall of text with no references to external files and no bundle files to support it. Content is poorly organized with multiple generic placeholder sections (Prerequisites, Instructions, Output, Error Handling, Resources) that contain no real information. No clear navigation structure.

1 / 3

Total: 4 / 12

Passed

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

Criteria / Description / Result

allowed_tools_field

'allowed-tools' contains unusual tool name(s)

Warning

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 9 / 11

Passed

Repository
jeremylongshore/claude-code-plugins-plus-skills
Reviewed
