
explaining-machine-learning-models

tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill explaining-machine-learning-models

Build this skill enables AI assistant to provide interpretability and explainability for machine learning models. it is triggered when the user requests explanations for model predictions, insights into feature importance, or help understanding model behavior... Use when appropriate context detected. Trigger with relevant phrases based on skill purpose.

Overall: 35%

Validation: 81%
| Criteria | Description | Result |
| --- | --- | --- |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |

Total: 13 / 16 (Passed)

Implementation: 13%

This skill's content is largely boilerplate with minimal actionable guidance. It describes model explainability concepts at a high level without providing any executable code, specific library usage, or concrete implementation details. The generic placeholder sections (Prerequisites, Instructions, Error Handling, Resources) add no value and waste tokens.

Suggestions

- Replace abstract descriptions with executable Python code showing actual SHAP/LIME usage (e.g., `import shap; explainer = shap.TreeExplainer(model); shap_values = explainer.shap_values(X)`); a fuller sketch follows this list.
- Remove generic boilerplate sections (Prerequisites, Error Handling, Resources) that contain no specific information, or populate them with actual content.
- Add concrete validation steps such as checking explanation consistency, verifying feature alignment with model inputs, and sanity-checking SHAP value sums.
- Convert the 'Examples' section from descriptions of what the skill will do into actual code snippets with sample inputs and expected outputs.
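For illustration, a minimal sketch of the kind of executable example the skill could include. It assumes the `shap` and `scikit-learn` packages are installed and uses a toy tree-ensemble regressor on the diabetes dataset as a stand-in for the user's model; the names `model` and `X` are placeholders, not anything defined by the skill itself.

```python
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Toy stand-in for the user's fitted model and data.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Exact SHAP attributions for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Global feature importance: mean absolute SHAP value per feature.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.3f}")

# Sanity check (additivity): base value plus per-feature attributions
# should reconstruct the model's prediction for every row.
reconstructed = explainer.expected_value + shap_values.sum(axis=1)
assert np.allclose(reconstructed, model.predict(X), atol=1e-4)
```

A snippet along these lines, paired with its expected output, would cover the actionability, validation, and 'Examples' suggestions above in a few dozen lines.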

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose with extensive padding. Explains obvious concepts Claude already knows (what SHAP/LIME are, what model explainability means), includes generic boilerplate sections ('Prerequisites', 'Error Handling', 'Resources') with no actual content, and the 'Instructions' section is completely generic placeholder text. | 1 / 3 |
| Actionability | No executable code, no concrete commands, no specific library imports or function calls. Examples describe what 'the skill will do' abstractly rather than providing actual implementation. Phrases like 'Calculate SHAP values' give no guidance on how to actually do this. | 1 / 3 |
| Workflow Clarity | The 'How It Works' section provides a basic sequence (analyze, select technique, generate, present), and examples show numbered steps. However, there are no validation checkpoints, no error recovery guidance, and no concrete verification steps for ensuring explanations are valid or accurate. | 2 / 3 |
| Progressive Disclosure | Monolithic wall of text with no references to external files. Generic sections like 'Resources' mention 'Project documentation' and 'Related skills' without any actual links. Content that could be split (examples, best practices) is all inline with no clear navigation structure. | 1 / 3 |

Total: 5 / 12 (Passed)

Activation: 33%

This description suffers from placeholder boilerplate text that undermines its utility. While it identifies the ML interpretability domain and mentions some relevant concepts, the 'Use when' clause is completely generic filler that provides zero guidance for skill selection. The description also incorrectly uses 'Build this skill enables AI assistant' framing rather than third-person action verbs.

Suggestions

- Replace the placeholder 'Use when appropriate context detected. Trigger with relevant phrases based on skill purpose' with specific triggers like 'Use when user asks to explain predictions, understand why a model made a decision, visualize feature importance, or mentions SHAP, LIME, or model explainability'.
- Rewrite the opening to use third-person voice with concrete actions: 'Provides interpretability and explainability for ML models by generating SHAP values, feature importance plots, and prediction explanations.'
- Add specific tool/technique keywords users would naturally mention: 'SHAP', 'LIME', 'partial dependence plots', 'explain this prediction', 'why did the model predict X'.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (ML interpretability/explainability) and mentions some actions like 'explanations for model predictions' and 'feature importance', but lacks concrete specific actions like 'generate SHAP plots' or 'compute feature attribution scores'. | 2 / 3 |
| Completeness | The 'what' is partially addressed but weak, and the 'when' clause is completely useless boilerplate ('Use when appropriate context detected') that provides no actual guidance on when to trigger this skill. | 1 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'model predictions', 'feature importance', 'model behavior', but the generic 'Use when appropriate context detected. Trigger with relevant phrases based on skill purpose' is placeholder text that adds no value and misses natural user phrases like 'why did the model predict', 'explain this prediction', 'SHAP', 'LIME'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The ML interpretability focus provides some distinction, but 'model behavior' and 'predictions' are broad enough to potentially conflict with general ML/data science skills. The placeholder trigger text does nothing to establish clear boundaries. | 2 / 3 |

Total: 7 / 12 (Passed)

