Model Explainability Tool - Auto-activating skill for ML Training. Triggers on: model explainability tool, model explainability tool Part of the ML Training skill category.
Quality: 0% (Does it follow best practices?)
Impact: 100% (1.00x average score across 3 eval scenarios)
Passed: No known issues

Optimize this skill with Tessl:

npx tessl skill review --optimize ./planned-skills/generated/07-ml-training/model-explainability-tool/SKILL.md

Quality
Discovery: 0%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is essentially a placeholder that provides no meaningful information about what the skill does or when it should be used. It repeats the skill name as its only trigger term and lacks any concrete actions, natural keywords, or explicit usage guidance. It would be nearly impossible for Claude to correctly select this skill from a pool of ML-related skills.
Suggestions
Add specific concrete actions the skill performs, e.g., 'Generates SHAP values, feature importance plots, partial dependence plots, and LIME explanations for trained ML models.'
Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks about model interpretability, feature importance, SHAP values, explaining predictions, or understanding why a model made a specific decision.'
Remove the duplicate trigger term and expand with varied natural language terms users would actually say, such as 'explain model', 'feature contributions', 'prediction explanation', 'interpretable ML'.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names a domain ('Model Explainability Tool') but provides no concrete actions. There is no indication of what the tool actually does: no verbs like 'generates', 'visualizes', 'computes', etc. | 1 / 3 |
| Completeness | The description fails to answer 'what does this do' beyond naming itself, and the 'when' clause essentially restates the skill name as a trigger. There is no explicit 'Use when...' guidance with meaningful triggers. | 1 / 3 |
| Trigger Term Quality | The only trigger term listed is 'model explainability tool', repeated twice. There are no natural user-facing keywords like 'SHAP', 'feature importance', 'interpretability', 'explain predictions', or other common variations a user would say. | 1 / 3 |
| Distinctiveness / Conflict Risk | The description is extremely generic within the ML domain. 'Model explainability tool' could overlap with many ML-related skills, and without specific actions or distinct triggers, it would be difficult to distinguish from other ML skills. | 1 / 3 |
| Total | | 4 / 12 Passed |
Implementation: 0%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is an empty shell with no actionable content. It consists entirely of boilerplate meta-descriptions that repeat 'model explainability tool' without ever explaining what it is, how to use any explainability techniques, or providing any code or concrete guidance. It would provide zero value to Claude in performing any actual task.
Suggestions
Replace the boilerplate with actual executable examples using specific explainability libraries (e.g., SHAP values with `shap.Explainer`, LIME with `lime.lime_tabular`, or built-in feature importances) with copy-paste-ready code; a minimal sketch follows these suggestions.
Add a concrete workflow: e.g., 1. Train model → 2. Choose explainability method based on model type → 3. Generate explanations → 4. Validate explanations make sense → 5. Visualize/export results.
Remove all meta-sections (Purpose, When to Use, Example Triggers, Capabilities) that describe the skill itself rather than teaching how to do the task—these waste tokens without adding value.
Include a decision guide for choosing between explainability approaches (e.g., SHAP for tree models, LIME for black-box models, attention weights for transformers) with specific code for each; see the second sketch below.
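
The kind of copy-paste-ready example the first suggestion asks for might look like the following minimal sketch. It assumes the `shap` and `scikit-learn` packages are available and uses a bundled regression dataset purely for illustration; it is not content from the reviewed skill. The numbered comments mirror the workflow proposed above (train, choose method, explain, visualize).

```python
# Hypothetical example of what the skill could include; not taken from the
# reviewed SKILL.md. Assumes shap and scikit-learn are installed.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# 1. Train a model
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# 2. Choose an explainability method: for tree ensembles, shap.Explainer
#    dispatches to the fast tree-based explainer
explainer = shap.Explainer(model, X_train)

# 3. Generate explanations for a slice of held-out rows
shap_values = explainer(X_test.iloc[:100])

# 4. Visualize: global feature importance and a single-prediction breakdown
shap.plots.bar(shap_values)
shap.plots.waterfall(shap_values[0])
```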
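
For the decision-guide suggestion, the black-box branch could be sketched with LIME. Again a hedged illustration: the dataset, model, and parameter choices below are placeholders, and the skill would select whichever explainer matches the model type.

```python
# Hypothetical black-box branch of the decision guide: use LIME when only a
# predict_proba-style function is available. Assumes the lime package is installed.
import lime.lime_tabular
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)
model = make_pipeline(StandardScaler(), MLPClassifier(max_iter=500, random_state=0)).fit(X_train, y_train)

explainer = lime.lime_tabular.LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one prediction: which features pushed the model toward each class
explanation = explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
print(explanation.as_list())
```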
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is entirely filler and meta-description. It explains what the skill does in abstract terms without providing any actual knowledge or instructions. Every section restates the same vague concept ('model explainability tool') without adding substance. | 1 / 3 |
| Actionability | There is zero concrete guidance: no code, no commands, no specific techniques, no tool recommendations (e.g., SHAP, LIME, integrated gradients). The content describes rather than instructs, offering only vague promises like 'provides step-by-step guidance' without actually providing any. | 1 / 3 |
| Workflow Clarity | No workflow, steps, or process is defined. The skill claims to provide 'step-by-step guidance' but contains no steps whatsoever. There are no validation checkpoints or sequenced instructions. | 1 / 3 |
| Progressive Disclosure | The content is a flat, monolithic block of meta-descriptions with no meaningful structure. There are no references to detailed files, no quick-start section, and no navigation to deeper content. The sections that exist (Purpose, When to Use, Capabilities, etc.) are all boilerplate with no real content. | 1 / 3 |
| Total | | 4 / 12 Passed |
Validation: 81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation: 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |