Model Explainability Tool

> Auto-activating skill for ML Training. Triggers on: model explainability tool, model explainability tool. Part of the ML Training skill category.
- Quality: 3% (Does it follow best practices?)
- Impact: 100% (1.00x average score across 3 eval scenarios)
- Status: Passed, no known issues
Optimize this skill with Tessl:

```shell
npx tessl skill review --optimize ./planned-skills/generated/07-ml-training/model-explainability-tool/SKILL.md
```

Quality
Discovery: 7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is severely underdeveloped, functioning more as a placeholder than a useful skill description. It lacks any concrete actions, meaningful trigger terms, or guidance on when Claude should select this skill. The redundant trigger term and absence of capability details make it nearly useless for skill selection among multiple options.
Suggestions
- Add specific concrete actions the skill performs, e.g., 'Generates SHAP values, feature importance plots, partial dependence plots, and model prediction explanations'
- Include a 'Use when...' clause with natural user language: 'Use when the user asks why a model made a prediction, wants to understand feature importance, or needs to debug model behavior'
- Add varied trigger terms users would naturally say: 'explain predictions', 'feature importance', 'model interpretability', 'SHAP', 'LIME', 'why did the model predict'
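Taken together, these suggestions could yield a description like the following sketch. The field names assume a typical SKILL.md frontmatter layout, and the wording is illustrative, not prescriptive:

```yaml
---
name: model-explainability-tool
description: >
  Generates SHAP values, feature importance plots, partial dependence plots,
  and per-prediction explanations for trained ML models. Use when the user
  asks why a model made a prediction, wants to understand feature importance,
  or needs to debug model behavior. Triggers: explain predictions, feature
  importance, model interpretability, SHAP, LIME, why did the model predict.
---
```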
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description only names the tool ('Model Explainability Tool') without describing any concrete actions. There are no verbs indicating what the skill actually does: no 'analyzes', 'generates', 'visualizes', or similar action words. | 1 / 3 |
| Completeness | The description fails to answer 'what does this do' (no capabilities listed), and the 'when' clause is essentially just the skill name repeated. There is no explicit 'Use when...' guidance with meaningful triggers. | 1 / 3 |
| Trigger Term Quality | The trigger terms are redundant ('model explainability tool' repeated twice) and overly technical. Users are unlikely to say 'model explainability tool' naturally; they might say 'explain model predictions', 'feature importance', 'SHAP values', or 'why did the model predict this'. | 1 / 3 |
| Distinctiveness / Conflict Risk | The mention of 'ML Training skill category' and 'model explainability' provides some domain specificity, but without concrete actions or clear triggers, it could overlap with other ML-related skills. The niche is identifiable but poorly defined. | 2 / 3 |
| Total | | 5 / 12 (Passed) |
Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is an empty template with no substantive content. It describes what a model explainability skill should do without providing any actual guidance, code examples, tool recommendations (SHAP, LIME, etc.), or workflows. The content would be useless for helping Claude perform model explainability tasks.
Suggestions
- Add concrete code examples for common explainability tools (e.g., SHAP values, LIME explanations, feature importance extraction)
- Include a clear workflow for when to use different explainability techniques (local vs. global explanations, model-agnostic vs. model-specific)
- Provide executable examples showing how to generate and interpret explanations for different model types (tree-based, neural networks, linear models)
- Remove the generic boilerplate sections ('Purpose', 'Capabilities', 'Example Triggers') and replace them with actionable technical content
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is padded with generic boilerplate that provides no actual information about model explainability tools. Every section describes what the skill does abstractly rather than providing concrete guidance Claude could use. | 1 / 3 |
| Actionability | No concrete code, commands, or specific techniques are provided. The content only describes capabilities in vague terms ('provides step-by-step guidance', 'generates production-ready code') without actually delivering any of them. | 1 / 3 |
| Workflow Clarity | No workflow, steps, or process is defined. The skill claims to provide 'step-by-step guidance' but contains zero actual steps for implementing model explainability. | 1 / 3 |
| Progressive Disclosure | No references to detailed materials, and no links to examples or advanced content. The structure exists but contains only placeholder-level content with no actual information to disclose. | 1 / 3 |
| Total | | 4 / 12 (Passed) |
Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 (Passed) |