feature-importance-analyzer

Feature Importance Analyzer - Auto-activating skill for ML Training. Triggers on: feature importance analyzer, feature importance analyzer Part of the ML Training skill category.

Quality: 3% (Does it follow best practices?)

Impact: 90% (1.11x average score across 3 eval scenarios)

Security (by Snyk): Passed, no known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./planned-skills/generated/07-ml-training/feature-importance-analyzer/SKILL.md

Quality

Discovery

7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is extremely weak: it reads more like a label than a functional description. It provides no concrete actions, no meaningful trigger terms beyond a duplicated phrase, and no explicit guidance on when Claude should select this skill. In a large skill library it would be nearly indistinguishable from other ML skills.

Suggestions

- Add specific concrete actions the skill performs, e.g., 'Computes and visualizes feature importance using SHAP values, permutation importance, and tree-based importance scores for trained ML models.'
- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks about which features matter, feature ranking, variable importance, SHAP analysis, or understanding model predictions.'
- Include diverse natural-language trigger terms users might say, such as 'feature selection', 'important features', 'SHAP values', 'permutation importance', 'model interpretability', 'feature ranking'.
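The first suggestion can be made concrete with a short sketch. The example below shows the two most common techniques (tree-based impurity importances and permutation importance) using scikit-learn; the dataset and model are illustrative assumptions, not anything taken from the skill itself:

```python
# Illustrative sketch: ranking features for a trained model.
# Dataset and model choices are assumptions for demonstration only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Tree-based (impurity) importances: fast, but can favor
# high-cardinality features.
impurity_rank = sorted(
    zip(X.columns, model.feature_importances_),
    key=lambda t: t[1], reverse=True,
)

# Permutation importance: model-agnostic, measured on held-out data.
perm = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
perm_rank = sorted(
    zip(X.columns, perm.importances_mean),
    key=lambda t: t[1], reverse=True,
)

print(impurity_rank[:5])
print(perm_rank[:5])
```

The two rankings often disagree on exact ordering; reporting both, as here, is one way a skill could make that caveat explicit.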

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description names a domain ('Feature Importance Analyzer', 'ML Training') but provides no concrete actions. There is no indication of what the skill actually does: no verbs describing specific capabilities like 'ranks features', 'computes SHAP values', 'generates importance plots', etc. | 1 / 3 |
| Completeness | The description fails to answer both 'what does this do' and 'when should Claude use it'. There is no explanation of capabilities and no explicit 'Use when...' clause with meaningful trigger guidance. | 1 / 3 |
| Trigger Term Quality | The trigger terms listed are just 'feature importance analyzer' repeated twice. There are no natural user-language variations such as 'feature selection', 'variable importance', 'SHAP', 'permutation importance', 'which features matter', etc. | 1 / 3 |
| Distinctiveness / Conflict Risk | The term 'Feature Importance Analyzer' is somewhat specific to a niche within ML, which provides some distinctiveness. However, the vague 'ML Training' category and lack of concrete scope could cause overlap with other ML-related skills. | 2 / 3 |
| Total | | 5 / 12 |

Passed

Implementation

0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is an empty template with no actual instructional content. It repeatedly references 'feature importance analyzer' without ever defining what it means, how to do it, or providing any executable code or concrete guidance. It fails on every dimension of the rubric.

Suggestions

- Add concrete, executable code examples for feature importance techniques (e.g., SHAP values, permutation importance with sklearn, tree-based feature importances) with specific imports and function calls.
- Define a clear workflow, e.g., 1) Train model, 2) Compute feature importances using method X, 3) Validate results, 4) Visualize/report, with actual code at each step.
- Remove all meta-description sections (Purpose, When to Use, Capabilities, Example Triggers) that describe the skill rather than teaching the task. Replace with actionable content.
- Add specific examples showing input data format, expected output (e.g., ranked feature list, importance scores), and interpretation guidance.
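The four-step workflow suggested above might be sketched as follows. The dataset, model, and the two-sigma stability filter are illustrative assumptions, not content from the skill:

```python
# Minimal end-to-end workflow of the kind the suggestions describe:
# 1) train, 2) compute importances, 3) validate, 4) report.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# 1) Train a model on an illustrative regression dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# 2) Compute permutation importances on held-out data.
result = permutation_importance(
    model, X_te, y_te, n_repeats=20, random_state=0
)

# 3) Validate: keep only features whose mean importance clears
#    its own noise band (an assumed two-sigma rule of thumb).
stable = [
    (name, mean)
    for name, mean, std in zip(
        X.columns, result.importances_mean, result.importances_std
    )
    if mean - 2 * std > 0
]

# 4) Report as a ranked list of (feature, importance) pairs.
report = sorted(stable, key=lambda t: t[1], reverse=True)
for name, score in report:
    print(f"{name}: {score:.4f}")
```

A skill written this way gives the agent a sequenced procedure with a validation checkpoint, rather than a restatement of its own title.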

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The content is entirely filler and meta-description. It explains what the skill does in abstract terms without providing any actual technical content. Every section restates the same vague idea ('feature importance analyzer') without adding substance. | 1 / 3 |
| Actionability | There is zero concrete guidance: no code, no commands, no specific techniques, no examples of feature importance methods (e.g., SHAP, permutation importance, tree-based importance). It only describes rather than instructs. | 1 / 3 |
| Workflow Clarity | No workflow, steps, or process is defined. The skill claims to provide 'step-by-step guidance' but contains none. There are no validation checkpoints or sequenced instructions. | 1 / 3 |
| Progressive Disclosure | The content is a flat, repetitive document with no references to detailed materials, no links to examples or advanced guides, and no meaningful structural organization beyond boilerplate headings. | 1 / 3 |
| Total | | 4 / 12 |

Passed

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 |

Passed
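Both warnings concern the SKILL.md frontmatter. A minimal frontmatter sketch that would address them might look like the following; the key names and tool names here are assumptions inferred from the warnings, not the skill's actual file:

```markdown
---
name: feature-importance-analyzer
description: >
  Computes and ranks feature importance for trained ML models using
  permutation importance and tree-based scores. Use when the user asks
  which features matter, about feature ranking, variable importance,
  or SHAP analysis.
allowed-tools: Bash, Read, Write
---
```

Unknown keys would be dropped or moved under a metadata section, and 'allowed-tools' would list only tool names the agent actually recognizes.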

Repository: jeremylongshore/claude-code-plugins-plus-skills

Reviewed
