
scikit-learn

Machine learning in Python with scikit-learn. Use when working with supervised learning (classification, regression), unsupervised learning (clustering, dimensionality reduction), model evaluation, hyperparameter tuning, preprocessing, or building ML pipelines. Provides comprehensive reference documentation for algorithms, preprocessing techniques, pipelines, and best practices.

Score: 88 (1.10x)

Quality: 75% (Does it follow best practices?)

Impact: 98% (1.10x), average score across 6 eval scenarios

Security (by Snyk): Passed. No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/scikit-learn/SKILL.md

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly identifies its domain (scikit-learn ML in Python), lists comprehensive specific capabilities, and includes an explicit 'Use when...' clause with rich trigger terms. It uses proper third-person voice throughout and covers both the 'what' and 'when' dimensions effectively. The description is well-structured and would be easily distinguishable from other skills in a large skill library.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions and domains: supervised learning (classification, regression), unsupervised learning (clustering, dimensionality reduction), model evaluation, hyperparameter tuning, preprocessing, and building ML pipelines. | 3 / 3 |
| Completeness | Clearly answers both 'what' (ML in Python with scikit-learn, reference documentation for algorithms, preprocessing, pipelines, best practices) and 'when' (explicit 'Use when...' clause listing specific trigger scenarios like supervised learning, unsupervised learning, model evaluation, etc.). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: 'machine learning', 'Python', 'scikit-learn', 'classification', 'regression', 'clustering', 'dimensionality reduction', 'model evaluation', 'hyperparameter tuning', 'preprocessing', 'ML pipelines'. These are all terms users naturally use when requesting ML help. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clearly scoped to scikit-learn specifically, with distinct triggers around ML-specific tasks. The combination of 'scikit-learn', 'Python', and specific ML subtasks like 'hyperparameter tuning' and 'ML pipelines' makes it unlikely to conflict with general Python or data analysis skills. | 3 / 3 |
| Total | | 12 / 12 |

Passed

Implementation: 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill provides excellent actionable code examples and good progressive disclosure structure with reference files, but is severely undermined by verbosity. It explains concepts Claude already knows (what classification/regression are, what algorithms exist), duplicates content between sections (Quick Start vs Common Workflows), and includes extensive lists that merely restate scikit-learn's own documentation rather than adding unique value.

Suggestions

- Cut the 'When to Use This Skill' section entirely and remove all 'When to use' subsections — Claude knows when to use classification vs regression.
- Remove the algorithm listing bullets from Core Capabilities (they're just restating sklearn docs) and rely on the reference files for that detail.
- Consolidate Quick Start and Common Workflows into a single section — they cover the same ground with slightly different examples.
- Add explicit validation checkpoints to workflows, e.g., 'Check X_train.shape and y_train.value_counts() before proceeding' and 'Verify cross-validation scores are stable before final evaluation'.
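The suggested validation checkpoints can be sketched as plain assertions woven into a workflow. This is a minimal illustration using scikit-learn's built-in iris dataset; the specific thresholds (imbalance ratio, CV-score spread) are illustrative assumptions, not values from the skill itself.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Checkpoint 1: confirm shapes and class balance before fitting.
assert X_train.shape[0] == y_train.shape[0], "X/y row mismatch"
_, counts = np.unique(y_train, return_counts=True)
assert counts.min() / counts.max() > 0.5, "severe class imbalance"

pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Checkpoint 2: verify cross-validation scores are stable before
# committing to a final evaluation on the held-out test set.
scores = cross_val_score(pipe, X_train, y_train, cv=5)
assert scores.std() < 0.1, f"unstable CV scores: {scores}"

pipe.fit(X_train, y_train)
print(f"test accuracy: {pipe.score(X_test, y_test):.3f}")
```

Checkpoints like these turn silent data problems into immediate, named failures at the step where they matter, rather than surfacing later as puzzling scores.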

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose at roughly 350+ lines. Includes extensive 'When to use' sections explaining concepts Claude already knows (what classification is, what clustering is), lists of algorithm names that merely restate scikit-learn's documentation, an unnecessary 'When to Use This Skill' section, and redundant content across the Quick Start, Common Workflows, and Best Practices sections. The 'Benefits' list for pipelines and many 'When to use' bullets are padding. | 1 / 3 |
| Actionability | Provides fully executable, copy-paste-ready code examples throughout: classification pipelines, clustering analysis, preprocessing, hyperparameter tuning, and troubleshooting with concrete solutions. Code is complete with proper imports and realistic parameters. | 3 / 3 |
| Workflow Clarity | The 'Building a Classification Model' and 'Performing Clustering Analysis' workflows have clear numbered steps with code but lack explicit validation checkpoints. There are no 'verify your data shape', 'check for class imbalance before proceeding', or 'validate pipeline output' steps. The best-practices section partially compensates but is not integrated into the workflows. | 2 / 3 |
| Progressive Disclosure | References to external files (references/*.md, scripts/*.py) are well signaled and one level deep, which is good. However, the main SKILL.md contains far too much inline content that should be in reference files: the full algorithm listings, all preprocessing techniques, and detailed troubleshooting could be offloaded. The Reference Documentation section essentially duplicates the content descriptions in Core Capabilities. | 2 / 3 |
| Total | | 8 / 12 |

Passed
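For context on what the Actionability dimension rewards, here is a minimal, self-contained sketch of the kind of pipeline-plus-tuning example the review credits: a complete script with imports and realistic parameters. The dataset and parameter grid are illustrative choices, not taken from the skill under review.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Scaling lives inside the pipeline so GridSearchCV refits it per fold,
# preventing leakage from validation splits into the scaler.
pipe = Pipeline([("scale", StandardScaler()), ("clf", SVC())])
grid = GridSearchCV(
    pipe,
    param_grid={"clf__C": [0.1, 1, 10], "clf__gamma": ["scale", 0.01]},
    cv=5,
)
grid.fit(X_train, y_train)
print(grid.best_params_)
print(f"test accuracy: {grid.score(X_test, y_test):.3f}")
```

An example like this is copy-paste runnable end to end, which is what earns the 3 / 3 above: nothing is elided, and the preprocessing, estimator, and tuning are composed the way scikit-learn intends.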

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| skill_md_line_count | SKILL.md is long (520 lines); consider splitting into references/ and linking | Warning |
| metadata_version | 'metadata.version' is missing | Warning |
| Total | | 9 / 11 |

Passed

Repository: K-Dense-AI/claude-scientific-skills (reviewed)

