
ml-model-training

Train ML models with scikit-learn, PyTorch, TensorFlow. Use for classification/regression, neural networks, hyperparameter tuning, or encountering overfitting, underfitting, convergence issues.


Quality

86%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues


Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that concisely covers specific frameworks, concrete tasks, and troubleshooting scenarios. It uses third person voice correctly and includes an explicit 'Use for' clause with natural trigger terms. The description is well-structured, covering both the 'what' and 'when' effectively without unnecessary verbosity.

Dimension / Reasoning / Score

Specificity

Lists multiple specific concrete actions and frameworks: 'Train ML models with scikit-learn, PyTorch, TensorFlow' and specific tasks like 'classification/regression, neural networks, hyperparameter tuning' plus troubleshooting scenarios like 'overfitting, underfitting, convergence issues'.

3 / 3

Completeness

Clearly answers both what ('Train ML models with scikit-learn, PyTorch, TensorFlow') and when ('Use for classification/regression, neural networks, hyperparameter tuning, or encountering overfitting, underfitting, convergence issues'). The 'Use for' clause serves as explicit trigger guidance.

3 / 3

Trigger Term Quality

Includes strong natural keywords users would say: 'scikit-learn', 'PyTorch', 'TensorFlow', 'classification', 'regression', 'neural networks', 'hyperparameter tuning', 'overfitting', 'underfitting', 'convergence issues'. These cover both framework names and common ML problem terms.

3 / 3

Distinctiveness / Conflict Risk

Clearly scoped to ML model training with specific frameworks and problem types. The combination of named frameworks (scikit-learn, PyTorch, TensorFlow) and specific ML concepts (hyperparameter tuning, overfitting, convergence) creates a distinct niche unlikely to conflict with other skills.

3 / 3

Total: 12 / 12

Passed

Implementation

72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid ML training skill with excellent actionability through complete, executable code examples across multiple frameworks and good progressive disclosure via well-signaled reference files. The main weaknesses are moderate verbosity in the Known Issues section (explaining concepts Claude already understands) and a lack of explicit validation checkpoints and feedback loops in the multi-step training workflow.

Suggestions

Trim the 'Problem' descriptions in Known Issues to just the pattern name and solution code; Claude already understands what data leakage, overfitting, and class imbalance are.
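For illustration, a trimmed entry could pair each pattern name directly with its fix. The entries and code below are a hypothetical sketch, not taken from the skill itself:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Class imbalance: weight classes inversely to their frequency
clf = LogisticRegression(class_weight="balanced", max_iter=1000)

# Data leakage: fit preprocessing inside a pipeline so scaling
# parameters are learned from training folds only
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
```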

Add explicit validation checkpoints to the workflow, e.g., 'After data preparation, verify shapes and distributions; after training, compare train vs val metrics to detect overfitting before proceeding to test evaluation.'
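A minimal sketch of what such checkpoints might look like in code; the function names and the 0.05 gap threshold are illustrative assumptions, not from the skill:

```python
import numpy as np

def check_split(X_train, y_train, X_val, y_val):
    # Checkpoint after data preparation: shapes line up, and every
    # class seen in training also appears in the validation split
    assert X_train.shape[1] == X_val.shape[1], "feature count mismatch"
    assert len(X_train) == len(y_train) and len(X_val) == len(y_val)
    missing = set(np.unique(y_train)) - set(np.unique(y_val))
    assert not missing, f"classes absent from validation split: {missing}"

def overfitting_gap(train_score, val_score, tol=0.05):
    # Checkpoint after training: a large train/val gap signals
    # overfitting before any test evaluation is run
    return (train_score - val_score) > tol
```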

Dimension / Reasoning / Score

Conciseness

The skill is fairly comprehensive but includes some content Claude already knows well (e.g., explaining what class imbalance is, what overfitting means, basic concepts like 'Complex models memorize training data'). The Known Issues section is quite verbose with problem descriptions that could be trimmed. However, the code examples are mostly lean and useful.

2 / 3

Actionability

The skill provides fully executable, copy-paste ready code examples across scikit-learn, PyTorch, and TensorFlow. Each section includes concrete imports, complete code blocks, and specific parameter values. The known issues section pairs each problem with executable solution code.
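The skill's own code is not reproduced in this review; as a sketch of the style being praised, a self-contained scikit-learn example of this kind might look like:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Complete, runnable pipeline: load, split, scale, train, evaluate
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```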

3 / 3

Workflow Clarity

The high-level workflow (Data Preparation → Feature Engineering → Model Selection → Training → Evaluation) is stated but lacks explicit validation checkpoints between steps. There's no feedback loop for checking data quality after preparation, no validation step after feature engineering, and no explicit 'if metrics are poor, go back to step X' guidance. For a multi-step ML training process, the absence of these checkpoints is notable.
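One hedged way to express such a feedback loop, sketched with hypothetical stage functions rather than the skill's actual workflow:

```python
def run_training_workflow(prepare, engineer, train, evaluate,
                          target=0.9, max_attempts=3):
    # Workflow with an explicit checkpoint: if validation metrics are
    # poor, loop back to feature engineering instead of proceeding
    data = prepare()
    for attempt in range(max_attempts):
        features = engineer(data, attempt)
        model = train(features)
        val_score = evaluate(model, features)
        if val_score >= target:
            return model, val_score  # good enough: move on to test evaluation
    raise RuntimeError("validation score stayed below target; "
                       "revisit data preparation or model selection")
```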

2 / 3

Progressive Disclosure

The skill provides a clear overview with concise inline examples, then appropriately references detailed PyTorch and TensorFlow guides in separate files with clear descriptions of what each contains. The 'When to Load References' section at the bottom provides excellent navigation guidance for when to consult each reference file.

3 / 3

Total: 10 / 12

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

metadata_version

'metadata.version' is missing

Warning

Total: 10 / 11

Passed

Repository: secondsky/claude-skills (Reviewed)
