Train ML models with scikit-learn, PyTorch, TensorFlow. Use for classification/regression, neural networks, hyperparameter tuning, or encountering overfitting, underfitting, convergence issues.
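As a hedged sketch of the kind of task this description covers (not code from the skill itself), here is a minimal scikit-learn classification example with hyperparameter tuning; the dataset, model, and parameter grid are illustrative choices:

```python
# Minimal sketch: classification with hyperparameter tuning in scikit-learn.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic data stands in for a real dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Grid-search over regularization strength (C) to guard against overfitting.
search = GridSearchCV(
    LogisticRegression(max_iter=1000), {"C": [0.01, 0.1, 1, 10]}, cv=5
)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```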
Install with the Tessl CLI:

```shell
npx tessl i github:secondsky/claude-skills --skill ml-model-training
```
Does it follow best practices?
Validation for skill structure
Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that effectively communicates both capabilities and trigger conditions. It names specific frameworks (scikit-learn, PyTorch, TensorFlow), concrete tasks (classification, regression, neural networks), and common problem scenarios (overfitting, underfitting, convergence issues). The description uses appropriate third-person voice and provides clear differentiation from other skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions ('Train ML models', 'classification/regression', 'neural networks', 'hyperparameter tuning') and specific problem types ('overfitting, underfitting, convergence issues'). | 3 / 3 |
| Completeness | Clearly answers both what ('Train ML models with scikit-learn, PyTorch, TensorFlow') and when ('Use for classification/regression, neural networks, hyperparameter tuning, or encountering overfitting, underfitting, convergence issues') with explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Includes natural keywords users would say: 'ML models', 'scikit-learn', 'PyTorch', 'TensorFlow', 'classification', 'regression', 'neural networks', 'hyperparameter tuning', 'overfitting', 'underfitting'. These are terms practitioners use naturally. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clear niche focused on ML model training with specific frameworks and problem types; unlikely to conflict with general data analysis or other coding skills given the explicit ML/deep-learning terminology. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation: 87%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a strong skill with excellent actionability and conciseness, providing executable code examples and practical solutions for common ML training issues. The progressive disclosure is well-implemented with clear references to detailed framework guides. The main weakness is the workflow section, which lists steps but lacks explicit validation checkpoints or feedback loops for catching issues during the training pipeline.
Suggestions
Add explicit validation checkpoints to the workflow, such as 'Verify data shapes after splitting' or 'Check for NaN values in scaled features before training'
Include a feedback loop for model evaluation: 'If validation metrics are poor → check for data issues → adjust hyperparameters → retrain'
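The suggested checkpoints and feedback loop could look something like the following sketch; the threshold, parameter grid, and dataset are hypothetical choices, not part of the skill under review:

```python
# Sketch of the suggested workflow checkpoints: shape checks after splitting,
# NaN checks after scaling, and a retrain loop driven by validation metrics.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=400, n_features=8, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Checkpoint 1: verify data shapes after splitting.
assert X_train.shape[0] == y_train.shape[0]
assert X_train.shape[1] == X_val.shape[1]

scaler = StandardScaler().fit(X_train)
X_train_s, X_val_s = scaler.transform(X_train), scaler.transform(X_val)

# Checkpoint 2: check for NaN values in scaled features before training.
assert not np.isnan(X_train_s).any() and not np.isnan(X_val_s).any()

# Feedback loop: if validation metrics are poor, adjust hyperparameters
# and retrain before proceeding.
for C in [0.01, 0.1, 1.0, 10.0]:
    model = LogisticRegression(C=C, max_iter=1000).fit(X_train_s, y_train)
    score = model.score(X_val_s, y_val)
    if score >= 0.85:  # hypothetical acceptance threshold
        break
print(f"C={C}, validation accuracy={score:.3f}")
```

The point of the loop is that training does not silently proceed with a poor model; in a real pipeline the "adjust" step might also revisit data cleaning or feature engineering rather than only hyperparameters.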
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient, providing executable code without explaining basic concepts Claude already knows. Every section delivers actionable information without padding. | 3 / 3 |
| Actionability | Provides fully executable, copy-paste-ready code examples for data preparation, scikit-learn, and PyTorch training. Includes concrete solutions for common problems with correct/incorrect code comparisons. | 3 / 3 |
| Workflow Clarity | The workflow is listed (Data Preparation → Feature Engineering → Model Selection → Training → Evaluation) but lacks explicit validation checkpoints between steps. No feedback loops for catching training failures or model-quality issues before proceeding. | 2 / 3 |
| Progressive Disclosure | Excellent structure with a clear overview, inline essentials, and well-signaled one-level-deep references to the detailed PyTorch and TensorFlow guides. The 'When to Load References' section provides clear navigation guidance. | 3 / 3 |
| Total | | 11 / 12 Passed |
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing them or moving them to metadata. | Warning |
| Total | | 10 / 11 Passed |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.