# Model Evaluation Metrics

Auto-activating skill for ML Training. Triggers on: model evaluation metrics. Part of the ML Training skill category.

Does it follow best practices? 92%, the average score across 3 eval scenarios (1.00x impact). Passed, with no known issues.

Optimize this skill with Tessl:
```sh
npx tessl skill review --optimize ./planned-skills/generated/07-ml-training/model-evaluation-metrics/SKILL.md
```

## Classification metrics evaluation
| Criterion | Score 1 | Score 2 |
| --- | --- | --- |
| Uses Python | 100% | 100% |
| Uses pip for install | 75% | 0% |
| Uses sklearn metrics | 100% | 100% |
| Accuracy computed | 100% | 100% |
| Precision and Recall | 100% | 100% |
| F1 Score | 100% | 100% |
| ROC-AUC metric | 100% | 100% |
| Confusion matrix | 100% | 100% |
| Output validation | 100% | 100% |
| Production-ready structure | 100% | 100% |
| Data preparation step | 100% | 100% |
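The criteria above describe a fairly standard sklearn workflow. As a minimal sketch of a submission that could satisfy them (assuming a synthetic dataset and a logistic regression model, neither of which is prescribed by the criteria):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split

# Data preparation step: synthetic binary classification data (illustrative).
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = model.predict(X_test)
y_proba = model.predict_proba(X_test)[:, 1]  # class-1 probabilities for ROC-AUC

metrics = {
    "accuracy": accuracy_score(y_test, y_pred),
    "precision": precision_score(y_test, y_pred),
    "recall": recall_score(y_test, y_pred),
    "f1": f1_score(y_test, y_pred),
    "roc_auc": roc_auc_score(y_test, y_proba),
}
print(metrics)
print(confusion_matrix(y_test, y_pred))

# Output validation: every metric should fall in [0, 1].
assert all(0.0 <= v <= 1.0 for v in metrics.values())
```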
## Experiment tracking and model comparison

| Criterion | Score 1 | Score 2 |
| --- | --- | --- |
| Uses Python | 100% | 100% |
| Uses pip or requirements.txt | 50% | 25% |
| Uses sklearn | 100% | 100% |
| Multiple models trained | 100% | 100% |
| Data preparation included | 100% | 100% |
| Experiment tracking structure | 100% | 100% |
| Metrics per experiment | 100% | 100% |
| Comparison output | 100% | 100% |
| Best model identified | 100% | 100% |
| Production-ready structure | 100% | 100% |
| Output validation | 100% | 100% |
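A hedged sketch of what this scenario checks for, assuming a plain list of dicts as the tracking structure and two illustrative models; a real run might log to MLflow or a CSV instead:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# Data preparation included.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Multiple models trained under one experiment-tracking structure.
experiments = []
for name, model in [
    ("logreg", LogisticRegression(max_iter=1000)),
    ("random_forest", RandomForestClassifier(random_state=42)),
]:
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    experiments.append({  # metrics per experiment
        "model": name,
        "accuracy": accuracy_score(y_test, y_pred),
        "f1": f1_score(y_test, y_pred),
    })

# Comparison output, with the best model identified.
for exp in experiments:
    print(exp)
best = max(experiments, key=lambda e: e["f1"])
print("best model:", best["model"])
```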
## Regression metrics and hyperparameter evaluation

| Criterion | Score 1 | Score 2 |
| --- | --- | --- |
| Uses Python | 100% | 100% |
| Uses pip for dependencies | 0% | 0% |
| Uses sklearn | 100% | 100% |
| MSE or RMSE computed | 100% | 100% |
| MAE computed | 100% | 100% |
| R-squared computed | 100% | 100% |
| Hyperparameter variation | 100% | 100% |
| Hyperparameter impact tracked | 100% | 100% |
| Data preparation step | 66% | 66% |
| Production-ready structure | 0% | 100% |
| Structured output | 100% | 100% |
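For the regression scenario, a minimal sketch assuming Ridge regression with a swept `alpha`; the model and the hyperparameter grid are illustrative only:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Data preparation step: synthetic regression data (illustrative).
X, y = make_regression(n_samples=500, n_features=10, noise=10.0,
                       random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Hyperparameter variation, with the impact of each setting tracked.
results = []
for alpha in [0.01, 0.1, 1.0, 10.0]:
    model = Ridge(alpha=alpha).fit(X_train, y_train)
    y_pred = model.predict(X_test)
    mse = mean_squared_error(y_test, y_pred)
    results.append({  # structured output per hyperparameter setting
        "alpha": alpha,
        "mse": mse,
        "rmse": float(np.sqrt(mse)),
        "mae": mean_absolute_error(y_test, y_pred),
        "r2": r2_score(y_test, y_pred),
    })

for row in results:
    print(row)
```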